00:00:00.001 Started by upstream project "autotest-per-patch" build number 126137 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.087 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.088 The recommended git tool is: git 00:00:00.088 using credential 00000000-0000-0000-0000-000000000002 00:00:00.091 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.138 Fetching changes from the remote Git repository 00:00:00.140 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.187 Using shallow fetch with depth 1 00:00:00.187 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.187 > git --version # timeout=10 00:00:00.221 > git --version # 'git version 2.39.2' 00:00:00.222 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.241 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.241 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:08.244 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:08.257 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:08.270 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:08.270 > git config core.sparsecheckout # timeout=10 00:00:08.284 > git read-tree -mu HEAD # timeout=10 00:00:08.302 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:08.321 Commit message: "inventory: add WCP3 to free inventory" 00:00:08.322 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:08.431 [Pipeline] Start of Pipeline 00:00:08.443 [Pipeline] library 00:00:08.445 Loading library shm_lib@master 00:00:08.445 Library shm_lib@master is cached. Copying from home. 00:00:08.464 [Pipeline] node 00:00:08.493 Running on GP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:08.495 [Pipeline] { 00:00:08.509 [Pipeline] catchError 00:00:08.511 [Pipeline] { 00:00:08.525 [Pipeline] wrap 00:00:08.534 [Pipeline] { 00:00:08.540 [Pipeline] stage 00:00:08.542 [Pipeline] { (Prologue) 00:00:08.714 [Pipeline] sh 00:00:09.003 + logger -p user.info -t JENKINS-CI 00:00:09.022 [Pipeline] echo 00:00:09.024 Node: GP8 00:00:09.032 [Pipeline] sh 00:00:09.330 [Pipeline] setCustomBuildProperty 00:00:09.342 [Pipeline] echo 00:00:09.344 Cleanup processes 00:00:09.350 [Pipeline] sh 00:00:09.635 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.635 912214 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.652 [Pipeline] sh 00:00:09.941 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.941 ++ grep -v 'sudo pgrep' 00:00:09.941 ++ awk '{print $1}' 00:00:09.941 + sudo kill -9 00:00:09.941 + true 00:00:09.960 [Pipeline] cleanWs 00:00:09.973 [WS-CLEANUP] Deleting project workspace... 00:00:09.973 [WS-CLEANUP] Deferred wipeout is used... 
00:00:09.980 [WS-CLEANUP] done 00:00:09.985 [Pipeline] setCustomBuildProperty 00:00:10.004 [Pipeline] sh 00:00:10.288 + sudo git config --global --replace-all safe.directory '*' 00:00:10.383 [Pipeline] httpRequest 00:00:10.438 [Pipeline] echo 00:00:10.439 Sorcerer 10.211.164.101 is alive 00:00:10.448 [Pipeline] httpRequest 00:00:10.453 HttpMethod: GET 00:00:10.454 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:10.454 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:10.465 Response Code: HTTP/1.1 200 OK 00:00:10.465 Success: Status code 200 is in the accepted range: 200,404 00:00:10.466 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:13.829 [Pipeline] sh 00:00:14.111 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:14.127 [Pipeline] httpRequest 00:00:14.150 [Pipeline] echo 00:00:14.151 Sorcerer 10.211.164.101 is alive 00:00:14.159 [Pipeline] httpRequest 00:00:14.163 HttpMethod: GET 00:00:14.164 URL: http://10.211.164.101/packages/spdk_d4b4edb37946e80fa08787e705cf918d76f26f9f.tar.gz 00:00:14.165 Sending request to url: http://10.211.164.101/packages/spdk_d4b4edb37946e80fa08787e705cf918d76f26f9f.tar.gz 00:00:14.174 Response Code: HTTP/1.1 200 OK 00:00:14.174 Success: Status code 200 is in the accepted range: 200,404 00:00:14.175 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_d4b4edb37946e80fa08787e705cf918d76f26f9f.tar.gz 00:00:58.101 [Pipeline] sh 00:00:58.390 + tar --no-same-owner -xf spdk_d4b4edb37946e80fa08787e705cf918d76f26f9f.tar.gz 00:01:00.934 [Pipeline] sh 00:01:01.218 + git -C spdk log --oneline -n5 00:01:01.218 d4b4edb37 accel: introduce tasks in sequence limit 00:01:01.218 a0b7842f9 util: rm auto size detect from SPDK_GET_FIELD 00:01:01.218 719d03c6a sock/uring: only register net impl if supported 00:01:01.218 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:01:01.218 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:01:01.230 [Pipeline] } 00:01:01.247 [Pipeline] // stage 00:01:01.257 [Pipeline] stage 00:01:01.260 [Pipeline] { (Prepare) 00:01:01.280 [Pipeline] writeFile 00:01:01.299 [Pipeline] sh 00:01:01.576 + logger -p user.info -t JENKINS-CI 00:01:01.589 [Pipeline] sh 00:01:01.872 + logger -p user.info -t JENKINS-CI 00:01:01.886 [Pipeline] sh 00:01:02.170 + cat autorun-spdk.conf 00:01:02.170 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:02.170 SPDK_TEST_NVMF=1 00:01:02.170 SPDK_TEST_NVME_CLI=1 00:01:02.170 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:02.170 SPDK_TEST_NVMF_NICS=e810 00:01:02.170 SPDK_TEST_VFIOUSER=1 00:01:02.170 SPDK_RUN_UBSAN=1 00:01:02.170 NET_TYPE=phy 00:01:02.177 RUN_NIGHTLY=0 00:01:02.182 [Pipeline] readFile 00:01:02.206 [Pipeline] withEnv 00:01:02.208 [Pipeline] { 00:01:02.221 [Pipeline] sh 00:01:02.505 + set -ex 00:01:02.505 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:02.505 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:02.505 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:02.505 ++ SPDK_TEST_NVMF=1 00:01:02.505 ++ SPDK_TEST_NVME_CLI=1 00:01:02.505 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:02.505 ++ SPDK_TEST_NVMF_NICS=e810 00:01:02.505 ++ SPDK_TEST_VFIOUSER=1 00:01:02.505 ++ SPDK_RUN_UBSAN=1 00:01:02.505 ++ NET_TYPE=phy 00:01:02.505 ++ RUN_NIGHTLY=0 00:01:02.505 + case $SPDK_TEST_NVMF_NICS in 00:01:02.505 + DRIVERS=ice 00:01:02.505 + [[ tcp 
== \r\d\m\a ]] 00:01:02.505 + [[ -n ice ]] 00:01:02.505 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:02.505 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:02.505 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:02.505 rmmod: ERROR: Module irdma is not currently loaded 00:01:02.505 rmmod: ERROR: Module i40iw is not currently loaded 00:01:02.505 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:02.505 + true 00:01:02.505 + for D in $DRIVERS 00:01:02.505 + sudo modprobe ice 00:01:02.505 + exit 0 00:01:02.513 [Pipeline] } 00:01:02.533 [Pipeline] // withEnv 00:01:02.540 [Pipeline] } 00:01:02.558 [Pipeline] // stage 00:01:02.569 [Pipeline] catchError 00:01:02.571 [Pipeline] { 00:01:02.586 [Pipeline] timeout 00:01:02.586 Timeout set to expire in 50 min 00:01:02.588 [Pipeline] { 00:01:02.604 [Pipeline] stage 00:01:02.605 [Pipeline] { (Tests) 00:01:02.622 [Pipeline] sh 00:01:02.907 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:02.907 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:02.907 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:02.907 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:02.907 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:02.907 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:02.907 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:02.907 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:02.907 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:02.907 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:02.907 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:02.907 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:02.907 + source /etc/os-release 00:01:02.907 ++ NAME='Fedora Linux' 00:01:02.907 ++ VERSION='38 (Cloud Edition)' 00:01:02.907 ++ ID=fedora 00:01:02.907 ++ VERSION_ID=38 00:01:02.907 ++ VERSION_CODENAME= 00:01:02.907 ++ PLATFORM_ID=platform:f38 00:01:02.907 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:02.907 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:02.907 ++ LOGO=fedora-logo-icon 00:01:02.907 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:02.907 ++ HOME_URL=https://fedoraproject.org/ 00:01:02.907 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:02.907 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:02.907 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:02.907 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:02.907 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:02.907 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:02.907 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:02.907 ++ SUPPORT_END=2024-05-14 00:01:02.907 ++ VARIANT='Cloud Edition' 00:01:02.907 ++ VARIANT_ID=cloud 00:01:02.907 + uname -a 00:01:02.907 Linux spdk-gp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:02.907 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:03.845 Hugepages 00:01:03.845 node hugesize free / total 00:01:03.845 node0 1048576kB 0 / 0 00:01:04.103 node0 2048kB 0 / 0 00:01:04.103 node1 1048576kB 0 / 0 00:01:04.103 node1 2048kB 0 / 0 00:01:04.103 00:01:04.103 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:04.103 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:04.103 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:04.103 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:04.103 I/OAT 
0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:04.103 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:04.103 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:04.103 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:04.103 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:04.103 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:04.103 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:04.103 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:04.103 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:04.103 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:04.103 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:04.103 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:04.103 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:04.103 NVMe 0000:82:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:04.103 + rm -f /tmp/spdk-ld-path 00:01:04.103 + source autorun-spdk.conf 00:01:04.103 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:04.103 ++ SPDK_TEST_NVMF=1 00:01:04.103 ++ SPDK_TEST_NVME_CLI=1 00:01:04.103 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:04.103 ++ SPDK_TEST_NVMF_NICS=e810 00:01:04.103 ++ SPDK_TEST_VFIOUSER=1 00:01:04.103 ++ SPDK_RUN_UBSAN=1 00:01:04.103 ++ NET_TYPE=phy 00:01:04.103 ++ RUN_NIGHTLY=0 00:01:04.103 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:04.103 + [[ -n '' ]] 00:01:04.103 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:04.103 + for M in /var/spdk/build-*-manifest.txt 00:01:04.103 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:04.103 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:04.103 + for M in /var/spdk/build-*-manifest.txt 00:01:04.103 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:04.103 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:04.103 ++ uname 00:01:04.103 + [[ Linux == \L\i\n\u\x ]] 00:01:04.103 + sudo dmesg -T 00:01:04.103 + sudo dmesg --clear 00:01:04.103 + dmesg_pid=912893 00:01:04.103 + [[ Fedora Linux == FreeBSD ]] 00:01:04.103 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:04.103 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:04.103 + sudo dmesg -Tw 00:01:04.103 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:04.103 + [[ -x /usr/src/fio-static/fio ]] 00:01:04.103 + export FIO_BIN=/usr/src/fio-static/fio 00:01:04.103 + FIO_BIN=/usr/src/fio-static/fio 00:01:04.103 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:04.103 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:04.103 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:04.103 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:04.103 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:04.103 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:04.103 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:04.103 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:04.103 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:04.103 Test configuration: 00:01:04.103 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:04.103 SPDK_TEST_NVMF=1 00:01:04.103 SPDK_TEST_NVME_CLI=1 00:01:04.103 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:04.103 SPDK_TEST_NVMF_NICS=e810 00:01:04.103 SPDK_TEST_VFIOUSER=1 00:01:04.103 SPDK_RUN_UBSAN=1 00:01:04.103 NET_TYPE=phy 00:01:04.103 RUN_NIGHTLY=0 16:50:03 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:04.103 16:50:03 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:04.103 16:50:03 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:04.103 16:50:03 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:04.103 16:50:03 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:04.103 16:50:03 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:04.104 16:50:03 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:04.104 16:50:03 -- paths/export.sh@5 -- $ export PATH 00:01:04.104 16:50:03 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:04.362 16:50:03 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:04.362 16:50:03 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:04.362 16:50:03 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720795803.XXXXXX 00:01:04.362 16:50:03 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720795803.4uB8iz 00:01:04.362 16:50:03 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:04.362 16:50:03 -- 
common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:04.362 16:50:03 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:04.362 16:50:03 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:04.362 16:50:03 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:04.362 16:50:03 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:04.362 16:50:03 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:04.362 16:50:03 -- common/autotest_common.sh@10 -- $ set +x 00:01:04.362 16:50:03 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:04.362 16:50:03 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:04.362 16:50:03 -- pm/common@17 -- $ local monitor 00:01:04.362 16:50:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:04.362 16:50:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:04.362 16:50:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:04.362 16:50:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:04.362 16:50:03 -- pm/common@21 -- $ date +%s 00:01:04.362 16:50:03 -- pm/common@21 -- $ date +%s 00:01:04.362 16:50:03 -- pm/common@25 -- $ sleep 1 00:01:04.362 16:50:03 -- pm/common@21 -- $ date +%s 00:01:04.362 16:50:03 -- pm/common@21 -- $ date +%s 00:01:04.362 16:50:03 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720795803 00:01:04.362 16:50:03 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720795803 00:01:04.362 16:50:03 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720795803 00:01:04.362 16:50:03 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720795803 00:01:04.362 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720795803_collect-vmstat.pm.log 00:01:04.362 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720795803_collect-cpu-load.pm.log 00:01:04.362 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720795803_collect-cpu-temp.pm.log 00:01:04.362 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720795803_collect-bmc-pm.bmc.pm.log 00:01:05.299 16:50:04 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:05.299 16:50:04 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:05.299 16:50:04 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:05.299 16:50:04 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:05.299 16:50:04 -- spdk/autobuild.sh@16 -- $ date -u 00:01:05.299 Fri Jul 12 02:50:04 PM UTC 2024 00:01:05.299 16:50:04 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:05.299 v24.09-pre-204-gd4b4edb37 00:01:05.299 16:50:04 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:05.299 16:50:04 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:05.299 16:50:04 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:05.299 16:50:04 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:05.299 16:50:04 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:05.299 16:50:04 -- common/autotest_common.sh@10 -- $ set +x 00:01:05.299 ************************************ 00:01:05.299 START TEST ubsan 00:01:05.299 ************************************ 00:01:05.299 16:50:04 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:05.299 using ubsan 00:01:05.299 00:01:05.299 real 0m0.000s 00:01:05.299 user 0m0.000s 00:01:05.299 sys 0m0.000s 00:01:05.299 16:50:04 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:05.299 16:50:04 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:05.299 ************************************ 00:01:05.299 END TEST ubsan 00:01:05.299 ************************************ 00:01:05.299 16:50:04 -- common/autotest_common.sh@1142 -- $ return 0 00:01:05.299 16:50:04 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:05.300 16:50:04 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:05.300 16:50:04 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:05.300 16:50:04 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:05.300 16:50:04 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:05.300 16:50:04 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:05.300 16:50:04 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:05.300 16:50:04 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:05.300 16:50:04 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:05.300 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:05.300 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:05.865 Using 'verbs' RDMA provider 00:01:16.428 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:26.461 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:26.461 Creating mk/config.mk...done. 00:01:26.461 Creating mk/cc.flags.mk...done. 00:01:26.461 Type 'make' to build. 00:01:26.461 16:50:25 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:01:26.461 16:50:25 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:26.461 16:50:25 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:26.461 16:50:25 -- common/autotest_common.sh@10 -- $ set +x 00:01:26.461 ************************************ 00:01:26.461 START TEST make 00:01:26.461 ************************************ 00:01:26.461 16:50:25 make -- common/autotest_common.sh@1123 -- $ make -j48 00:01:26.461 make[1]: Nothing to be done for 'all'. 
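For reference, a minimal sketch of reproducing the configure-and-build step logged above by hand. The feature flags are copied verbatim from the configure invocation in this log, the -j48 parallelism from the "run_test make make -j48" call, and the workspace path is specific to this CI node; this is not the exact script the CI runs.

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Same feature flags autobuild.sh passed to configure in this run
  ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
    --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
  # Build with the same parallelism as the run_test invocation above
  make -j48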
00:01:27.848 The Meson build system 00:01:27.848 Version: 1.3.1 00:01:27.848 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:27.848 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:27.848 Build type: native build 00:01:27.848 Project name: libvfio-user 00:01:27.848 Project version: 0.0.1 00:01:27.848 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:27.848 C linker for the host machine: cc ld.bfd 2.39-16 00:01:27.848 Host machine cpu family: x86_64 00:01:27.848 Host machine cpu: x86_64 00:01:27.848 Run-time dependency threads found: YES 00:01:27.848 Library dl found: YES 00:01:27.848 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:27.848 Run-time dependency json-c found: YES 0.17 00:01:27.848 Run-time dependency cmocka found: YES 1.1.7 00:01:27.848 Program pytest-3 found: NO 00:01:27.848 Program flake8 found: NO 00:01:27.848 Program misspell-fixer found: NO 00:01:27.848 Program restructuredtext-lint found: NO 00:01:27.848 Program valgrind found: YES (/usr/bin/valgrind) 00:01:27.848 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:27.848 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:27.848 Compiler for C supports arguments -Wwrite-strings: YES 00:01:27.848 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:27.848 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:27.848 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:27.848 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:27.848 Build targets in project: 8 00:01:27.848 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:27.848 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:27.848 00:01:27.848 libvfio-user 0.0.1 00:01:27.848 00:01:27.848 User defined options 00:01:27.848 buildtype : debug 00:01:27.848 default_library: shared 00:01:27.848 libdir : /usr/local/lib 00:01:27.848 00:01:27.848 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:28.795 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:28.795 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:28.795 [2/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:28.795 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:28.795 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:28.795 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:28.795 [6/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:28.795 [7/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:28.795 [8/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:28.795 [9/37] Compiling C object samples/null.p/null.c.o 00:01:28.795 [10/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:28.795 [11/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:28.795 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:28.795 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:28.795 [14/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:28.795 [15/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:28.795 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:28.795 [17/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:29.056 [18/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:29.057 [19/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:29.057 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:29.057 [21/37] Compiling C object samples/server.p/server.c.o 00:01:29.057 [22/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:29.057 [23/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:29.057 [24/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:29.057 [25/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:29.057 [26/37] Compiling C object samples/client.p/client.c.o 00:01:29.057 [27/37] Linking target samples/client 00:01:29.057 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:29.057 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:29.057 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:01:29.320 [31/37] Linking target test/unit_tests 00:01:29.320 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:29.320 [33/37] Linking target samples/null 00:01:29.320 [34/37] Linking target samples/shadow_ioeventfd_server 00:01:29.320 [35/37] Linking target samples/server 00:01:29.320 [36/37] Linking target samples/lspci 00:01:29.320 [37/37] Linking target samples/gpio-pci-idio-16 00:01:29.580 INFO: autodetecting backend as ninja 00:01:29.580 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
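A hedged sketch of the libvfio-user Meson/Ninja sequence visible above and in the install line that follows: an out-of-tree debug build under spdk/build/libvfio-user/build-debug, then a staged install via DESTDIR. The exact meson setup invocation is not printed in the log, so the option values here are inferred from the "User defined options" summary and may differ from what SPDK's build scripts actually pass.

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Configure an out-of-tree debug build of the bundled libvfio-user
  # (buildtype, default_library and libdir inferred from the summary above)
  meson setup build/libvfio-user/build-debug libvfio-user \
    --buildtype=debug --default-library=shared --libdir=/usr/local/lib
  # Compile, then stage the install under build/libvfio-user as the next log entry does
  ninja -C build/libvfio-user/build-debug
  DESTDIR="$PWD/build/libvfio-user" meson install --quiet -C build/libvfio-user/build-debug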
00:01:29.580 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:30.157 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:30.157 ninja: no work to do. 00:01:35.434 The Meson build system 00:01:35.434 Version: 1.3.1 00:01:35.434 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:35.434 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:35.434 Build type: native build 00:01:35.434 Program cat found: YES (/usr/bin/cat) 00:01:35.434 Project name: DPDK 00:01:35.435 Project version: 24.03.0 00:01:35.435 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:35.435 C linker for the host machine: cc ld.bfd 2.39-16 00:01:35.435 Host machine cpu family: x86_64 00:01:35.435 Host machine cpu: x86_64 00:01:35.435 Message: ## Building in Developer Mode ## 00:01:35.435 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:35.435 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:35.435 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:35.435 Program python3 found: YES (/usr/bin/python3) 00:01:35.435 Program cat found: YES (/usr/bin/cat) 00:01:35.435 Compiler for C supports arguments -march=native: YES 00:01:35.435 Checking for size of "void *" : 8 00:01:35.435 Checking for size of "void *" : 8 (cached) 00:01:35.435 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:35.435 Library m found: YES 00:01:35.435 Library numa found: YES 00:01:35.435 Has header "numaif.h" : YES 00:01:35.435 Library fdt found: NO 00:01:35.435 Library execinfo found: NO 00:01:35.435 Has header "execinfo.h" : YES 00:01:35.435 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:35.435 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:35.435 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:35.435 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:35.435 Run-time dependency openssl found: YES 3.0.9 00:01:35.435 Run-time dependency libpcap found: YES 1.10.4 00:01:35.435 Has header "pcap.h" with dependency libpcap: YES 00:01:35.435 Compiler for C supports arguments -Wcast-qual: YES 00:01:35.435 Compiler for C supports arguments -Wdeprecated: YES 00:01:35.435 Compiler for C supports arguments -Wformat: YES 00:01:35.435 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:35.435 Compiler for C supports arguments -Wformat-security: NO 00:01:35.435 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:35.435 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:35.435 Compiler for C supports arguments -Wnested-externs: YES 00:01:35.435 Compiler for C supports arguments -Wold-style-definition: YES 00:01:35.435 Compiler for C supports arguments -Wpointer-arith: YES 00:01:35.435 Compiler for C supports arguments -Wsign-compare: YES 00:01:35.435 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:35.435 Compiler for C supports arguments -Wundef: YES 00:01:35.435 Compiler for C supports arguments -Wwrite-strings: YES 00:01:35.435 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:35.435 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:01:35.435 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:35.435 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:35.435 Program objdump found: YES (/usr/bin/objdump) 00:01:35.435 Compiler for C supports arguments -mavx512f: YES 00:01:35.435 Checking if "AVX512 checking" compiles: YES 00:01:35.435 Fetching value of define "__SSE4_2__" : 1 00:01:35.435 Fetching value of define "__AES__" : 1 00:01:35.435 Fetching value of define "__AVX__" : 1 00:01:35.435 Fetching value of define "__AVX2__" : (undefined) 00:01:35.435 Fetching value of define "__AVX512BW__" : (undefined) 00:01:35.435 Fetching value of define "__AVX512CD__" : (undefined) 00:01:35.435 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:35.435 Fetching value of define "__AVX512F__" : (undefined) 00:01:35.435 Fetching value of define "__AVX512VL__" : (undefined) 00:01:35.435 Fetching value of define "__PCLMUL__" : 1 00:01:35.435 Fetching value of define "__RDRND__" : 1 00:01:35.435 Fetching value of define "__RDSEED__" : (undefined) 00:01:35.435 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:35.435 Fetching value of define "__znver1__" : (undefined) 00:01:35.435 Fetching value of define "__znver2__" : (undefined) 00:01:35.435 Fetching value of define "__znver3__" : (undefined) 00:01:35.435 Fetching value of define "__znver4__" : (undefined) 00:01:35.435 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:35.435 Message: lib/log: Defining dependency "log" 00:01:35.435 Message: lib/kvargs: Defining dependency "kvargs" 00:01:35.435 Message: lib/telemetry: Defining dependency "telemetry" 00:01:35.435 Checking for function "getentropy" : NO 00:01:35.435 Message: lib/eal: Defining dependency "eal" 00:01:35.435 Message: lib/ring: Defining dependency "ring" 00:01:35.435 Message: lib/rcu: Defining dependency "rcu" 00:01:35.435 Message: lib/mempool: Defining dependency "mempool" 00:01:35.435 Message: lib/mbuf: Defining dependency "mbuf" 00:01:35.435 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:35.435 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:35.435 Compiler for C supports arguments -mpclmul: YES 00:01:35.435 Compiler for C supports arguments -maes: YES 00:01:35.435 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:35.435 Compiler for C supports arguments -mavx512bw: YES 00:01:35.435 Compiler for C supports arguments -mavx512dq: YES 00:01:35.435 Compiler for C supports arguments -mavx512vl: YES 00:01:35.435 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:35.435 Compiler for C supports arguments -mavx2: YES 00:01:35.435 Compiler for C supports arguments -mavx: YES 00:01:35.435 Message: lib/net: Defining dependency "net" 00:01:35.435 Message: lib/meter: Defining dependency "meter" 00:01:35.435 Message: lib/ethdev: Defining dependency "ethdev" 00:01:35.435 Message: lib/pci: Defining dependency "pci" 00:01:35.435 Message: lib/cmdline: Defining dependency "cmdline" 00:01:35.435 Message: lib/hash: Defining dependency "hash" 00:01:35.435 Message: lib/timer: Defining dependency "timer" 00:01:35.435 Message: lib/compressdev: Defining dependency "compressdev" 00:01:35.435 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:35.435 Message: lib/dmadev: Defining dependency "dmadev" 00:01:35.435 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:35.435 Message: lib/power: Defining dependency "power" 00:01:35.435 Message: lib/reorder: Defining dependency "reorder" 00:01:35.435 
Message: lib/security: Defining dependency "security" 00:01:35.435 Has header "linux/userfaultfd.h" : YES 00:01:35.435 Has header "linux/vduse.h" : YES 00:01:35.435 Message: lib/vhost: Defining dependency "vhost" 00:01:35.435 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:35.435 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:35.435 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:35.435 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:35.435 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:35.435 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:35.435 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:35.435 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:35.435 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:35.435 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:35.435 Program doxygen found: YES (/usr/bin/doxygen) 00:01:35.435 Configuring doxy-api-html.conf using configuration 00:01:35.435 Configuring doxy-api-man.conf using configuration 00:01:35.435 Program mandb found: YES (/usr/bin/mandb) 00:01:35.435 Program sphinx-build found: NO 00:01:35.435 Configuring rte_build_config.h using configuration 00:01:35.435 Message: 00:01:35.435 ================= 00:01:35.435 Applications Enabled 00:01:35.435 ================= 00:01:35.435 00:01:35.435 apps: 00:01:35.435 00:01:35.435 00:01:35.435 Message: 00:01:35.435 ================= 00:01:35.435 Libraries Enabled 00:01:35.435 ================= 00:01:35.435 00:01:35.435 libs: 00:01:35.435 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:35.435 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:35.435 cryptodev, dmadev, power, reorder, security, vhost, 00:01:35.435 00:01:35.435 Message: 00:01:35.435 =============== 00:01:35.435 Drivers Enabled 00:01:35.435 =============== 00:01:35.435 00:01:35.435 common: 00:01:35.435 00:01:35.435 bus: 00:01:35.435 pci, vdev, 00:01:35.435 mempool: 00:01:35.435 ring, 00:01:35.435 dma: 00:01:35.435 00:01:35.435 net: 00:01:35.435 00:01:35.435 crypto: 00:01:35.435 00:01:35.435 compress: 00:01:35.435 00:01:35.435 vdpa: 00:01:35.435 00:01:35.435 00:01:35.435 Message: 00:01:35.435 ================= 00:01:35.435 Content Skipped 00:01:35.435 ================= 00:01:35.435 00:01:35.435 apps: 00:01:35.435 dumpcap: explicitly disabled via build config 00:01:35.435 graph: explicitly disabled via build config 00:01:35.435 pdump: explicitly disabled via build config 00:01:35.435 proc-info: explicitly disabled via build config 00:01:35.435 test-acl: explicitly disabled via build config 00:01:35.435 test-bbdev: explicitly disabled via build config 00:01:35.435 test-cmdline: explicitly disabled via build config 00:01:35.435 test-compress-perf: explicitly disabled via build config 00:01:35.435 test-crypto-perf: explicitly disabled via build config 00:01:35.435 test-dma-perf: explicitly disabled via build config 00:01:35.435 test-eventdev: explicitly disabled via build config 00:01:35.435 test-fib: explicitly disabled via build config 00:01:35.435 test-flow-perf: explicitly disabled via build config 00:01:35.435 test-gpudev: explicitly disabled via build config 00:01:35.435 test-mldev: explicitly disabled via build config 00:01:35.435 test-pipeline: explicitly disabled via build config 00:01:35.435 test-pmd: explicitly disabled via build config 
00:01:35.435 test-regex: explicitly disabled via build config 00:01:35.435 test-sad: explicitly disabled via build config 00:01:35.435 test-security-perf: explicitly disabled via build config 00:01:35.435 00:01:35.435 libs: 00:01:35.435 argparse: explicitly disabled via build config 00:01:35.435 metrics: explicitly disabled via build config 00:01:35.435 acl: explicitly disabled via build config 00:01:35.435 bbdev: explicitly disabled via build config 00:01:35.435 bitratestats: explicitly disabled via build config 00:01:35.435 bpf: explicitly disabled via build config 00:01:35.435 cfgfile: explicitly disabled via build config 00:01:35.435 distributor: explicitly disabled via build config 00:01:35.435 efd: explicitly disabled via build config 00:01:35.435 eventdev: explicitly disabled via build config 00:01:35.435 dispatcher: explicitly disabled via build config 00:01:35.435 gpudev: explicitly disabled via build config 00:01:35.435 gro: explicitly disabled via build config 00:01:35.435 gso: explicitly disabled via build config 00:01:35.435 ip_frag: explicitly disabled via build config 00:01:35.435 jobstats: explicitly disabled via build config 00:01:35.435 latencystats: explicitly disabled via build config 00:01:35.435 lpm: explicitly disabled via build config 00:01:35.436 member: explicitly disabled via build config 00:01:35.436 pcapng: explicitly disabled via build config 00:01:35.436 rawdev: explicitly disabled via build config 00:01:35.436 regexdev: explicitly disabled via build config 00:01:35.436 mldev: explicitly disabled via build config 00:01:35.436 rib: explicitly disabled via build config 00:01:35.436 sched: explicitly disabled via build config 00:01:35.436 stack: explicitly disabled via build config 00:01:35.436 ipsec: explicitly disabled via build config 00:01:35.436 pdcp: explicitly disabled via build config 00:01:35.436 fib: explicitly disabled via build config 00:01:35.436 port: explicitly disabled via build config 00:01:35.436 pdump: explicitly disabled via build config 00:01:35.436 table: explicitly disabled via build config 00:01:35.436 pipeline: explicitly disabled via build config 00:01:35.436 graph: explicitly disabled via build config 00:01:35.436 node: explicitly disabled via build config 00:01:35.436 00:01:35.436 drivers: 00:01:35.436 common/cpt: not in enabled drivers build config 00:01:35.436 common/dpaax: not in enabled drivers build config 00:01:35.436 common/iavf: not in enabled drivers build config 00:01:35.436 common/idpf: not in enabled drivers build config 00:01:35.436 common/ionic: not in enabled drivers build config 00:01:35.436 common/mvep: not in enabled drivers build config 00:01:35.436 common/octeontx: not in enabled drivers build config 00:01:35.436 bus/auxiliary: not in enabled drivers build config 00:01:35.436 bus/cdx: not in enabled drivers build config 00:01:35.436 bus/dpaa: not in enabled drivers build config 00:01:35.436 bus/fslmc: not in enabled drivers build config 00:01:35.436 bus/ifpga: not in enabled drivers build config 00:01:35.436 bus/platform: not in enabled drivers build config 00:01:35.436 bus/uacce: not in enabled drivers build config 00:01:35.436 bus/vmbus: not in enabled drivers build config 00:01:35.436 common/cnxk: not in enabled drivers build config 00:01:35.436 common/mlx5: not in enabled drivers build config 00:01:35.436 common/nfp: not in enabled drivers build config 00:01:35.436 common/nitrox: not in enabled drivers build config 00:01:35.436 common/qat: not in enabled drivers build config 00:01:35.436 common/sfc_efx: not in 
enabled drivers build config 00:01:35.436 mempool/bucket: not in enabled drivers build config 00:01:35.436 mempool/cnxk: not in enabled drivers build config 00:01:35.436 mempool/dpaa: not in enabled drivers build config 00:01:35.436 mempool/dpaa2: not in enabled drivers build config 00:01:35.436 mempool/octeontx: not in enabled drivers build config 00:01:35.436 mempool/stack: not in enabled drivers build config 00:01:35.436 dma/cnxk: not in enabled drivers build config 00:01:35.436 dma/dpaa: not in enabled drivers build config 00:01:35.436 dma/dpaa2: not in enabled drivers build config 00:01:35.436 dma/hisilicon: not in enabled drivers build config 00:01:35.436 dma/idxd: not in enabled drivers build config 00:01:35.436 dma/ioat: not in enabled drivers build config 00:01:35.436 dma/skeleton: not in enabled drivers build config 00:01:35.436 net/af_packet: not in enabled drivers build config 00:01:35.436 net/af_xdp: not in enabled drivers build config 00:01:35.436 net/ark: not in enabled drivers build config 00:01:35.436 net/atlantic: not in enabled drivers build config 00:01:35.436 net/avp: not in enabled drivers build config 00:01:35.436 net/axgbe: not in enabled drivers build config 00:01:35.436 net/bnx2x: not in enabled drivers build config 00:01:35.436 net/bnxt: not in enabled drivers build config 00:01:35.436 net/bonding: not in enabled drivers build config 00:01:35.436 net/cnxk: not in enabled drivers build config 00:01:35.436 net/cpfl: not in enabled drivers build config 00:01:35.436 net/cxgbe: not in enabled drivers build config 00:01:35.436 net/dpaa: not in enabled drivers build config 00:01:35.436 net/dpaa2: not in enabled drivers build config 00:01:35.436 net/e1000: not in enabled drivers build config 00:01:35.436 net/ena: not in enabled drivers build config 00:01:35.436 net/enetc: not in enabled drivers build config 00:01:35.436 net/enetfec: not in enabled drivers build config 00:01:35.436 net/enic: not in enabled drivers build config 00:01:35.436 net/failsafe: not in enabled drivers build config 00:01:35.436 net/fm10k: not in enabled drivers build config 00:01:35.436 net/gve: not in enabled drivers build config 00:01:35.436 net/hinic: not in enabled drivers build config 00:01:35.436 net/hns3: not in enabled drivers build config 00:01:35.436 net/i40e: not in enabled drivers build config 00:01:35.436 net/iavf: not in enabled drivers build config 00:01:35.436 net/ice: not in enabled drivers build config 00:01:35.436 net/idpf: not in enabled drivers build config 00:01:35.436 net/igc: not in enabled drivers build config 00:01:35.436 net/ionic: not in enabled drivers build config 00:01:35.436 net/ipn3ke: not in enabled drivers build config 00:01:35.436 net/ixgbe: not in enabled drivers build config 00:01:35.436 net/mana: not in enabled drivers build config 00:01:35.436 net/memif: not in enabled drivers build config 00:01:35.436 net/mlx4: not in enabled drivers build config 00:01:35.436 net/mlx5: not in enabled drivers build config 00:01:35.436 net/mvneta: not in enabled drivers build config 00:01:35.436 net/mvpp2: not in enabled drivers build config 00:01:35.436 net/netvsc: not in enabled drivers build config 00:01:35.436 net/nfb: not in enabled drivers build config 00:01:35.436 net/nfp: not in enabled drivers build config 00:01:35.436 net/ngbe: not in enabled drivers build config 00:01:35.436 net/null: not in enabled drivers build config 00:01:35.436 net/octeontx: not in enabled drivers build config 00:01:35.436 net/octeon_ep: not in enabled drivers build config 00:01:35.436 
net/pcap: not in enabled drivers build config 00:01:35.436 net/pfe: not in enabled drivers build config 00:01:35.436 net/qede: not in enabled drivers build config 00:01:35.436 net/ring: not in enabled drivers build config 00:01:35.436 net/sfc: not in enabled drivers build config 00:01:35.436 net/softnic: not in enabled drivers build config 00:01:35.436 net/tap: not in enabled drivers build config 00:01:35.436 net/thunderx: not in enabled drivers build config 00:01:35.436 net/txgbe: not in enabled drivers build config 00:01:35.436 net/vdev_netvsc: not in enabled drivers build config 00:01:35.436 net/vhost: not in enabled drivers build config 00:01:35.436 net/virtio: not in enabled drivers build config 00:01:35.436 net/vmxnet3: not in enabled drivers build config 00:01:35.436 raw/*: missing internal dependency, "rawdev" 00:01:35.436 crypto/armv8: not in enabled drivers build config 00:01:35.436 crypto/bcmfs: not in enabled drivers build config 00:01:35.436 crypto/caam_jr: not in enabled drivers build config 00:01:35.436 crypto/ccp: not in enabled drivers build config 00:01:35.436 crypto/cnxk: not in enabled drivers build config 00:01:35.436 crypto/dpaa_sec: not in enabled drivers build config 00:01:35.436 crypto/dpaa2_sec: not in enabled drivers build config 00:01:35.436 crypto/ipsec_mb: not in enabled drivers build config 00:01:35.436 crypto/mlx5: not in enabled drivers build config 00:01:35.436 crypto/mvsam: not in enabled drivers build config 00:01:35.436 crypto/nitrox: not in enabled drivers build config 00:01:35.436 crypto/null: not in enabled drivers build config 00:01:35.436 crypto/octeontx: not in enabled drivers build config 00:01:35.436 crypto/openssl: not in enabled drivers build config 00:01:35.436 crypto/scheduler: not in enabled drivers build config 00:01:35.436 crypto/uadk: not in enabled drivers build config 00:01:35.436 crypto/virtio: not in enabled drivers build config 00:01:35.436 compress/isal: not in enabled drivers build config 00:01:35.436 compress/mlx5: not in enabled drivers build config 00:01:35.436 compress/nitrox: not in enabled drivers build config 00:01:35.436 compress/octeontx: not in enabled drivers build config 00:01:35.436 compress/zlib: not in enabled drivers build config 00:01:35.436 regex/*: missing internal dependency, "regexdev" 00:01:35.436 ml/*: missing internal dependency, "mldev" 00:01:35.436 vdpa/ifc: not in enabled drivers build config 00:01:35.436 vdpa/mlx5: not in enabled drivers build config 00:01:35.436 vdpa/nfp: not in enabled drivers build config 00:01:35.436 vdpa/sfc: not in enabled drivers build config 00:01:35.436 event/*: missing internal dependency, "eventdev" 00:01:35.436 baseband/*: missing internal dependency, "bbdev" 00:01:35.436 gpu/*: missing internal dependency, "gpudev" 00:01:35.436 00:01:35.436 00:01:35.436 Build targets in project: 85 00:01:35.436 00:01:35.436 DPDK 24.03.0 00:01:35.436 00:01:35.436 User defined options 00:01:35.436 buildtype : debug 00:01:35.436 default_library : shared 00:01:35.436 libdir : lib 00:01:35.436 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:35.436 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:35.436 c_link_args : 00:01:35.436 cpu_instruction_set: native 00:01:35.436 disable_apps : 
test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:01:35.436 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:01:35.436 enable_docs : false 00:01:35.436 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:35.436 enable_kmods : false 00:01:35.436 max_lcores : 128 00:01:35.436 tests : false 00:01:35.436 00:01:35.436 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:35.436 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:35.436 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:35.436 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:35.436 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:35.436 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:35.436 [5/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:35.436 [6/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:35.436 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:35.436 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:35.436 [9/268] Linking static target lib/librte_kvargs.a 00:01:35.436 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:35.699 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:35.699 [12/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:35.699 [13/268] Linking static target lib/librte_log.a 00:01:35.699 [14/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:35.699 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:35.699 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:36.271 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.271 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:36.271 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:36.271 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:36.271 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:36.271 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:36.271 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:36.271 [24/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:36.530 [25/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:36.530 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:36.530 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:36.530 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:36.530 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:36.530 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 
00:01:36.530 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:36.530 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:36.530 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:36.530 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:36.530 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:36.530 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:36.530 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:36.530 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:36.530 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:36.530 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:36.530 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:36.530 [42/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:36.530 [43/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:36.530 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:36.530 [45/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:36.530 [46/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:36.530 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:36.530 [48/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:36.530 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:36.530 [50/268] Linking static target lib/librte_telemetry.a 00:01:36.530 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:36.530 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:36.530 [53/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:36.530 [54/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:36.530 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:36.530 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:36.530 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:36.530 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:36.530 [59/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:36.530 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:36.530 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:36.794 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:36.794 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:36.794 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:36.794 [65/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.794 [66/268] Linking target lib/librte_log.so.24.1 00:01:37.055 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:37.055 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:37.055 [69/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:37.055 [70/268] Linking static target lib/librte_pci.a 00:01:37.314 [71/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:37.314 [72/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:37.314 [73/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:37.314 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:37.314 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:37.314 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:37.314 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:37.314 [78/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:37.314 [79/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:37.314 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:37.314 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:37.314 [82/268] Linking target lib/librte_kvargs.so.24.1 00:01:37.314 [83/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:37.314 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:37.314 [85/268] Linking static target lib/librte_ring.a 00:01:37.314 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:37.314 [87/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:37.314 [88/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:37.314 [89/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:37.578 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:37.578 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:37.578 [92/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:37.578 [93/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:37.578 [94/268] Linking static target lib/librte_meter.a 00:01:37.578 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:37.578 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:37.578 [97/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:37.578 [98/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.578 [99/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:37.578 [100/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.578 [101/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:37.578 [102/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:37.578 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:37.578 [104/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:37.578 [105/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:37.578 [106/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:37.578 [107/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:37.578 [108/268] Linking target lib/librte_telemetry.so.24.1 00:01:37.578 [109/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:37.578 [110/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:37.578 [111/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:37.578 [112/268] 
Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:37.578 [113/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:37.578 [114/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:37.578 [115/268] Linking static target lib/librte_eal.a 00:01:37.578 [116/268] Linking static target lib/librte_mempool.a 00:01:37.578 [117/268] Linking static target lib/librte_rcu.a 00:01:37.578 [118/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:37.578 [119/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:37.578 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:37.578 [121/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:37.922 [122/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:37.922 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:37.922 [124/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:37.922 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:37.922 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:37.922 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:37.922 [128/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:37.922 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:37.922 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:37.922 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:37.922 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:37.922 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:37.922 [134/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:37.923 [135/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.923 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:38.181 [137/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.182 [138/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:38.182 [139/268] Linking static target lib/librte_net.a 00:01:38.182 [140/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:38.182 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:38.182 [142/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.446 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:38.446 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:38.446 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:38.446 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:38.446 [147/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:38.446 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:38.446 [149/268] Linking static target lib/librte_cmdline.a 00:01:38.446 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:38.446 [151/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:38.446 [152/268] Compiling C object 
lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:38.446 [153/268] Linking static target lib/librte_timer.a 00:01:38.446 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:38.705 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:38.705 [156/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:38.705 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:38.705 [158/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.705 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:38.705 [160/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:38.705 [161/268] Linking static target lib/librte_dmadev.a 00:01:38.705 [162/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:38.705 [163/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:38.705 [164/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:38.705 [165/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.705 [166/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:38.705 [167/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:38.962 [168/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:38.962 [169/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:38.962 [170/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:38.962 [171/268] Linking static target lib/librte_compressdev.a 00:01:38.962 [172/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:38.962 [173/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:38.962 [174/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.962 [175/268] Linking static target lib/librte_power.a 00:01:38.962 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:38.962 [177/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:38.962 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:38.962 [179/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:38.962 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:38.962 [181/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:38.962 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:39.218 [183/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:39.218 [184/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:39.218 [185/268] Linking static target lib/librte_hash.a 00:01:39.218 [186/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:39.218 [187/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:39.218 [188/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:39.218 [189/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.218 [190/268] Linking static target lib/librte_mbuf.a 00:01:39.218 [191/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:39.218 [192/268] Linking static target 
lib/librte_reorder.a 00:01:39.218 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:39.218 [194/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:39.219 [195/268] Linking static target lib/librte_security.a 00:01:39.219 [196/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.219 [197/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:39.219 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:39.219 [199/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:39.219 [200/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:39.476 [201/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:39.476 [202/268] Linking static target drivers/librte_bus_vdev.a 00:01:39.476 [203/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:39.476 [204/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:39.476 [205/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.476 [206/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.476 [207/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.476 [208/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:39.476 [209/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:39.476 [210/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:39.476 [211/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.476 [212/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:39.476 [213/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:39.476 [214/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:39.476 [215/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:39.476 [216/268] Linking static target drivers/librte_mempool_ring.a 00:01:39.476 [217/268] Linking static target drivers/librte_bus_pci.a 00:01:39.733 [218/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.733 [219/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.733 [220/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.733 [221/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:39.733 [222/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:39.733 [223/268] Linking static target lib/librte_cryptodev.a 00:01:39.990 [224/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:39.990 [225/268] Linking static target lib/librte_ethdev.a 00:01:39.990 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.920 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.851 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:44.372 [229/268] Generating lib/eal.sym_chk with a 
custom command (wrapped by meson to capture output) 00:01:44.372 [230/268] Linking target lib/librte_eal.so.24.1 00:01:44.372 [231/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.372 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:44.372 [233/268] Linking target lib/librte_ring.so.24.1 00:01:44.372 [234/268] Linking target lib/librte_timer.so.24.1 00:01:44.372 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:44.372 [236/268] Linking target lib/librte_meter.so.24.1 00:01:44.372 [237/268] Linking target lib/librte_pci.so.24.1 00:01:44.372 [238/268] Linking target lib/librte_dmadev.so.24.1 00:01:44.372 [239/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:44.372 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:44.372 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:44.372 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:44.372 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:44.372 [244/268] Linking target lib/librte_rcu.so.24.1 00:01:44.372 [245/268] Linking target lib/librte_mempool.so.24.1 00:01:44.372 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:44.372 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:44.372 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:44.629 [249/268] Linking target lib/librte_mbuf.so.24.1 00:01:44.629 [250/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:44.629 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:44.629 [252/268] Linking target lib/librte_reorder.so.24.1 00:01:44.629 [253/268] Linking target lib/librte_compressdev.so.24.1 00:01:44.629 [254/268] Linking target lib/librte_net.so.24.1 00:01:44.629 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:01:44.887 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:44.887 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:44.887 [258/268] Linking target lib/librte_hash.so.24.1 00:01:44.887 [259/268] Linking target lib/librte_security.so.24.1 00:01:44.887 [260/268] Linking target lib/librte_cmdline.so.24.1 00:01:44.887 [261/268] Linking target lib/librte_ethdev.so.24.1 00:01:44.887 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:44.887 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:45.146 [264/268] Linking target lib/librte_power.so.24.1 00:01:47.681 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:47.681 [266/268] Linking static target lib/librte_vhost.a 00:01:48.248 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.506 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:48.506 INFO: autodetecting backend as ninja 00:01:48.506 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:01:49.442 CC lib/ut_mock/mock.o 00:01:49.442 CC lib/ut/ut.o 00:01:49.442 CC lib/log/log.o 00:01:49.442 CC lib/log/log_flags.o 00:01:49.442 CC lib/log/log_deprecated.o 00:01:49.442 LIB 
libspdk_ut.a 00:01:49.443 LIB libspdk_ut_mock.a 00:01:49.443 LIB libspdk_log.a 00:01:49.701 SO libspdk_ut_mock.so.6.0 00:01:49.701 SO libspdk_ut.so.2.0 00:01:49.701 SO libspdk_log.so.7.0 00:01:49.701 SYMLINK libspdk_ut_mock.so 00:01:49.701 SYMLINK libspdk_ut.so 00:01:49.701 SYMLINK libspdk_log.so 00:01:49.701 CXX lib/trace_parser/trace.o 00:01:49.701 CC lib/dma/dma.o 00:01:49.701 CC lib/ioat/ioat.o 00:01:49.701 CC lib/util/base64.o 00:01:49.701 CC lib/util/bit_array.o 00:01:49.701 CC lib/util/cpuset.o 00:01:49.701 CC lib/util/crc16.o 00:01:49.701 CC lib/util/crc32.o 00:01:49.701 CC lib/util/crc32c.o 00:01:49.701 CC lib/util/crc32_ieee.o 00:01:49.701 CC lib/util/crc64.o 00:01:49.701 CC lib/util/dif.o 00:01:49.701 CC lib/util/fd.o 00:01:49.701 CC lib/util/file.o 00:01:49.701 CC lib/util/hexlify.o 00:01:49.701 CC lib/util/iov.o 00:01:49.701 CC lib/util/math.o 00:01:49.701 CC lib/util/pipe.o 00:01:49.701 CC lib/util/strerror_tls.o 00:01:49.701 CC lib/util/string.o 00:01:49.701 CC lib/util/uuid.o 00:01:49.701 CC lib/util/fd_group.o 00:01:49.701 CC lib/util/xor.o 00:01:49.701 CC lib/util/zipf.o 00:01:49.959 CC lib/vfio_user/host/vfio_user_pci.o 00:01:49.959 CC lib/vfio_user/host/vfio_user.o 00:01:49.959 LIB libspdk_dma.a 00:01:49.959 SO libspdk_dma.so.4.0 00:01:50.217 SYMLINK libspdk_dma.so 00:01:50.217 LIB libspdk_ioat.a 00:01:50.217 SO libspdk_ioat.so.7.0 00:01:50.217 SYMLINK libspdk_ioat.so 00:01:50.217 LIB libspdk_vfio_user.a 00:01:50.217 SO libspdk_vfio_user.so.5.0 00:01:50.217 SYMLINK libspdk_vfio_user.so 00:01:50.475 LIB libspdk_util.a 00:01:50.475 SO libspdk_util.so.9.1 00:01:50.475 SYMLINK libspdk_util.so 00:01:50.733 CC lib/idxd/idxd.o 00:01:50.733 CC lib/rdma_utils/rdma_utils.o 00:01:50.733 CC lib/vmd/vmd.o 00:01:50.733 CC lib/conf/conf.o 00:01:50.733 CC lib/json/json_parse.o 00:01:50.733 CC lib/env_dpdk/env.o 00:01:50.733 CC lib/rdma_provider/common.o 00:01:50.733 CC lib/idxd/idxd_user.o 00:01:50.733 CC lib/json/json_util.o 00:01:50.733 CC lib/env_dpdk/memory.o 00:01:50.733 CC lib/vmd/led.o 00:01:50.733 CC lib/idxd/idxd_kernel.o 00:01:50.733 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:50.733 CC lib/json/json_write.o 00:01:50.733 CC lib/env_dpdk/pci.o 00:01:50.733 CC lib/env_dpdk/init.o 00:01:50.733 CC lib/env_dpdk/threads.o 00:01:50.733 CC lib/env_dpdk/pci_ioat.o 00:01:50.733 CC lib/env_dpdk/pci_virtio.o 00:01:50.733 CC lib/env_dpdk/pci_vmd.o 00:01:50.733 CC lib/env_dpdk/pci_idxd.o 00:01:50.733 CC lib/env_dpdk/pci_event.o 00:01:50.733 CC lib/env_dpdk/pci_dpdk.o 00:01:50.733 CC lib/env_dpdk/sigbus_handler.o 00:01:50.733 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:50.733 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:50.733 LIB libspdk_trace_parser.a 00:01:50.733 SO libspdk_trace_parser.so.5.0 00:01:50.991 SYMLINK libspdk_trace_parser.so 00:01:50.991 LIB libspdk_conf.a 00:01:50.991 SO libspdk_conf.so.6.0 00:01:50.991 LIB libspdk_rdma_provider.a 00:01:50.991 LIB libspdk_rdma_utils.a 00:01:50.991 SO libspdk_rdma_provider.so.6.0 00:01:50.991 SO libspdk_rdma_utils.so.1.0 00:01:50.991 SYMLINK libspdk_conf.so 00:01:51.249 SYMLINK libspdk_rdma_provider.so 00:01:51.249 SYMLINK libspdk_rdma_utils.so 00:01:51.249 LIB libspdk_json.a 00:01:51.249 SO libspdk_json.so.6.0 00:01:51.249 SYMLINK libspdk_json.so 00:01:51.249 LIB libspdk_idxd.a 00:01:51.507 SO libspdk_idxd.so.12.0 00:01:51.507 CC lib/jsonrpc/jsonrpc_server.o 00:01:51.507 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:51.507 CC lib/jsonrpc/jsonrpc_client.o 00:01:51.507 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:51.507 LIB libspdk_vmd.a 00:01:51.507 
SYMLINK libspdk_idxd.so 00:01:51.507 SO libspdk_vmd.so.6.0 00:01:51.507 SYMLINK libspdk_vmd.so 00:01:51.765 LIB libspdk_jsonrpc.a 00:01:51.765 SO libspdk_jsonrpc.so.6.0 00:01:51.765 SYMLINK libspdk_jsonrpc.so 00:01:52.022 CC lib/rpc/rpc.o 00:01:52.023 LIB libspdk_rpc.a 00:01:52.281 SO libspdk_rpc.so.6.0 00:01:52.281 SYMLINK libspdk_rpc.so 00:01:52.281 CC lib/trace/trace.o 00:01:52.281 CC lib/trace/trace_flags.o 00:01:52.281 CC lib/trace/trace_rpc.o 00:01:52.281 CC lib/keyring/keyring.o 00:01:52.281 CC lib/keyring/keyring_rpc.o 00:01:52.281 CC lib/notify/notify.o 00:01:52.281 CC lib/notify/notify_rpc.o 00:01:52.539 LIB libspdk_notify.a 00:01:52.539 SO libspdk_notify.so.6.0 00:01:52.539 LIB libspdk_keyring.a 00:01:52.539 SYMLINK libspdk_notify.so 00:01:52.539 LIB libspdk_trace.a 00:01:52.539 SO libspdk_keyring.so.1.0 00:01:52.797 SO libspdk_trace.so.10.0 00:01:52.797 SYMLINK libspdk_keyring.so 00:01:52.797 SYMLINK libspdk_trace.so 00:01:52.797 LIB libspdk_env_dpdk.a 00:01:52.797 SO libspdk_env_dpdk.so.14.1 00:01:52.797 CC lib/thread/thread.o 00:01:52.797 CC lib/sock/sock.o 00:01:52.797 CC lib/sock/sock_rpc.o 00:01:52.797 CC lib/thread/iobuf.o 00:01:53.055 SYMLINK libspdk_env_dpdk.so 00:01:53.312 LIB libspdk_sock.a 00:01:53.312 SO libspdk_sock.so.10.0 00:01:53.312 SYMLINK libspdk_sock.so 00:01:53.569 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:53.569 CC lib/nvme/nvme_ctrlr.o 00:01:53.569 CC lib/nvme/nvme_fabric.o 00:01:53.569 CC lib/nvme/nvme_ns_cmd.o 00:01:53.569 CC lib/nvme/nvme_ns.o 00:01:53.569 CC lib/nvme/nvme_pcie_common.o 00:01:53.569 CC lib/nvme/nvme_pcie.o 00:01:53.569 CC lib/nvme/nvme_qpair.o 00:01:53.569 CC lib/nvme/nvme.o 00:01:53.569 CC lib/nvme/nvme_quirks.o 00:01:53.569 CC lib/nvme/nvme_transport.o 00:01:53.569 CC lib/nvme/nvme_discovery.o 00:01:53.569 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:53.569 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:53.569 CC lib/nvme/nvme_tcp.o 00:01:53.569 CC lib/nvme/nvme_opal.o 00:01:53.569 CC lib/nvme/nvme_io_msg.o 00:01:53.569 CC lib/nvme/nvme_poll_group.o 00:01:53.569 CC lib/nvme/nvme_zns.o 00:01:53.569 CC lib/nvme/nvme_stubs.o 00:01:53.569 CC lib/nvme/nvme_auth.o 00:01:53.569 CC lib/nvme/nvme_cuse.o 00:01:53.569 CC lib/nvme/nvme_vfio_user.o 00:01:53.569 CC lib/nvme/nvme_rdma.o 00:01:54.503 LIB libspdk_thread.a 00:01:54.503 SO libspdk_thread.so.10.1 00:01:54.503 SYMLINK libspdk_thread.so 00:01:54.761 CC lib/blob/blobstore.o 00:01:54.761 CC lib/accel/accel.o 00:01:54.761 CC lib/init/json_config.o 00:01:54.761 CC lib/virtio/virtio.o 00:01:54.761 CC lib/vfu_tgt/tgt_endpoint.o 00:01:54.761 CC lib/accel/accel_rpc.o 00:01:54.761 CC lib/init/subsystem.o 00:01:54.761 CC lib/blob/request.o 00:01:54.761 CC lib/virtio/virtio_vhost_user.o 00:01:54.761 CC lib/vfu_tgt/tgt_rpc.o 00:01:54.761 CC lib/accel/accel_sw.o 00:01:54.761 CC lib/init/subsystem_rpc.o 00:01:54.761 CC lib/virtio/virtio_vfio_user.o 00:01:54.761 CC lib/blob/zeroes.o 00:01:54.761 CC lib/init/rpc.o 00:01:54.761 CC lib/virtio/virtio_pci.o 00:01:54.761 CC lib/blob/blob_bs_dev.o 00:01:55.019 LIB libspdk_init.a 00:01:55.019 SO libspdk_init.so.5.0 00:01:55.019 LIB libspdk_virtio.a 00:01:55.019 LIB libspdk_vfu_tgt.a 00:01:55.019 SYMLINK libspdk_init.so 00:01:55.019 SO libspdk_vfu_tgt.so.3.0 00:01:55.019 SO libspdk_virtio.so.7.0 00:01:55.277 SYMLINK libspdk_vfu_tgt.so 00:01:55.277 SYMLINK libspdk_virtio.so 00:01:55.277 CC lib/event/app.o 00:01:55.277 CC lib/event/reactor.o 00:01:55.277 CC lib/event/log_rpc.o 00:01:55.277 CC lib/event/app_rpc.o 00:01:55.277 CC lib/event/scheduler_static.o 00:01:55.535 LIB 
libspdk_event.a 00:01:55.793 SO libspdk_event.so.14.0 00:01:55.793 LIB libspdk_accel.a 00:01:55.793 SYMLINK libspdk_event.so 00:01:55.793 SO libspdk_accel.so.15.1 00:01:55.793 SYMLINK libspdk_accel.so 00:01:55.793 LIB libspdk_nvme.a 00:01:56.051 SO libspdk_nvme.so.13.1 00:01:56.051 CC lib/bdev/bdev.o 00:01:56.051 CC lib/bdev/bdev_rpc.o 00:01:56.051 CC lib/bdev/bdev_zone.o 00:01:56.051 CC lib/bdev/part.o 00:01:56.051 CC lib/bdev/scsi_nvme.o 00:01:56.310 SYMLINK libspdk_nvme.so 00:01:57.684 LIB libspdk_blob.a 00:01:57.684 SO libspdk_blob.so.11.0 00:01:57.684 SYMLINK libspdk_blob.so 00:01:57.942 CC lib/lvol/lvol.o 00:01:57.942 CC lib/blobfs/blobfs.o 00:01:57.942 CC lib/blobfs/tree.o 00:01:58.508 LIB libspdk_bdev.a 00:01:58.508 SO libspdk_bdev.so.15.1 00:01:58.776 SYMLINK libspdk_bdev.so 00:01:58.776 LIB libspdk_blobfs.a 00:01:58.776 CC lib/nvmf/ctrlr.o 00:01:58.776 CC lib/scsi/dev.o 00:01:58.776 CC lib/ublk/ublk.o 00:01:58.776 CC lib/ftl/ftl_core.o 00:01:58.776 CC lib/nbd/nbd.o 00:01:58.776 CC lib/nvmf/ctrlr_discovery.o 00:01:58.776 CC lib/scsi/lun.o 00:01:58.776 CC lib/ublk/ublk_rpc.o 00:01:58.776 CC lib/ftl/ftl_init.o 00:01:58.776 CC lib/nvmf/ctrlr_bdev.o 00:01:58.776 CC lib/nbd/nbd_rpc.o 00:01:58.776 CC lib/scsi/port.o 00:01:58.776 CC lib/ftl/ftl_layout.o 00:01:58.776 CC lib/nvmf/subsystem.o 00:01:58.776 CC lib/scsi/scsi.o 00:01:58.776 CC lib/ftl/ftl_debug.o 00:01:58.776 CC lib/nvmf/nvmf.o 00:01:58.776 CC lib/ftl/ftl_io.o 00:01:58.776 CC lib/ftl/ftl_sb.o 00:01:58.776 CC lib/nvmf/nvmf_rpc.o 00:01:58.776 CC lib/scsi/scsi_bdev.o 00:01:58.776 CC lib/ftl/ftl_l2p.o 00:01:58.776 CC lib/ftl/ftl_l2p_flat.o 00:01:58.776 CC lib/scsi/scsi_pr.o 00:01:58.776 CC lib/nvmf/transport.o 00:01:58.776 CC lib/nvmf/tcp.o 00:01:58.776 CC lib/scsi/scsi_rpc.o 00:01:58.776 CC lib/ftl/ftl_nv_cache.o 00:01:58.776 CC lib/nvmf/stubs.o 00:01:58.776 CC lib/ftl/ftl_band.o 00:01:58.776 CC lib/scsi/task.o 00:01:58.776 CC lib/nvmf/mdns_server.o 00:01:58.776 CC lib/nvmf/vfio_user.o 00:01:58.776 CC lib/ftl/ftl_band_ops.o 00:01:58.776 CC lib/ftl/ftl_writer.o 00:01:58.776 CC lib/nvmf/rdma.o 00:01:58.776 CC lib/nvmf/auth.o 00:01:58.776 CC lib/ftl/ftl_rq.o 00:01:58.776 CC lib/ftl/ftl_reloc.o 00:01:58.776 CC lib/ftl/ftl_l2p_cache.o 00:01:58.776 CC lib/ftl/ftl_p2l.o 00:01:58.776 CC lib/ftl/mngt/ftl_mngt.o 00:01:58.776 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:58.776 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:58.776 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:58.776 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:58.776 SO libspdk_blobfs.so.10.0 00:01:58.776 LIB libspdk_lvol.a 00:01:59.035 SO libspdk_lvol.so.10.0 00:01:59.035 SYMLINK libspdk_blobfs.so 00:01:59.035 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:59.035 SYMLINK libspdk_lvol.so 00:01:59.035 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:59.295 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:59.295 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:59.295 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:59.295 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:59.295 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:59.296 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:59.296 CC lib/ftl/utils/ftl_conf.o 00:01:59.296 CC lib/ftl/utils/ftl_md.o 00:01:59.296 CC lib/ftl/utils/ftl_mempool.o 00:01:59.296 CC lib/ftl/utils/ftl_bitmap.o 00:01:59.296 CC lib/ftl/utils/ftl_property.o 00:01:59.296 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:59.296 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:59.296 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:59.296 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:59.296 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:59.296 CC 
lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:59.296 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:01:59.555 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:59.555 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:59.555 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:59.555 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:59.555 CC lib/ftl/base/ftl_base_dev.o 00:01:59.555 CC lib/ftl/base/ftl_base_bdev.o 00:01:59.555 CC lib/ftl/ftl_trace.o 00:01:59.555 LIB libspdk_nbd.a 00:01:59.555 SO libspdk_nbd.so.7.0 00:01:59.813 SYMLINK libspdk_nbd.so 00:01:59.813 LIB libspdk_scsi.a 00:01:59.813 SO libspdk_scsi.so.9.0 00:01:59.813 SYMLINK libspdk_scsi.so 00:01:59.813 LIB libspdk_ublk.a 00:01:59.813 SO libspdk_ublk.so.3.0 00:02:00.071 SYMLINK libspdk_ublk.so 00:02:00.071 CC lib/iscsi/conn.o 00:02:00.071 CC lib/vhost/vhost.o 00:02:00.071 CC lib/vhost/vhost_rpc.o 00:02:00.071 CC lib/iscsi/init_grp.o 00:02:00.071 CC lib/vhost/vhost_scsi.o 00:02:00.071 CC lib/iscsi/iscsi.o 00:02:00.071 CC lib/vhost/vhost_blk.o 00:02:00.071 CC lib/iscsi/md5.o 00:02:00.071 CC lib/vhost/rte_vhost_user.o 00:02:00.071 CC lib/iscsi/param.o 00:02:00.071 CC lib/iscsi/portal_grp.o 00:02:00.071 CC lib/iscsi/tgt_node.o 00:02:00.071 CC lib/iscsi/iscsi_subsystem.o 00:02:00.071 CC lib/iscsi/iscsi_rpc.o 00:02:00.071 CC lib/iscsi/task.o 00:02:00.330 LIB libspdk_ftl.a 00:02:00.330 SO libspdk_ftl.so.9.0 00:02:00.895 SYMLINK libspdk_ftl.so 00:02:01.154 LIB libspdk_vhost.a 00:02:01.413 SO libspdk_vhost.so.8.0 00:02:01.413 LIB libspdk_nvmf.a 00:02:01.413 SO libspdk_nvmf.so.18.1 00:02:01.413 SYMLINK libspdk_vhost.so 00:02:01.413 LIB libspdk_iscsi.a 00:02:01.671 SO libspdk_iscsi.so.8.0 00:02:01.671 SYMLINK libspdk_nvmf.so 00:02:01.671 SYMLINK libspdk_iscsi.so 00:02:01.929 CC module/vfu_device/vfu_virtio.o 00:02:01.929 CC module/env_dpdk/env_dpdk_rpc.o 00:02:01.929 CC module/vfu_device/vfu_virtio_blk.o 00:02:01.929 CC module/vfu_device/vfu_virtio_scsi.o 00:02:01.929 CC module/vfu_device/vfu_virtio_rpc.o 00:02:01.929 CC module/keyring/file/keyring.o 00:02:01.929 CC module/keyring/file/keyring_rpc.o 00:02:01.929 CC module/accel/iaa/accel_iaa.o 00:02:01.929 CC module/accel/iaa/accel_iaa_rpc.o 00:02:01.929 CC module/scheduler/gscheduler/gscheduler.o 00:02:01.929 CC module/accel/dsa/accel_dsa.o 00:02:01.929 CC module/keyring/linux/keyring.o 00:02:01.929 CC module/sock/posix/posix.o 00:02:01.929 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:01.929 CC module/accel/ioat/accel_ioat.o 00:02:01.929 CC module/accel/error/accel_error.o 00:02:01.929 CC module/blob/bdev/blob_bdev.o 00:02:01.929 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:01.929 CC module/accel/error/accel_error_rpc.o 00:02:01.929 CC module/keyring/linux/keyring_rpc.o 00:02:01.929 CC module/accel/ioat/accel_ioat_rpc.o 00:02:01.929 CC module/accel/dsa/accel_dsa_rpc.o 00:02:02.187 LIB libspdk_env_dpdk_rpc.a 00:02:02.187 SO libspdk_env_dpdk_rpc.so.6.0 00:02:02.187 SYMLINK libspdk_env_dpdk_rpc.so 00:02:02.187 LIB libspdk_keyring_linux.a 00:02:02.187 LIB libspdk_keyring_file.a 00:02:02.187 LIB libspdk_scheduler_dpdk_governor.a 00:02:02.187 LIB libspdk_scheduler_gscheduler.a 00:02:02.187 SO libspdk_keyring_linux.so.1.0 00:02:02.187 SO libspdk_keyring_file.so.1.0 00:02:02.187 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:02.187 SO libspdk_scheduler_gscheduler.so.4.0 00:02:02.187 LIB libspdk_accel_error.a 00:02:02.187 LIB libspdk_accel_ioat.a 00:02:02.187 LIB libspdk_scheduler_dynamic.a 00:02:02.187 LIB libspdk_accel_iaa.a 00:02:02.187 SO libspdk_accel_error.so.2.0 00:02:02.187 SO libspdk_accel_ioat.so.6.0 00:02:02.187 SYMLINK 
libspdk_keyring_linux.so 00:02:02.187 SYMLINK libspdk_keyring_file.so 00:02:02.187 SO libspdk_scheduler_dynamic.so.4.0 00:02:02.187 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:02.446 SO libspdk_accel_iaa.so.3.0 00:02:02.446 SYMLINK libspdk_scheduler_gscheduler.so 00:02:02.446 LIB libspdk_accel_dsa.a 00:02:02.446 SYMLINK libspdk_accel_error.so 00:02:02.446 LIB libspdk_blob_bdev.a 00:02:02.446 SYMLINK libspdk_scheduler_dynamic.so 00:02:02.446 SYMLINK libspdk_accel_ioat.so 00:02:02.446 SYMLINK libspdk_accel_iaa.so 00:02:02.446 SO libspdk_accel_dsa.so.5.0 00:02:02.446 SO libspdk_blob_bdev.so.11.0 00:02:02.446 SYMLINK libspdk_accel_dsa.so 00:02:02.446 SYMLINK libspdk_blob_bdev.so 00:02:02.705 LIB libspdk_vfu_device.a 00:02:02.705 SO libspdk_vfu_device.so.3.0 00:02:02.705 CC module/bdev/gpt/gpt.o 00:02:02.705 CC module/bdev/error/vbdev_error.o 00:02:02.705 CC module/bdev/lvol/vbdev_lvol.o 00:02:02.705 CC module/bdev/gpt/vbdev_gpt.o 00:02:02.705 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:02.705 CC module/bdev/error/vbdev_error_rpc.o 00:02:02.705 CC module/bdev/delay/vbdev_delay.o 00:02:02.705 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:02.705 CC module/bdev/null/bdev_null.o 00:02:02.705 CC module/blobfs/bdev/blobfs_bdev.o 00:02:02.705 CC module/bdev/null/bdev_null_rpc.o 00:02:02.705 CC module/bdev/malloc/bdev_malloc.o 00:02:02.705 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:02.705 CC module/bdev/split/vbdev_split.o 00:02:02.705 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:02.705 CC module/bdev/split/vbdev_split_rpc.o 00:02:02.705 CC module/bdev/nvme/bdev_nvme.o 00:02:02.705 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:02.705 CC module/bdev/nvme/nvme_rpc.o 00:02:02.705 CC module/bdev/ftl/bdev_ftl.o 00:02:02.705 CC module/bdev/passthru/vbdev_passthru.o 00:02:02.705 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:02.705 CC module/bdev/raid/bdev_raid.o 00:02:02.705 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:02.705 CC module/bdev/raid/bdev_raid_rpc.o 00:02:02.705 CC module/bdev/nvme/bdev_mdns_client.o 00:02:02.705 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:02.705 CC module/bdev/raid/bdev_raid_sb.o 00:02:02.705 CC module/bdev/nvme/vbdev_opal.o 00:02:02.705 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:02.705 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:02.705 CC module/bdev/raid/raid0.o 00:02:02.705 CC module/bdev/aio/bdev_aio.o 00:02:02.705 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:02.705 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:02.705 CC module/bdev/raid/raid1.o 00:02:02.705 CC module/bdev/raid/concat.o 00:02:02.705 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:02.705 CC module/bdev/aio/bdev_aio_rpc.o 00:02:02.705 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:02.705 CC module/bdev/iscsi/bdev_iscsi.o 00:02:02.705 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:02.705 SYMLINK libspdk_vfu_device.so 00:02:02.964 LIB libspdk_sock_posix.a 00:02:02.964 SO libspdk_sock_posix.so.6.0 00:02:02.964 LIB libspdk_blobfs_bdev.a 00:02:03.222 SO libspdk_blobfs_bdev.so.6.0 00:02:03.222 LIB libspdk_bdev_split.a 00:02:03.222 LIB libspdk_bdev_ftl.a 00:02:03.222 SYMLINK libspdk_sock_posix.so 00:02:03.222 LIB libspdk_bdev_null.a 00:02:03.222 SO libspdk_bdev_split.so.6.0 00:02:03.222 SO libspdk_bdev_ftl.so.6.0 00:02:03.222 SYMLINK libspdk_blobfs_bdev.so 00:02:03.222 LIB libspdk_bdev_error.a 00:02:03.222 SO libspdk_bdev_null.so.6.0 00:02:03.222 SO libspdk_bdev_error.so.6.0 00:02:03.222 LIB libspdk_bdev_gpt.a 00:02:03.222 SYMLINK libspdk_bdev_split.so 00:02:03.222 SYMLINK libspdk_bdev_ftl.so 
00:02:03.222 SO libspdk_bdev_gpt.so.6.0 00:02:03.222 LIB libspdk_bdev_passthru.a 00:02:03.222 LIB libspdk_bdev_zone_block.a 00:02:03.222 SYMLINK libspdk_bdev_null.so 00:02:03.222 SYMLINK libspdk_bdev_error.so 00:02:03.222 LIB libspdk_bdev_aio.a 00:02:03.222 SO libspdk_bdev_passthru.so.6.0 00:02:03.222 SO libspdk_bdev_zone_block.so.6.0 00:02:03.222 SYMLINK libspdk_bdev_gpt.so 00:02:03.222 LIB libspdk_bdev_iscsi.a 00:02:03.222 SO libspdk_bdev_aio.so.6.0 00:02:03.222 LIB libspdk_bdev_delay.a 00:02:03.222 SO libspdk_bdev_iscsi.so.6.0 00:02:03.222 LIB libspdk_bdev_lvol.a 00:02:03.222 SYMLINK libspdk_bdev_passthru.so 00:02:03.222 SYMLINK libspdk_bdev_zone_block.so 00:02:03.222 LIB libspdk_bdev_malloc.a 00:02:03.222 SO libspdk_bdev_delay.so.6.0 00:02:03.222 SO libspdk_bdev_lvol.so.6.0 00:02:03.222 SYMLINK libspdk_bdev_aio.so 00:02:03.480 SO libspdk_bdev_malloc.so.6.0 00:02:03.480 SYMLINK libspdk_bdev_iscsi.so 00:02:03.480 SYMLINK libspdk_bdev_delay.so 00:02:03.480 SYMLINK libspdk_bdev_lvol.so 00:02:03.480 SYMLINK libspdk_bdev_malloc.so 00:02:03.480 LIB libspdk_bdev_virtio.a 00:02:03.480 SO libspdk_bdev_virtio.so.6.0 00:02:03.480 SYMLINK libspdk_bdev_virtio.so 00:02:03.739 LIB libspdk_bdev_raid.a 00:02:03.998 SO libspdk_bdev_raid.so.6.0 00:02:03.998 SYMLINK libspdk_bdev_raid.so 00:02:04.932 LIB libspdk_bdev_nvme.a 00:02:04.932 SO libspdk_bdev_nvme.so.7.0 00:02:05.189 SYMLINK libspdk_bdev_nvme.so 00:02:05.447 CC module/event/subsystems/iobuf/iobuf.o 00:02:05.447 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:05.447 CC module/event/subsystems/keyring/keyring.o 00:02:05.447 CC module/event/subsystems/vmd/vmd.o 00:02:05.447 CC module/event/subsystems/scheduler/scheduler.o 00:02:05.447 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:05.447 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:05.447 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:05.447 CC module/event/subsystems/sock/sock.o 00:02:05.706 LIB libspdk_event_keyring.a 00:02:05.706 LIB libspdk_event_vhost_blk.a 00:02:05.706 LIB libspdk_event_scheduler.a 00:02:05.706 LIB libspdk_event_vfu_tgt.a 00:02:05.706 LIB libspdk_event_vmd.a 00:02:05.706 LIB libspdk_event_sock.a 00:02:05.706 LIB libspdk_event_iobuf.a 00:02:05.706 SO libspdk_event_keyring.so.1.0 00:02:05.706 SO libspdk_event_vhost_blk.so.3.0 00:02:05.706 SO libspdk_event_scheduler.so.4.0 00:02:05.706 SO libspdk_event_vfu_tgt.so.3.0 00:02:05.706 SO libspdk_event_sock.so.5.0 00:02:05.706 SO libspdk_event_vmd.so.6.0 00:02:05.706 SO libspdk_event_iobuf.so.3.0 00:02:05.706 SYMLINK libspdk_event_keyring.so 00:02:05.706 SYMLINK libspdk_event_vhost_blk.so 00:02:05.706 SYMLINK libspdk_event_scheduler.so 00:02:05.706 SYMLINK libspdk_event_vfu_tgt.so 00:02:05.706 SYMLINK libspdk_event_sock.so 00:02:05.706 SYMLINK libspdk_event_vmd.so 00:02:05.706 SYMLINK libspdk_event_iobuf.so 00:02:05.964 CC module/event/subsystems/accel/accel.o 00:02:05.964 LIB libspdk_event_accel.a 00:02:06.222 SO libspdk_event_accel.so.6.0 00:02:06.222 SYMLINK libspdk_event_accel.so 00:02:06.510 CC module/event/subsystems/bdev/bdev.o 00:02:06.510 LIB libspdk_event_bdev.a 00:02:06.510 SO libspdk_event_bdev.so.6.0 00:02:06.510 SYMLINK libspdk_event_bdev.so 00:02:06.794 CC module/event/subsystems/nbd/nbd.o 00:02:06.794 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:06.794 CC module/event/subsystems/ublk/ublk.o 00:02:06.794 CC module/event/subsystems/scsi/scsi.o 00:02:06.794 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:07.052 LIB libspdk_event_ublk.a 00:02:07.052 LIB libspdk_event_nbd.a 00:02:07.052 LIB 
libspdk_event_scsi.a 00:02:07.052 SO libspdk_event_ublk.so.3.0 00:02:07.052 SO libspdk_event_nbd.so.6.0 00:02:07.052 SO libspdk_event_scsi.so.6.0 00:02:07.052 SYMLINK libspdk_event_ublk.so 00:02:07.052 SYMLINK libspdk_event_nbd.so 00:02:07.052 SYMLINK libspdk_event_scsi.so 00:02:07.052 LIB libspdk_event_nvmf.a 00:02:07.052 SO libspdk_event_nvmf.so.6.0 00:02:07.052 SYMLINK libspdk_event_nvmf.so 00:02:07.052 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:07.052 CC module/event/subsystems/iscsi/iscsi.o 00:02:07.310 LIB libspdk_event_vhost_scsi.a 00:02:07.310 SO libspdk_event_vhost_scsi.so.3.0 00:02:07.310 LIB libspdk_event_iscsi.a 00:02:07.310 SO libspdk_event_iscsi.so.6.0 00:02:07.310 SYMLINK libspdk_event_vhost_scsi.so 00:02:07.310 SYMLINK libspdk_event_iscsi.so 00:02:07.568 SO libspdk.so.6.0 00:02:07.568 SYMLINK libspdk.so 00:02:07.831 CXX app/trace/trace.o 00:02:07.831 TEST_HEADER include/spdk/accel.h 00:02:07.831 TEST_HEADER include/spdk/accel_module.h 00:02:07.831 TEST_HEADER include/spdk/assert.h 00:02:07.831 TEST_HEADER include/spdk/base64.h 00:02:07.831 TEST_HEADER include/spdk/barrier.h 00:02:07.831 CC app/spdk_lspci/spdk_lspci.o 00:02:07.831 TEST_HEADER include/spdk/bdev_module.h 00:02:07.831 TEST_HEADER include/spdk/bdev.h 00:02:07.831 CC test/rpc_client/rpc_client_test.o 00:02:07.831 TEST_HEADER include/spdk/bdev_zone.h 00:02:07.831 CC app/spdk_nvme_identify/identify.o 00:02:07.831 TEST_HEADER include/spdk/bit_array.h 00:02:07.831 CC app/spdk_top/spdk_top.o 00:02:07.831 TEST_HEADER include/spdk/bit_pool.h 00:02:07.831 TEST_HEADER include/spdk/blob_bdev.h 00:02:07.831 CC app/trace_record/trace_record.o 00:02:07.831 CC app/spdk_nvme_perf/perf.o 00:02:07.831 CC app/spdk_nvme_discover/discovery_aer.o 00:02:07.831 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:07.831 TEST_HEADER include/spdk/blobfs.h 00:02:07.831 TEST_HEADER include/spdk/blob.h 00:02:07.831 TEST_HEADER include/spdk/conf.h 00:02:07.831 TEST_HEADER include/spdk/config.h 00:02:07.831 TEST_HEADER include/spdk/cpuset.h 00:02:07.831 TEST_HEADER include/spdk/crc16.h 00:02:07.831 TEST_HEADER include/spdk/crc32.h 00:02:07.831 TEST_HEADER include/spdk/crc64.h 00:02:07.831 TEST_HEADER include/spdk/dif.h 00:02:07.831 TEST_HEADER include/spdk/dma.h 00:02:07.831 TEST_HEADER include/spdk/env_dpdk.h 00:02:07.831 TEST_HEADER include/spdk/endian.h 00:02:07.831 TEST_HEADER include/spdk/env.h 00:02:07.831 TEST_HEADER include/spdk/event.h 00:02:07.831 TEST_HEADER include/spdk/fd_group.h 00:02:07.831 TEST_HEADER include/spdk/fd.h 00:02:07.831 TEST_HEADER include/spdk/file.h 00:02:07.831 TEST_HEADER include/spdk/ftl.h 00:02:07.831 TEST_HEADER include/spdk/gpt_spec.h 00:02:07.831 TEST_HEADER include/spdk/hexlify.h 00:02:07.831 TEST_HEADER include/spdk/histogram_data.h 00:02:07.831 TEST_HEADER include/spdk/idxd.h 00:02:07.831 TEST_HEADER include/spdk/idxd_spec.h 00:02:07.831 TEST_HEADER include/spdk/init.h 00:02:07.831 TEST_HEADER include/spdk/ioat.h 00:02:07.831 TEST_HEADER include/spdk/ioat_spec.h 00:02:07.831 TEST_HEADER include/spdk/iscsi_spec.h 00:02:07.831 TEST_HEADER include/spdk/json.h 00:02:07.831 TEST_HEADER include/spdk/jsonrpc.h 00:02:07.831 TEST_HEADER include/spdk/keyring.h 00:02:07.831 TEST_HEADER include/spdk/keyring_module.h 00:02:07.831 TEST_HEADER include/spdk/likely.h 00:02:07.831 TEST_HEADER include/spdk/log.h 00:02:07.831 TEST_HEADER include/spdk/lvol.h 00:02:07.831 TEST_HEADER include/spdk/memory.h 00:02:07.831 TEST_HEADER include/spdk/mmio.h 00:02:07.831 TEST_HEADER include/spdk/nbd.h 00:02:07.831 TEST_HEADER 
include/spdk/notify.h 00:02:07.831 TEST_HEADER include/spdk/nvme.h 00:02:07.831 TEST_HEADER include/spdk/nvme_intel.h 00:02:07.831 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:07.831 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:07.831 TEST_HEADER include/spdk/nvme_spec.h 00:02:07.831 TEST_HEADER include/spdk/nvme_zns.h 00:02:07.831 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:07.831 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:07.831 TEST_HEADER include/spdk/nvmf.h 00:02:07.831 TEST_HEADER include/spdk/nvmf_spec.h 00:02:07.831 TEST_HEADER include/spdk/opal.h 00:02:07.831 TEST_HEADER include/spdk/nvmf_transport.h 00:02:07.831 TEST_HEADER include/spdk/opal_spec.h 00:02:07.831 TEST_HEADER include/spdk/pci_ids.h 00:02:07.831 TEST_HEADER include/spdk/pipe.h 00:02:07.831 TEST_HEADER include/spdk/queue.h 00:02:07.831 TEST_HEADER include/spdk/reduce.h 00:02:07.831 TEST_HEADER include/spdk/rpc.h 00:02:07.831 TEST_HEADER include/spdk/scheduler.h 00:02:07.831 TEST_HEADER include/spdk/scsi.h 00:02:07.831 TEST_HEADER include/spdk/scsi_spec.h 00:02:07.831 TEST_HEADER include/spdk/stdinc.h 00:02:07.831 TEST_HEADER include/spdk/sock.h 00:02:07.831 TEST_HEADER include/spdk/string.h 00:02:07.831 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:07.831 TEST_HEADER include/spdk/thread.h 00:02:07.831 TEST_HEADER include/spdk/trace_parser.h 00:02:07.831 TEST_HEADER include/spdk/trace.h 00:02:07.831 TEST_HEADER include/spdk/tree.h 00:02:07.831 TEST_HEADER include/spdk/ublk.h 00:02:07.831 TEST_HEADER include/spdk/util.h 00:02:07.831 TEST_HEADER include/spdk/uuid.h 00:02:07.831 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:07.831 TEST_HEADER include/spdk/version.h 00:02:07.831 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:07.831 TEST_HEADER include/spdk/vhost.h 00:02:07.831 TEST_HEADER include/spdk/vmd.h 00:02:07.831 TEST_HEADER include/spdk/xor.h 00:02:07.831 TEST_HEADER include/spdk/zipf.h 00:02:07.831 CXX test/cpp_headers/accel.o 00:02:07.831 CXX test/cpp_headers/accel_module.o 00:02:07.831 CXX test/cpp_headers/assert.o 00:02:07.831 CXX test/cpp_headers/barrier.o 00:02:07.831 CXX test/cpp_headers/base64.o 00:02:07.831 CXX test/cpp_headers/bdev.o 00:02:07.831 CXX test/cpp_headers/bdev_module.o 00:02:07.831 CXX test/cpp_headers/bdev_zone.o 00:02:07.831 CC app/spdk_dd/spdk_dd.o 00:02:07.831 CXX test/cpp_headers/bit_array.o 00:02:07.831 CXX test/cpp_headers/bit_pool.o 00:02:07.831 CXX test/cpp_headers/blob_bdev.o 00:02:07.831 CXX test/cpp_headers/blobfs_bdev.o 00:02:07.831 CXX test/cpp_headers/blobfs.o 00:02:07.831 CXX test/cpp_headers/blob.o 00:02:07.831 CXX test/cpp_headers/conf.o 00:02:07.831 CXX test/cpp_headers/config.o 00:02:07.831 CXX test/cpp_headers/cpuset.o 00:02:07.831 CXX test/cpp_headers/crc16.o 00:02:07.831 CC app/nvmf_tgt/nvmf_main.o 00:02:07.831 CC app/iscsi_tgt/iscsi_tgt.o 00:02:07.831 CXX test/cpp_headers/crc32.o 00:02:07.831 CC app/spdk_tgt/spdk_tgt.o 00:02:07.831 CC examples/util/zipf/zipf.o 00:02:07.831 CC test/env/vtophys/vtophys.o 00:02:07.831 CC test/thread/poller_perf/poller_perf.o 00:02:07.831 CC test/app/stub/stub.o 00:02:07.831 CC test/app/histogram_perf/histogram_perf.o 00:02:07.831 CC test/env/pci/pci_ut.o 00:02:07.831 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:07.831 CC test/env/memory/memory_ut.o 00:02:07.831 CC test/app/jsoncat/jsoncat.o 00:02:07.831 CC examples/ioat/verify/verify.o 00:02:07.831 CC app/fio/nvme/fio_plugin.o 00:02:07.831 CC examples/ioat/perf/perf.o 00:02:07.831 CC test/app/bdev_svc/bdev_svc.o 00:02:07.831 CC test/dma/test_dma/test_dma.o 00:02:07.831 
CC app/fio/bdev/fio_plugin.o 00:02:08.093 CC test/env/mem_callbacks/mem_callbacks.o 00:02:08.093 LINK spdk_lspci 00:02:08.093 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:08.093 LINK rpc_client_test 00:02:08.093 LINK spdk_nvme_discover 00:02:08.093 CXX test/cpp_headers/crc64.o 00:02:08.093 LINK zipf 00:02:08.093 LINK poller_perf 00:02:08.093 LINK jsoncat 00:02:08.093 CXX test/cpp_headers/dif.o 00:02:08.093 LINK interrupt_tgt 00:02:08.093 LINK vtophys 00:02:08.093 LINK env_dpdk_post_init 00:02:08.093 LINK histogram_perf 00:02:08.093 CXX test/cpp_headers/dma.o 00:02:08.093 LINK nvmf_tgt 00:02:08.367 CXX test/cpp_headers/endian.o 00:02:08.367 CXX test/cpp_headers/env_dpdk.o 00:02:08.367 CXX test/cpp_headers/env.o 00:02:08.367 CXX test/cpp_headers/event.o 00:02:08.367 LINK spdk_trace_record 00:02:08.367 CXX test/cpp_headers/fd_group.o 00:02:08.367 CXX test/cpp_headers/fd.o 00:02:08.367 LINK stub 00:02:08.367 LINK iscsi_tgt 00:02:08.367 CXX test/cpp_headers/file.o 00:02:08.367 CXX test/cpp_headers/ftl.o 00:02:08.367 CXX test/cpp_headers/gpt_spec.o 00:02:08.367 CXX test/cpp_headers/hexlify.o 00:02:08.367 LINK spdk_tgt 00:02:08.367 CXX test/cpp_headers/histogram_data.o 00:02:08.367 CXX test/cpp_headers/idxd.o 00:02:08.367 LINK bdev_svc 00:02:08.367 CXX test/cpp_headers/idxd_spec.o 00:02:08.367 LINK verify 00:02:08.367 LINK ioat_perf 00:02:08.367 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:08.367 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:08.367 CXX test/cpp_headers/init.o 00:02:08.367 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:08.626 CXX test/cpp_headers/ioat.o 00:02:08.626 CXX test/cpp_headers/ioat_spec.o 00:02:08.626 LINK spdk_dd 00:02:08.626 LINK spdk_trace 00:02:08.626 CXX test/cpp_headers/iscsi_spec.o 00:02:08.626 CXX test/cpp_headers/json.o 00:02:08.626 CXX test/cpp_headers/jsonrpc.o 00:02:08.626 CXX test/cpp_headers/keyring.o 00:02:08.626 CXX test/cpp_headers/keyring_module.o 00:02:08.626 CXX test/cpp_headers/likely.o 00:02:08.626 CXX test/cpp_headers/log.o 00:02:08.626 CXX test/cpp_headers/lvol.o 00:02:08.626 CXX test/cpp_headers/memory.o 00:02:08.626 LINK pci_ut 00:02:08.626 CXX test/cpp_headers/mmio.o 00:02:08.626 CXX test/cpp_headers/nbd.o 00:02:08.626 CXX test/cpp_headers/notify.o 00:02:08.626 CXX test/cpp_headers/nvme.o 00:02:08.626 CXX test/cpp_headers/nvme_intel.o 00:02:08.626 CXX test/cpp_headers/nvme_ocssd.o 00:02:08.626 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:08.626 CXX test/cpp_headers/nvme_spec.o 00:02:08.626 CXX test/cpp_headers/nvme_zns.o 00:02:08.626 CXX test/cpp_headers/nvmf_cmd.o 00:02:08.626 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:08.626 CXX test/cpp_headers/nvmf.o 00:02:08.626 CXX test/cpp_headers/nvmf_spec.o 00:02:08.626 CXX test/cpp_headers/nvmf_transport.o 00:02:08.626 CXX test/cpp_headers/opal.o 00:02:08.626 CXX test/cpp_headers/opal_spec.o 00:02:08.626 CXX test/cpp_headers/pci_ids.o 00:02:08.886 LINK test_dma 00:02:08.886 CXX test/cpp_headers/pipe.o 00:02:08.886 LINK nvme_fuzz 00:02:08.886 CXX test/cpp_headers/queue.o 00:02:08.886 LINK spdk_nvme 00:02:08.886 CXX test/cpp_headers/reduce.o 00:02:08.886 CC examples/sock/hello_world/hello_sock.o 00:02:08.886 CC examples/vmd/lsvmd/lsvmd.o 00:02:08.886 CXX test/cpp_headers/rpc.o 00:02:08.886 CC examples/vmd/led/led.o 00:02:08.886 CC examples/idxd/perf/perf.o 00:02:08.886 LINK spdk_bdev 00:02:09.144 CC test/event/event_perf/event_perf.o 00:02:09.144 CC examples/thread/thread/thread_ex.o 00:02:09.144 CXX test/cpp_headers/scheduler.o 00:02:09.144 CXX test/cpp_headers/scsi.o 00:02:09.144 CXX 
test/cpp_headers/scsi_spec.o 00:02:09.144 CXX test/cpp_headers/sock.o 00:02:09.144 CXX test/cpp_headers/stdinc.o 00:02:09.144 CXX test/cpp_headers/string.o 00:02:09.144 CXX test/cpp_headers/thread.o 00:02:09.144 CXX test/cpp_headers/trace.o 00:02:09.144 CXX test/cpp_headers/trace_parser.o 00:02:09.144 CC test/event/reactor/reactor.o 00:02:09.144 CC test/event/reactor_perf/reactor_perf.o 00:02:09.144 CXX test/cpp_headers/tree.o 00:02:09.144 CXX test/cpp_headers/ublk.o 00:02:09.144 CXX test/cpp_headers/util.o 00:02:09.144 CXX test/cpp_headers/uuid.o 00:02:09.144 CXX test/cpp_headers/version.o 00:02:09.144 CXX test/cpp_headers/vfio_user_pci.o 00:02:09.144 CXX test/cpp_headers/vfio_user_spec.o 00:02:09.144 CXX test/cpp_headers/vhost.o 00:02:09.144 CC test/event/app_repeat/app_repeat.o 00:02:09.144 CXX test/cpp_headers/vmd.o 00:02:09.144 CXX test/cpp_headers/xor.o 00:02:09.144 CXX test/cpp_headers/zipf.o 00:02:09.144 LINK lsvmd 00:02:09.144 LINK mem_callbacks 00:02:09.144 LINK vhost_fuzz 00:02:09.410 CC app/vhost/vhost.o 00:02:09.410 LINK led 00:02:09.410 LINK spdk_nvme_perf 00:02:09.410 CC test/event/scheduler/scheduler.o 00:02:09.410 LINK event_perf 00:02:09.410 LINK spdk_nvme_identify 00:02:09.410 LINK spdk_top 00:02:09.410 LINK reactor_perf 00:02:09.410 LINK hello_sock 00:02:09.410 LINK reactor 00:02:09.410 CC test/nvme/reset/reset.o 00:02:09.410 CC test/nvme/sgl/sgl.o 00:02:09.410 CC test/nvme/reserve/reserve.o 00:02:09.410 CC test/nvme/e2edp/nvme_dp.o 00:02:09.410 CC test/nvme/simple_copy/simple_copy.o 00:02:09.410 CC test/nvme/aer/aer.o 00:02:09.410 CC test/nvme/startup/startup.o 00:02:09.410 LINK app_repeat 00:02:09.410 CC test/nvme/overhead/overhead.o 00:02:09.410 CC test/nvme/err_injection/err_injection.o 00:02:09.668 LINK thread 00:02:09.668 CC test/nvme/connect_stress/connect_stress.o 00:02:09.668 CC test/accel/dif/dif.o 00:02:09.668 CC test/blobfs/mkfs/mkfs.o 00:02:09.668 CC test/nvme/compliance/nvme_compliance.o 00:02:09.668 CC test/nvme/boot_partition/boot_partition.o 00:02:09.668 CC test/nvme/fused_ordering/fused_ordering.o 00:02:09.668 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:09.668 LINK idxd_perf 00:02:09.668 CC test/lvol/esnap/esnap.o 00:02:09.668 LINK vhost 00:02:09.668 CC test/nvme/cuse/cuse.o 00:02:09.668 CC test/nvme/fdp/fdp.o 00:02:09.668 LINK scheduler 00:02:09.954 LINK err_injection 00:02:09.954 LINK connect_stress 00:02:09.954 LINK boot_partition 00:02:09.954 LINK doorbell_aers 00:02:09.954 LINK startup 00:02:09.954 LINK reserve 00:02:09.954 LINK fused_ordering 00:02:09.954 LINK mkfs 00:02:09.954 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:09.955 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:09.955 CC examples/nvme/abort/abort.o 00:02:09.955 CC examples/nvme/hotplug/hotplug.o 00:02:09.955 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:09.955 CC examples/nvme/arbitration/arbitration.o 00:02:09.955 CC examples/nvme/hello_world/hello_world.o 00:02:09.955 CC examples/nvme/reconnect/reconnect.o 00:02:09.955 LINK memory_ut 00:02:09.955 LINK simple_copy 00:02:09.955 LINK sgl 00:02:09.955 LINK reset 00:02:09.955 LINK nvme_dp 00:02:09.955 LINK nvme_compliance 00:02:09.955 LINK aer 00:02:10.212 LINK overhead 00:02:10.212 LINK fdp 00:02:10.212 LINK cmb_copy 00:02:10.212 CC examples/accel/perf/accel_perf.o 00:02:10.212 LINK dif 00:02:10.212 LINK hotplug 00:02:10.212 CC examples/blob/cli/blobcli.o 00:02:10.212 CC examples/blob/hello_world/hello_blob.o 00:02:10.212 LINK pmr_persistence 00:02:10.212 LINK hello_world 00:02:10.470 LINK reconnect 00:02:10.470 LINK 
abort 00:02:10.470 LINK arbitration 00:02:10.470 LINK hello_blob 00:02:10.470 LINK nvme_manage 00:02:10.727 CC test/bdev/bdevio/bdevio.o 00:02:10.727 LINK accel_perf 00:02:10.727 LINK blobcli 00:02:10.983 LINK iscsi_fuzz 00:02:10.983 CC examples/bdev/hello_world/hello_bdev.o 00:02:10.983 CC examples/bdev/bdevperf/bdevperf.o 00:02:10.983 LINK bdevio 00:02:11.240 LINK cuse 00:02:11.240 LINK hello_bdev 00:02:11.806 LINK bdevperf 00:02:12.064 CC examples/nvmf/nvmf/nvmf.o 00:02:12.631 LINK nvmf 00:02:14.532 LINK esnap 00:02:14.791 00:02:14.791 real 0m48.819s 00:02:14.791 user 10m8.876s 00:02:14.791 sys 2m28.692s 00:02:14.791 16:51:14 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:14.791 16:51:14 make -- common/autotest_common.sh@10 -- $ set +x 00:02:14.791 ************************************ 00:02:14.791 END TEST make 00:02:14.791 ************************************ 00:02:14.791 16:51:14 -- common/autotest_common.sh@1142 -- $ return 0 00:02:14.791 16:51:14 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:14.791 16:51:14 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:14.791 16:51:14 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:14.791 16:51:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:14.791 16:51:14 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:14.791 16:51:14 -- pm/common@44 -- $ pid=912928 00:02:14.791 16:51:14 -- pm/common@50 -- $ kill -TERM 912928 00:02:14.791 16:51:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:14.791 16:51:14 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:14.791 16:51:14 -- pm/common@44 -- $ pid=912930 00:02:14.791 16:51:14 -- pm/common@50 -- $ kill -TERM 912930 00:02:14.791 16:51:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:14.791 16:51:14 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:14.791 16:51:14 -- pm/common@44 -- $ pid=912932 00:02:14.791 16:51:14 -- pm/common@50 -- $ kill -TERM 912932 00:02:14.791 16:51:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:14.791 16:51:14 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:14.791 16:51:14 -- pm/common@44 -- $ pid=912960 00:02:14.791 16:51:14 -- pm/common@50 -- $ sudo -E kill -TERM 912960 00:02:15.049 16:51:14 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:15.049 16:51:14 -- nvmf/common.sh@7 -- # uname -s 00:02:15.049 16:51:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:15.049 16:51:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:15.049 16:51:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:15.049 16:51:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:15.049 16:51:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:15.049 16:51:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:15.049 16:51:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:15.049 16:51:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:15.049 16:51:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:15.049 16:51:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:15.049 16:51:14 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:02:15.049 16:51:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:02:15.049 16:51:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:15.049 16:51:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:15.049 16:51:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:15.049 16:51:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:15.049 16:51:14 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:15.049 16:51:14 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:15.049 16:51:14 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:15.049 16:51:14 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:15.049 16:51:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:15.049 16:51:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:15.050 16:51:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:15.050 16:51:14 -- paths/export.sh@5 -- # export PATH 00:02:15.050 16:51:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:15.050 16:51:14 -- nvmf/common.sh@47 -- # : 0 00:02:15.050 16:51:14 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:15.050 16:51:14 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:15.050 16:51:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:15.050 16:51:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:15.050 16:51:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:15.050 16:51:14 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:15.050 16:51:14 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:15.050 16:51:14 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:15.050 16:51:14 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:15.050 16:51:14 -- spdk/autotest.sh@32 -- # uname -s 00:02:15.050 16:51:14 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:15.050 16:51:14 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:15.050 16:51:14 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:15.050 16:51:14 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:15.050 16:51:14 -- spdk/autotest.sh@40 -- # echo 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:15.050 16:51:14 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:15.050 16:51:14 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:15.050 16:51:14 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:15.050 16:51:14 -- spdk/autotest.sh@48 -- # udevadm_pid=969008 00:02:15.050 16:51:14 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:15.050 16:51:14 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:15.050 16:51:14 -- pm/common@17 -- # local monitor 00:02:15.050 16:51:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:15.050 16:51:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:15.050 16:51:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:15.050 16:51:14 -- pm/common@21 -- # date +%s 00:02:15.050 16:51:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:15.050 16:51:14 -- pm/common@21 -- # date +%s 00:02:15.050 16:51:14 -- pm/common@25 -- # sleep 1 00:02:15.050 16:51:14 -- pm/common@21 -- # date +%s 00:02:15.050 16:51:14 -- pm/common@21 -- # date +%s 00:02:15.050 16:51:14 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720795874 00:02:15.050 16:51:14 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720795874 00:02:15.050 16:51:14 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720795874 00:02:15.050 16:51:14 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720795874 00:02:15.050 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720795874_collect-vmstat.pm.log 00:02:15.050 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720795874_collect-cpu-load.pm.log 00:02:15.050 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720795874_collect-cpu-temp.pm.log 00:02:15.050 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720795874_collect-bmc-pm.bmc.pm.log 00:02:15.985 16:51:15 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:15.985 16:51:15 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:15.985 16:51:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:15.985 16:51:15 -- common/autotest_common.sh@10 -- # set +x 00:02:15.985 16:51:15 -- spdk/autotest.sh@59 -- # create_test_list 00:02:15.985 16:51:15 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:15.985 16:51:15 -- common/autotest_common.sh@10 -- # set +x 00:02:15.985 16:51:15 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:15.985 16:51:15 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:15.985 16:51:15 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
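The four "Redirecting to ..." lines above show the pm collectors (collect-cpu-load, collect-vmstat, collect-cpu-temp, collect-bmc-pm) being started with an output directory and a monitor.autotest.sh.<epoch> log prefix; the stop_monitor_resources calls near the top of this section later read the matching collect-*.pid files under .../output/power and send SIGTERM to each recorded PID. A minimal bash sketch of that start/record-PID/stop pattern, assuming a throwaway collector loop and a hypothetical run_monitor helper rather than the real scripts/perf/pm tools:

    #!/usr/bin/env bash
    # Illustrative sketch only: start a background collector, record its PID,
    # and stop it with SIGTERM later. Mirrors the collect-*.pid / kill -TERM
    # pattern visible in this trace; not the actual SPDK pm scripts.
    out=/tmp/power                      # assumed scratch dir; the job itself uses .../output/power
    prefix="monitor.demo.$(date +%s)"
    mkdir -p "$out"

    run_monitor() {                     # hypothetical helper name
        local name=$1
        ( while :; do date; sleep 1; done ) > "$out/${prefix}_${name}.pm.log" 2>&1 &
        echo $! > "$out/${name}.pid"    # PID file later consumed by the stop phase
    }

    run_monitor collect-cpu-load
    sleep 5
    kill -TERM "$(cat "$out/collect-cpu-load.pid")"   # same signal stop_monitor_resources sends

Because the .pid files persist in the shared output directory, the stop phase can find and terminate the collectors long after this stage of the job has moved on.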
00:02:15.985 16:51:15 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:15.985 16:51:15 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:15.985 16:51:15 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:15.985 16:51:15 -- common/autotest_common.sh@1455 -- # uname 00:02:15.985 16:51:15 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:15.985 16:51:15 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:15.985 16:51:15 -- common/autotest_common.sh@1475 -- # uname 00:02:15.985 16:51:15 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:15.985 16:51:15 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:15.985 16:51:15 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:15.985 16:51:15 -- spdk/autotest.sh@72 -- # hash lcov 00:02:15.985 16:51:15 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:15.985 16:51:15 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:15.985 --rc lcov_branch_coverage=1 00:02:15.985 --rc lcov_function_coverage=1 00:02:15.985 --rc genhtml_branch_coverage=1 00:02:15.985 --rc genhtml_function_coverage=1 00:02:15.985 --rc genhtml_legend=1 00:02:15.985 --rc geninfo_all_blocks=1 00:02:15.985 ' 00:02:15.985 16:51:15 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:15.985 --rc lcov_branch_coverage=1 00:02:15.985 --rc lcov_function_coverage=1 00:02:15.985 --rc genhtml_branch_coverage=1 00:02:15.985 --rc genhtml_function_coverage=1 00:02:15.985 --rc genhtml_legend=1 00:02:15.985 --rc geninfo_all_blocks=1 00:02:15.985 ' 00:02:15.985 16:51:15 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:15.985 --rc lcov_branch_coverage=1 00:02:15.985 --rc lcov_function_coverage=1 00:02:15.985 --rc genhtml_branch_coverage=1 00:02:15.985 --rc genhtml_function_coverage=1 00:02:15.985 --rc genhtml_legend=1 00:02:15.985 --rc geninfo_all_blocks=1 00:02:15.985 --no-external' 00:02:15.985 16:51:15 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:15.985 --rc lcov_branch_coverage=1 00:02:15.985 --rc lcov_function_coverage=1 00:02:15.985 --rc genhtml_branch_coverage=1 00:02:15.985 --rc genhtml_function_coverage=1 00:02:15.985 --rc genhtml_legend=1 00:02:15.985 --rc geninfo_all_blocks=1 00:02:15.985 --no-external' 00:02:15.986 16:51:15 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:16.243 lcov: LCOV version 1.14 00:02:16.243 16:51:15 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:31.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:31.144 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:46.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:46.036 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:46.036 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:46.036 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:46.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:46.036 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:46.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:46.036 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:46.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:46.036 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:46.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:46.036 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:46.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:46.036 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:46.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:46.036 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:46.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:46.036 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:46.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:46.036 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:46.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:46.036 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:46.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:46.036 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:46.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:46.036 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:46.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:46.036 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:46.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:46.036 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:46.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:46.036 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:46.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:46.036 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:46.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:46.036 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:46.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:46.036 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:46.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:46.036 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:46.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:46.036 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:46.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:46.036 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:46.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:46.036 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:46.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:46.036 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:46.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:46.036 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:46.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:46.036 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:46.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:46.036 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:46.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:46.036 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:46.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:46.036 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:46.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:46.036 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:46.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:46.036 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:46.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:46.036 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:46.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:46.036 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:46.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:46.036 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:46.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:46.036 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:46.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:46.036 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:46.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:46.036 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:46.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:46.036 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:46.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:46.036 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:46.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:46.036 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:46.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:46.036 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:46.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:46.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:46.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:46.037 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:46.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:46.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:46.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:46.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:46.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:46.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:46.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:46.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:46.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:46.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:46.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:46.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:46.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:46.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:46.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:46.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:46.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:46.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:46.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:46.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:46.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:46.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:46.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:46.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:46.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:46.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:46.037 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:46.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:46.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:46.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:46.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:46.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:46.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:46.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:46.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:46.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:46.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:46.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:46.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:46.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:46.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:46.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:46.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:46.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:46.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:46.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:46.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:46.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:46.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:46.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:46.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:46.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:46.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:46.037 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:46.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:46.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:46.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:46.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:46.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:46.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:46.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:46.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:46.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:46.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:46.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:46.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:46.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:46.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:46.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:46.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:46.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:46.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:46.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:46.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:46.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:46.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:46.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:46.038 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:46.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:46.038 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:46.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:46.038 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:46.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:46.038 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:46.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:46.038 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:46.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:46.038 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:46.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:46.038 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:49.355 16:51:48 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:49.355 16:51:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:49.355 16:51:48 -- common/autotest_common.sh@10 -- # set +x 00:02:49.355 16:51:48 -- spdk/autotest.sh@91 -- # rm -f 00:02:49.355 16:51:48 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:50.293 0000:82:00.0 (8086 0a54): Already using the nvme driver 00:02:50.293 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:02:50.293 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:02:50.550 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:02:50.550 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:02:50.550 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:02:50.550 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:02:50.550 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:02:50.550 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:02:50.550 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:02:50.550 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:02:50.550 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:02:50.550 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:02:50.550 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:02:50.550 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:02:50.550 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:02:50.550 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:02:50.826 16:51:50 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:50.826 16:51:50 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:50.826 16:51:50 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:50.826 16:51:50 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:50.826 16:51:50 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:50.826 16:51:50 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:50.826 16:51:50 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:50.826 16:51:50 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:50.826 16:51:50 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:50.826 16:51:50 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:50.826 
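The get_zoned_devs trace above reduces to one sysfs check per NVMe block device: read /sys/block/<dev>/queue/zoned, treat "none" as a conventional namespace, and count anything else as zoned (here nvme0n1 reports none, so the zoned count tested by (( 0 > 0 )) stays at zero). A self-contained sketch of the same sysfs check, illustrative rather than the common.sh helper itself:

    #!/usr/bin/env bash
    # Report zoned NVMe namespaces by reading the sysfs "zoned" attribute,
    # mirroring the [[ none != none ]] comparison in the trace above.
    zoned=0
    for sys in /sys/block/nvme*; do
        [[ -e $sys/queue/zoned ]] || continue
        mode=$(cat "$sys/queue/zoned")
        if [[ $mode != none ]]; then
            echo "${sys##*/}: zoned ($mode)"
            zoned=$((zoned + 1))
        fi
    done
    echo "zoned devices found: $zoned"

On this node every namespace reports none, which is why no device is excluded before the GPT probe and dd wipe that follow.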
16:51:50 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:50.826 16:51:50 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:50.826 16:51:50 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:50.826 16:51:50 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:50.826 16:51:50 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:50.826 No valid GPT data, bailing 00:02:50.826 16:51:50 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:50.826 16:51:50 -- scripts/common.sh@391 -- # pt= 00:02:50.826 16:51:50 -- scripts/common.sh@392 -- # return 1 00:02:50.826 16:51:50 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:50.826 1+0 records in 00:02:50.826 1+0 records out 00:02:50.826 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00378984 s, 277 MB/s 00:02:50.826 16:51:50 -- spdk/autotest.sh@118 -- # sync 00:02:50.826 16:51:50 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:50.826 16:51:50 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:50.826 16:51:50 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:52.764 16:51:52 -- spdk/autotest.sh@124 -- # uname -s 00:02:52.764 16:51:52 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:52.764 16:51:52 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:52.764 16:51:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:52.764 16:51:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:52.764 16:51:52 -- common/autotest_common.sh@10 -- # set +x 00:02:52.764 ************************************ 00:02:52.764 START TEST setup.sh 00:02:52.764 ************************************ 00:02:52.764 16:51:52 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:52.764 * Looking for test storage... 00:02:52.764 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:52.764 16:51:52 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:52.764 16:51:52 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:52.764 16:51:52 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:52.764 16:51:52 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:52.764 16:51:52 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:52.764 16:51:52 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:52.764 ************************************ 00:02:52.764 START TEST acl 00:02:52.764 ************************************ 00:02:52.764 16:51:52 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:52.764 * Looking for test storage... 
00:02:53.020 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:53.020 16:51:52 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:53.020 16:51:52 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:53.020 16:51:52 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:53.020 16:51:52 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:53.020 16:51:52 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:53.020 16:51:52 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:53.020 16:51:52 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:53.021 16:51:52 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:53.021 16:51:52 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:53.021 16:51:52 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:53.021 16:51:52 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:53.021 16:51:52 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:53.021 16:51:52 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:53.021 16:51:52 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:53.021 16:51:52 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:53.021 16:51:52 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:54.394 16:51:53 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:54.394 16:51:53 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:54.394 16:51:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:54.394 16:51:53 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:54.394 16:51:53 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:54.394 16:51:53 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:55.766 Hugepages 00:02:55.766 node hugesize free / total 00:02:55.766 16:51:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:55.766 16:51:55 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:55.766 16:51:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.766 16:51:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:55.766 16:51:55 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:55.766 16:51:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.766 16:51:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:55.766 16:51:55 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:55.766 16:51:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.766 00:02:55.766 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:55.766 16:51:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:55.766 16:51:55 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:55.766 16:51:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.766 16:51:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:55.766 16:51:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:55.766 16:51:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:55.766 16:51:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.766 16:51:55 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:55.766 16:51:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:55.766 16:51:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:55.766 16:51:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.766 16:51:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:55.766 16:51:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:55.766 16:51:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:55.766 16:51:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.766 16:51:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:55.766 16:51:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:55.766 16:51:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:55.766 16:51:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.766 16:51:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:55.766 16:51:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:55.766 16:51:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:55.766 16:51:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.766 16:51:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:55.766 16:51:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:55.766 16:51:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:55.766 16:51:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.766 16:51:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:55.766 16:51:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:55.766 16:51:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:55.766 16:51:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.766 16:51:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:55.766 16:51:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:55.766 16:51:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:55.766 16:51:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.766 16:51:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:55.766 16:51:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:55.766 16:51:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:55.766 16:51:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.766 16:51:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:55.766 16:51:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:55.767 16:51:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:55.767 16:51:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.767 16:51:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:55.767 16:51:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:55.767 16:51:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:55.767 16:51:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.767 16:51:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:55.767 16:51:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:55.767 16:51:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:55.767 16:51:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.767 16:51:55 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:55.767 16:51:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:55.767 16:51:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:55.767 16:51:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.767 16:51:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:55.767 16:51:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:55.767 16:51:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:55.767 16:51:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.767 16:51:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:55.767 16:51:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:55.767 16:51:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:55.767 16:51:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.767 16:51:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:55.767 16:51:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:55.767 16:51:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:55.767 16:51:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.767 16:51:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:82:00.0 == *:*:*.* ]] 00:02:55.767 16:51:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:55.767 16:51:55 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\2\:\0\0\.\0* ]] 00:02:55.767 16:51:55 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:55.767 16:51:55 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:55.767 16:51:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.767 16:51:55 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:55.767 16:51:55 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:55.767 16:51:55 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:55.767 16:51:55 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:55.767 16:51:55 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:55.767 ************************************ 00:02:55.767 START TEST denied 00:02:55.767 ************************************ 00:02:55.767 16:51:55 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:02:55.767 16:51:55 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:82:00.0' 00:02:55.767 16:51:55 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:55.767 16:51:55 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:82:00.0' 00:02:55.767 16:51:55 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:55.767 16:51:55 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:57.143 0000:82:00.0 (8086 0a54): Skipping denied controller at 0000:82:00.0 00:02:57.143 16:51:56 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:82:00.0 00:02:57.143 16:51:56 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:57.143 16:51:56 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:57.143 16:51:56 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:82:00.0 ]] 00:02:57.143 16:51:56 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:82:00.0/driver 00:02:57.143 16:51:56 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:57.143 16:51:56 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:57.143 16:51:56 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:57.143 16:51:56 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:57.143 16:51:56 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:59.675 00:02:59.675 real 0m4.077s 00:02:59.675 user 0m1.231s 00:02:59.675 sys 0m1.930s 00:02:59.675 16:51:59 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:59.675 16:51:59 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:59.675 ************************************ 00:02:59.675 END TEST denied 00:02:59.675 ************************************ 00:02:59.675 16:51:59 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:02:59.675 16:51:59 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:59.675 16:51:59 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:59.675 16:51:59 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:59.675 16:51:59 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:59.933 ************************************ 00:02:59.933 START TEST allowed 00:02:59.933 ************************************ 00:02:59.933 16:51:59 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:02:59.933 16:51:59 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:82:00.0 00:02:59.933 16:51:59 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:59.933 16:51:59 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:82:00.0 .*: nvme -> .*' 00:02:59.933 16:51:59 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:59.933 16:51:59 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:02.462 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:03:02.462 16:52:01 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:02.462 16:52:01 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:02.462 16:52:01 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:02.462 16:52:01 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:02.462 16:52:01 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:03.869 00:03:03.869 real 0m3.913s 00:03:03.869 user 0m1.022s 00:03:03.869 sys 0m1.760s 00:03:03.869 16:52:03 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:03.869 16:52:03 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:03.869 ************************************ 00:03:03.869 END TEST allowed 00:03:03.869 ************************************ 00:03:03.869 16:52:03 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:03.869 00:03:03.869 real 0m10.914s 00:03:03.869 user 0m3.419s 00:03:03.869 sys 0m5.545s 00:03:03.869 16:52:03 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:03.869 16:52:03 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:03.869 ************************************ 00:03:03.869 END TEST acl 00:03:03.869 ************************************ 00:03:03.869 16:52:03 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:03.869 16:52:03 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:03.869 16:52:03 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:03.869 16:52:03 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:03.869 16:52:03 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:03.869 ************************************ 00:03:03.869 START TEST hugepages 00:03:03.869 ************************************ 00:03:03.869 16:52:03 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:03.869 * Looking for test storage... 00:03:03.869 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:03.869 16:52:03 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:03.869 16:52:03 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:03.869 16:52:03 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:03.869 16:52:03 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:03.869 16:52:03 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:03.869 16:52:03 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:03.869 16:52:03 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:03.869 16:52:03 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:03.869 16:52:03 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:03.869 16:52:03 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:03.869 16:52:03 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.869 16:52:03 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:03.869 16:52:03 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:03.869 16:52:03 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.869 16:52:03 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.869 16:52:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:03.869 16:52:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:03.869 16:52:03 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 28018440 kB' 'MemAvailable: 31585320 kB' 'Buffers: 2704 kB' 'Cached: 9405316 kB' 'SwapCached: 0 kB' 'Active: 6406892 kB' 'Inactive: 3505240 kB' 'Active(anon): 6017352 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 507356 kB' 'Mapped: 197048 kB' 'Shmem: 5513240 kB' 'KReclaimable: 167848 kB' 'Slab: 491212 kB' 'SReclaimable: 167848 kB' 'SUnreclaim: 323364 kB' 'KernelStack: 12416 kB' 'PageTables: 8052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28304788 kB' 'Committed_AS: 7130928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195600 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1406556 kB' 'DirectMap2M: 12144640 kB' 'DirectMap1G: 38797312 kB'
[00:03:03.869-00:03:03.871 16:52:03 setup.sh.hugepages, setup/common.sh@31-32: IFS=': ' read/continue over every /proc/meminfo key from MemTotal through HugePages_Surp; none matches Hugepagesize]
00:03:03.871 16:52:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:03.871 16:52:03 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:03:03.871 16:52:03 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:03:03.871 16:52:03 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:03:03.871 16:52:03 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:03:03.871 16:52:03 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:03:03.871 16:52:03 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:03:03.871 16:52:03 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:03:03.871 16:52:03 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:03:03.871 16:52:03 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:03:03.871 16:52:03 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:03:03.871 16:52:03 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:03:03.871 16:52:03 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:03.871 16:52:03 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:03:03.871 16:52:03 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:03.871 16:52:03 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:03.871 16:52:03 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:03.871 16:52:03 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:03.871 16:52:03 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:03:03.871 16:52:03 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:03:03.871 16:52:03 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:03.871 16:52:03 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:03.871 16:52:03 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:03.871 16:52:03 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:03.871 16:52:03 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:03.871 16:52:03 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:03.871 16:52:03 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:03.871 16:52:03 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:03.871 16:52:03 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:03.871 16:52:03 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
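The clear_hp pass just traced walks every hugepage-size directory under each NUMA node and echoes 0 into it so the test starts from a clean slate. A minimal stand-alone sketch of that idea (the nr_hugepages file as the write target is my assumption; bash xtrace does not show redirections):

    #!/usr/bin/env bash
    # Reset per-node hugepage reservations to zero before a test run (run as root).
    # Assumption: each hugepages-<size> directory exposes an nr_hugepages file.
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"   # drop any pages still reserved on this node/size
        done
    done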
00:03:03.871 16:52:03 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:03.871 16:52:03 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:03.871 16:52:03 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:03:03.871 16:52:03 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:03.871 16:52:03 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:03.871 16:52:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:03.871 ************************************
00:03:03.871 START TEST default_setup
00:03:03.871 ************************************
00:03:03.871 16:52:03 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup
00:03:03.871 16:52:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:03:03.871 16:52:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:03:03.871 16:52:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:03.871 16:52:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:03:03.871 16:52:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:03.871 16:52:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:03:03.871 16:52:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:03.871 16:52:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:03.871 16:52:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:03.871 16:52:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:03.871 16:52:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:03:03.871 16:52:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:03.871 16:52:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:03.871 16:52:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:03.871 16:52:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:03.871 16:52:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:03.871 16:52:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:03.871 16:52:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:03.871 16:52:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:03:03.871 16:52:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:03:03.871 16:52:03 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:03:03.871 16:52:03 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
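Before the setup.sh output below, the sizing just traced is worth spelling out: get_test_nr_hugepages was called with 2097152 (kB) and node 0, and with the 2048 kB Hugepagesize read earlier that becomes nr_hugepages=1024, all of it assigned to node 0 (nodes_test[0]=1024). A rough sketch of the same arithmetic, with my own variable names rather than the script's:

    # Why the trace shows nr_hugepages=1024 (sketch, not the script's code).
    size_kb=2097152            # requested hugepage pool in kB (2 GiB)
    hugepagesize_kb=2048       # Hugepagesize reported by /proc/meminfo
    nr_hugepages=$(( size_kb / hugepagesize_kb ))
    echo "node 0 gets $nr_hugepages hugepages"   # prints 1024, matching the trace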
00:03:05.247 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:05.247 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:05.247 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:05.247 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:05.247 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:05.247 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:05.247 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:05.247 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:05.247 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:05.247 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:05.247 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:05.247 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:05.247 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:05.247 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:05.247 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:05.247 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:06.184 0000:82:00.0 (8086 0a54): nvme -> vfio-pci
00:03:06.184 16:52:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:03:06.184 16:52:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:03:06.184 16:52:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:03:06.184 16:52:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:03:06.184 16:52:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:03:06.184 16:52:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:03:06.184 16:52:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:03:06.184 16:52:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:06.184 16:52:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:06.184 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:06.184 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:06.184 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:06.184 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:06.184 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:06.184 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:06.184 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:06.184 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:06.184 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:06.184 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:06.184 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:06.184 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 30128820 kB' 'MemAvailable: 33695664 kB' 'Buffers: 2704 kB' 'Cached: 9405400 kB' 'SwapCached: 0 kB' 'Active: 6424444 kB' 'Inactive: 3505240 kB' 'Active(anon): 6034904 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524744 kB' 'Mapped: 197176 kB' 'Shmem: 5513324 kB' 'KReclaimable: 167776 kB' 'Slab: 490512 kB' 'SReclaimable: 167776 kB' 'SUnreclaim: 322736 kB' 'KernelStack: 12272 kB' 'PageTables: 7604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7148200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195712 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1406556 kB' 'DirectMap2M: 12144640 kB' 'DirectMap1G: 38797312 kB'
[00:03:06.446-00:03:06.448 16:52:05 setup.sh.hugepages.default_setup, setup/common.sh@31-32: IFS=': ' read/continue over every /proc/meminfo key from MemTotal through HardwareCorrupted; none matches AnonHugePages]
00:03:06.448 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:06.448 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:06.448 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:06.448 16:52:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
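Each get_meminfo call above follows the same pattern: split every /proc/meminfo line on ': ', skip keys until the requested one, echo its value, return. A stripped-down sketch of that lookup (the function name is mine, not the script's):

    # Minimal /proc/meminfo lookup in the style traced above.
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip keys until the requested one
            echo "$val"                        # the numeric value; the "kB" unit lands in $_
            return 0
        done < /proc/meminfo
        return 1
    }
    # e.g.: free_hp=$(get_meminfo_value HugePages_Free)   # -> 1024 in this run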
00:03:06.448 16:52:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[00:03:06.448 16:52:05 setup.sh.hugepages.default_setup, setup/common.sh@17-31: same get_meminfo set-up as above, now with get=HugePages_Surp -- node unset, mem_f=/proc/meminfo, per-node meminfo absent, mapfile -t mem, IFS=': ' read loop restarted]
00:03:06.448 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 30131572 kB' 'MemAvailable: 33698416 kB' 'Buffers: 2704 kB' 'Cached: 9405404 kB' 'SwapCached: 0 kB' 'Active: 6424216 kB' 'Inactive: 3505240 kB' 'Active(anon): 6034676 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524568 kB' 'Mapped: 197076 kB' 'Shmem: 5513328 kB' 'KReclaimable: 167776 kB' 'Slab: 490504 kB' 'SReclaimable: 167776 kB' 'SUnreclaim: 322728 kB' 'KernelStack: 12416 kB' 'PageTables: 7880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7148220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195680 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1406556 kB' 'DirectMap2M: 12144640 kB' 'DirectMap1G: 38797312 kB'
[00:03:06.448-00:03:06.449 16:52:05 setup.sh.hugepages.default_setup, setup/common.sh@31-32: IFS=': ' read/continue over every /proc/meminfo key from MemTotal through HugePages_Rsvd; none matches HugePages_Surp]
00:03:06.449 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:06.449 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:06.449 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:06.449 16:52:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 30131908 kB' 'MemAvailable: 33698752 kB' 'Buffers: 2704 kB' 'Cached: 9405420 kB' 'SwapCached: 0 kB' 'Active: 6424140 kB' 'Inactive: 3505240 kB' 'Active(anon): 6034600 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524468 kB' 'Mapped: 197076 kB' 'Shmem: 5513344 kB' 'KReclaimable: 167776 kB' 'Slab: 490556 kB' 'SReclaimable: 167776 kB' 'SUnreclaim: 322780 kB' 'KernelStack: 12384 kB' 'PageTables: 7824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7148240 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195712 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1406556 kB' 'DirectMap2M: 12144640 kB' 'DirectMap1G: 38797312 kB' 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# IFS=': ' 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.450 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.451 
16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:06.451 nr_hugepages=1024 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:06.451 resv_hugepages=0 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:06.451 surplus_hugepages=0 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:06.451 anon_hugepages=0 00:03:06.451 16:52:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 30132016 
kB' 'MemAvailable: 33698860 kB' 'Buffers: 2704 kB' 'Cached: 9405444 kB' 'SwapCached: 0 kB' 'Active: 6424436 kB' 'Inactive: 3505240 kB' 'Active(anon): 6034896 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524720 kB' 'Mapped: 197076 kB' 'Shmem: 5513368 kB' 'KReclaimable: 167776 kB' 'Slab: 490556 kB' 'SReclaimable: 167776 kB' 'SUnreclaim: 322780 kB' 'KernelStack: 12368 kB' 'PageTables: 7792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7147896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195680 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1406556 kB' 'DirectMap2M: 12144640 kB' 'DirectMap1G: 38797312 kB' 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.452 16:52:05 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.452 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.453 16:52:05 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:06.453 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:06.454 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 20192724 kB' 'MemUsed: 4379632 kB' 'SwapCached: 0 kB' 'Active: 1635332 kB' 'Inactive: 72492 kB' 'Active(anon): 1506064 kB' 'Inactive(anon): 0 kB' 'Active(file): 129268 kB' 'Inactive(file): 72492 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1370396 kB' 'Mapped: 68088 kB' 'AnonPages: 340580 kB' 'Shmem: 1168636 kB' 'KernelStack: 7400 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 51036 kB' 'Slab: 201112 kB' 'SReclaimable: 51036 kB' 'SUnreclaim: 150076 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:06.454 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:06.454 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:03:06.454 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:06.454 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[identical xtrace elided for the remaining node-meminfo keys: MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total and HugePages_Free are each compared against HugePages_Surp and skipped with 'continue']
00:03:06.455 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:06.455 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:06.455 16:52:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:06.455 16:52:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:06.455 16:52:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:06.455 16:52:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:06.455 16:52:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:06.455 16:52:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:06.455 node0=1024 expecting 1024
00:03:06.455 16:52:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:06.455
00:03:06.455 real 0m2.480s
00:03:06.455 user 0m0.642s
00:03:06.455 sys 0m0.968s
00:03:06.455 16:52:05 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:06.455 16:52:05 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:03:06.455 ************************************
00:03:06.455 END TEST default_setup
00:03:06.455 ************************************
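The get_meminfo calls traced above and below are all the same small field scanner: pick /proc/meminfo (or a node's meminfo file when a node id is supplied), split each line on ': ', and skip keys with 'continue' until the requested one is found, then print its value. A minimal stand-alone sketch of that idea in bash follows; the helper name get_meminfo_sketch and the sed-based "Node N" prefix handling are illustrative assumptions, not a copy of the SPDK setup/common.sh implementation.

    # Sketch only: same scan pattern as the xtrace above.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node statistics live in sysfs; lines there carry a "Node N " prefix.
        [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # not the key we want: keep scanning
            echo "$val"                        # numeric value, e.g. 1024 or 0
            return 0
        done < <(sed 's/^Node [0-9]* *//' "$mem_f")
        return 1
    }
    # Example (illustrative): get_meminfo_sketch HugePages_Surp 0   -> prints 0 on this host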
00:03:06.455 16:52:06 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:06.455 16:52:06 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:06.455 16:52:06 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:06.455 16:52:06 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:06.455 16:52:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:06.455 ************************************
00:03:06.455 START TEST per_node_1G_alloc
00:03:06.455 ************************************
00:03:06.455 16:52:06 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:03:06.455 16:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:03:06.455 16:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:06.455 16:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:06.455 16:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:06.455 16:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:03:06.455 16:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:06.455 16:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:06.455 16:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:06.455 16:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:06.455 16:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:06.455 16:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:06.455 16:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:06.455 16:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:06.455 16:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:06.455 16:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:06.455 16:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:06.455 16:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:06.455 16:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:06.455 16:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:06.455 16:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:06.455 16:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:06.455 16:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:06.455 16:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:06.455 16:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:06.455 16:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:03:06.455 16:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:06.455 16:52:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:07.835 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:07.835 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:07.835 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:07.835 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:07.835 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:07.835 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:07.835 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:07.835 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:07.835 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:07.835 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:07.835 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:07.835 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:07.835 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:07.835 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:07.835 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:07.835 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:07.835 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
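NRHUGE=512 with HUGENODE=0,1 asks scripts/setup.sh for 512 default-size (2048 kB) hugepages on each of nodes 0 and 1, which is the 1024-page total reported in the meminfo snapshots that follow. A rough illustration of applying such a per-node request through the standard sysfs interface is sketched below; the loop is an assumption-laden stand-in, not the SPDK setup script itself.

    # Illustrative only: per-node hugepage request in the spirit of NRHUGE/HUGENODE.
    NRHUGE=${NRHUGE:-512}
    HUGENODE=${HUGENODE:-0,1}
    IFS=',' read -ra nodes <<< "$HUGENODE"
    for node in "${nodes[@]}"; do
        sysfs=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
        echo "$NRHUGE" | sudo tee "$sysfs" > /dev/null   # request 512 pages on this node
        echo "node$node requested=$NRHUGE allocated=$(cat "$sysfs")"
    done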
00:03:07.835 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:03:07.835 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:07.835 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:07.835 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:07.835 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:07.835 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:07.835 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:07.835 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:07.835 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:07.835 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:07.835 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:07.835 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:07.835 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:07.835 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:07.835 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:07.835 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:07.835 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:07.835 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:07.835 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:07.835 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:07.835 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:07.835 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 30139272 kB' 'MemAvailable: 33706116 kB' 'Buffers: 2704 kB' 'Cached: 9405520 kB' 'SwapCached: 0 kB' 'Active: 6425244 kB' 'Inactive: 3505240 kB' 'Active(anon): 6035704 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525496 kB' 'Mapped: 197224 kB' 'Shmem: 5513444 kB' 'KReclaimable: 167776 kB' 'Slab: 490688 kB' 'SReclaimable: 167776 kB' 'SUnreclaim: 322912 kB' 'KernelStack: 12384 kB' 'PageTables: 7840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7148448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195680 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1406556 kB' 'DirectMap2M: 12144640 kB' 'DirectMap1G: 38797312 kB'
[identical xtrace elided: every key of the snapshot above from MemTotal through HardwareCorrupted is compared against AnonHugePages and skipped with 'continue']
00:03:07.836 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:07.836 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:07.836 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:07.836 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
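A quick consistency check on the snapshot above: 1024 hugepages at the reported Hugepagesize of 2048 kB account exactly for the Hugetlb figure.

    # 1024 pages * 2048 kB/page = 2097152 kB, matching 'Hugetlb: 2097152 kB'
    echo $(( 1024 * 2048 ))   # -> 2097152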
00:03:07.836 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:07.836 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:07.836 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:07.836 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:07.836 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:07.836 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:07.836 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:07.836 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:07.836 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:07.836 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:07.836 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:07.836 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:07.837 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 30139580 kB' 'MemAvailable: 33706424 kB' 'Buffers: 2704 kB' 'Cached: 9405524 kB' 'SwapCached: 0 kB' 'Active: 6424632 kB' 'Inactive: 3505240 kB' 'Active(anon): 6035092 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524904 kB' 'Mapped: 197196 kB' 'Shmem: 5513448 kB' 'KReclaimable: 167776 kB' 'Slab: 490688 kB' 'SReclaimable: 167776 kB' 'SUnreclaim: 322912 kB' 'KernelStack: 12400 kB' 'PageTables: 7804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7148468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195664 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1406556 kB' 'DirectMap2M: 12144640 kB' 'DirectMap1G: 38797312 kB'
[identical xtrace elided: every key of the snapshot above from MemTotal through HugePages_Rsvd is compared against HugePages_Surp and skipped with 'continue']
00:03:07.838 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:07.838 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:07.838 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
setup/hugepages.sh@99 -- # surp=0 00:03:07.838 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:07.838 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:07.838 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:07.838 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:07.838 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:07.838 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.838 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:07.838 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:07.838 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.838 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.838 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.838 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.838 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 30141316 kB' 'MemAvailable: 33708160 kB' 'Buffers: 2704 kB' 'Cached: 9405524 kB' 'SwapCached: 0 kB' 'Active: 6424744 kB' 'Inactive: 3505240 kB' 'Active(anon): 6035204 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525000 kB' 'Mapped: 197084 kB' 'Shmem: 5513448 kB' 'KReclaimable: 167776 kB' 'Slab: 490728 kB' 'SReclaimable: 167776 kB' 'SUnreclaim: 322952 kB' 'KernelStack: 12400 kB' 'PageTables: 7812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7148488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195648 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1406556 kB' 'DirectMap2M: 12144640 kB' 'DirectMap1G: 38797312 kB' 00:03:07.838 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.838 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.838 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.838 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.838 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.838 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.838 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.838 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.838 
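As a quick consistency check on the snapshot just printed: 1024 hugepages at a Hugepagesize of 2048 kB account for

    1024 pages x 2048 kB/page = 2097152 kB (2 GiB)

which matches the 'Hugetlb: 2097152 kB' figure in the same dump.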
16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.838 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.838 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.838 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.838 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.838 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.839 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.839 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.839 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.839 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.839 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.839 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.839 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.839 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.839 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.839 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.839 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.839 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.839 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.839 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.839 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.839 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.839 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.839 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.839 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.839 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.839 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.839 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.839 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.839 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.839 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.839 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.839 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:07.839 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.839 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.839 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.839 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.839 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.839 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.839 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.839 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.839 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.839 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.839 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.839 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.839 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.839 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.839 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.839 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.839 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.839 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.839 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.839 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.840 16:52:07 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.840 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.103 16:52:07 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:08.103 nr_hugepages=1024 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:08.103 
resv_hugepages=0 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:08.103 surplus_hugepages=0 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:08.103 anon_hugepages=0 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 30142004 kB' 'MemAvailable: 33708848 kB' 'Buffers: 2704 kB' 'Cached: 9405564 kB' 'SwapCached: 0 kB' 'Active: 6424440 kB' 'Inactive: 3505240 kB' 'Active(anon): 6034900 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524648 kB' 'Mapped: 197084 kB' 'Shmem: 5513488 kB' 'KReclaimable: 167776 kB' 'Slab: 490728 kB' 'SReclaimable: 167776 kB' 'SUnreclaim: 322952 kB' 'KernelStack: 12400 kB' 'PageTables: 7812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7148512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195648 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1406556 kB' 'DirectMap2M: 12144640 kB' 'DirectMap1G: 38797312 kB' 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.103 16:52:07 
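At hugepages.sh@107 and @109 above, the test asserts that the kernel's HugePages_Total equals the nr_hugepages it configured plus the surplus and reserved counts it just read back (both 0 here). A small sketch of that bookkeeping, using the values from this run:

    nr_hugepages=1024 surp=0 resv=0     # values read out of the trace above
    (( 1024 == nr_hugepages + surp + resv )) || echo "hugepage accounting is off"
    (( 1024 == nr_hugepages ))           # also holds here because surp and resv are 0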
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.103 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.104 16:52:07 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.104 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:08.105 16:52:07 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.105 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 21245864 kB' 'MemUsed: 3326492 kB' 'SwapCached: 0 kB' 'Active: 1634984 kB' 'Inactive: 72492 kB' 'Active(anon): 1505716 kB' 'Inactive(anon): 0 kB' 'Active(file): 129268 kB' 'Inactive(file): 72492 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1370448 kB' 'Mapped: 68088 kB' 'AnonPages: 340192 kB' 'Shmem: 1168688 kB' 'KernelStack: 7416 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 51036 kB' 'Slab: 201188 kB' 'SReclaimable: 51036 kB' 'SUnreclaim: 150152 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
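get_nodes above enumerates /sys/devices/system/node/node*, records 512 pages per node and ends up with no_nodes=2, so the 1024 system-wide hugepages are expected to be split evenly across the two NUMA nodes; the per-node re-read that follows switches mem_f to /sys/devices/system/node/node0/meminfo and strips the "Node 0 " prefix before scanning. A hedged sketch of that per-node expectation, reusing the illustrative meminfo_value helper from earlier and the counts from this run:

    nodes_test=(512 512)   # expected hugepages per NUMA node for this 1024-page run
    for node in "${!nodes_test[@]}"; do
        got=$(meminfo_value HugePages_Total "$node")
        (( got == nodes_test[node] )) ||
            echo "node $node: HugePages_Total=$got, expected ${nodes_test[node]}"
    done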
continue 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.106 
16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.106 16:52:07 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.106 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19454316 kB' 'MemFree: 8900584 kB' 'MemUsed: 10553732 kB' 'SwapCached: 0 kB' 'Active: 4789152 kB' 'Inactive: 3432748 kB' 'Active(anon): 4528880 kB' 'Inactive(anon): 0 kB' 'Active(file): 260272 kB' 'Inactive(file): 3432748 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8037824 kB' 'Mapped: 128996 kB' 'AnonPages: 184140 kB' 'Shmem: 4344804 kB' 'KernelStack: 4984 kB' 'PageTables: 3568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 116740 kB' 'Slab: 289540 kB' 'SReclaimable: 116740 kB' 'SUnreclaim: 172800 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
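(Editor's note, not part of the captured trace: the lines above have just dumped /sys/devices/system/node/node1/meminfo and the scan that follows walks it field by field looking for HugePages_Surp. A minimal, illustrative sketch of that lookup, assuming the same /proc/meminfo layout and the "Node N " prefix on the per-node sysfs files; this is a hypothetical helper, not the repo's setup/common.sh get_meminfo:

get_meminfo_sketch() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    # prefer the per-node sysfs file when a node index is given, as the trace does
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val rest
    while read -r line; do
        line=${line#"Node $node "}              # per-node lines carry a "Node N " prefix
        IFS=': ' read -r var val rest <<< "$line"
        if [[ $var == "$get" ]]; then           # e.g. HugePages_Surp
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    echo 0                                      # field not present: mirror the trace's 'echo 0'
}

# usage: get_meminfo_sketch HugePages_Surp 1    -> 0 for node 1 in the run above)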
00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.107 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.108 16:52:07 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.108 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.109 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.109 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.109 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.109 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.109 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.109 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.109 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.109 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.109 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.109 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.109 16:52:07 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.109 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.109 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.109 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.109 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.109 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.109 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.109 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.109 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.109 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.109 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.109 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.109 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.109 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:08.109 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:08.109 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:08.109 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:08.109 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:08.109 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:08.109 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:08.109 node0=512 expecting 512 00:03:08.109 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:08.109 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:08.109 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:08.109 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:08.109 node1=512 expecting 512 00:03:08.109 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:08.109 00:03:08.109 real 0m1.550s 00:03:08.109 user 0m0.628s 00:03:08.109 sys 0m0.899s 00:03:08.109 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:08.109 16:52:07 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:08.109 ************************************ 00:03:08.109 END TEST per_node_1G_alloc 00:03:08.109 ************************************ 00:03:08.109 16:52:07 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:08.109 16:52:07 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:08.109 16:52:07 setup.sh.hugepages -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:08.109 16:52:07 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:08.109 16:52:07 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:08.109 ************************************ 00:03:08.109 START TEST even_2G_alloc 00:03:08.109 ************************************ 00:03:08.109 16:52:07 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:08.109 16:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:08.109 16:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:08.109 16:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:08.109 16:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:08.109 16:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:08.109 16:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:08.109 16:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:08.109 16:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:08.109 16:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:08.109 16:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:08.109 16:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:08.109 16:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:08.109 16:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:08.109 16:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:08.109 16:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:08.109 16:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:08.109 16:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:08.109 16:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:08.109 16:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:08.109 16:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:08.109 16:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:08.109 16:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:08.109 16:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:08.109 16:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:08.109 16:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:08.109 16:52:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:08.109 16:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:08.109 16:52:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:09.490 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:09.490 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 
00:03:09.490 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:09.490 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:09.490 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:09.490 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:09.490 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:09.490 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:09.490 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:09.490 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:09.490 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:09.490 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:09.490 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:09.490 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:09.490 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:09.490 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:09.490 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 30125692 kB' 'MemAvailable: 33692536 kB' 'Buffers: 2704 kB' 'Cached: 9405652 kB' 'SwapCached: 0 kB' 'Active: 6425060 kB' 'Inactive: 3505240 kB' 'Active(anon): 6035520 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525068 kB' 'Mapped: 197112 kB' 'Shmem: 5513576 kB' 'KReclaimable: 167776 kB' 'Slab: 490776 kB' 'SReclaimable: 167776 kB' 'SUnreclaim: 323000 kB' 'KernelStack: 12432 kB' 'PageTables: 7860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7148712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195792 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1406556 kB' 'DirectMap2M: 12144640 kB' 'DirectMap1G: 38797312 kB' 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.490 
16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.490 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
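(Editor's note, not part of the captured trace: for context on what this even_2G_alloc verification is checking, earlier in the trace the test requested 2097152 kB of 2048 kB hugepages (nr_hugepages=1024) and, with HUGE_EVEN_ALLOC=yes and two NUMA nodes, split them evenly at 512 per node. A rough sketch of that split and the per-node check it leads to, assuming the usual /sys/devices/system/node layout; illustrative only, not the repo's setup/hugepages.sh:

nr_hugepages=1024                               # 2097152 kB / 2048 kB per page
nodes=(/sys/devices/system/node/node[0-9]*)     # node0 node1 on this machine
per_node=$(( nr_hugepages / ${#nodes[@]} ))     # 512 per node for two nodes

for node in "${nodes[@]}"; do
    total=$(awk '/HugePages_Total/ {print $NF}' "$node/meminfo")
    echo "${node##*/}=$total expecting $per_node"
done

The "node0=512 expecting 512" / "node1=512 expecting 512" lines printed by the preceding per_node_1G_alloc test follow the same pattern.)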
00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@97 -- # anon=0 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.491 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 30127872 kB' 'MemAvailable: 33694716 kB' 'Buffers: 2704 kB' 'Cached: 9405656 kB' 'SwapCached: 0 kB' 'Active: 6424684 kB' 'Inactive: 3505240 kB' 'Active(anon): 6035144 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524696 kB' 'Mapped: 197092 kB' 'Shmem: 5513580 kB' 'KReclaimable: 167776 kB' 'Slab: 490776 kB' 'SReclaimable: 167776 kB' 'SUnreclaim: 323000 kB' 'KernelStack: 12400 kB' 'PageTables: 7748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7148732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195760 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1406556 kB' 'DirectMap2M: 12144640 kB' 'DirectMap1G: 38797312 kB' 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.492 16:52:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.492 16:52:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.492 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.493 16:52:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 30127876 kB' 'MemAvailable: 33694720 kB' 'Buffers: 2704 kB' 'Cached: 9405656 kB' 'SwapCached: 0 kB' 'Active: 6424340 kB' 'Inactive: 3505240 kB' 'Active(anon): 6034800 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524352 kB' 'Mapped: 197092 kB' 'Shmem: 5513580 kB' 'KReclaimable: 167776 kB' 'Slab: 490828 kB' 'SReclaimable: 167776 kB' 'SUnreclaim: 323052 kB' 'KernelStack: 12384 kB' 'PageTables: 7704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7148752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195776 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1406556 kB' 'DirectMap2M: 12144640 kB' 'DirectMap1G: 38797312 kB' 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.493 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
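The entries above show setup/common.sh choosing the input for get_meminfo: with no node argument the per-node path /sys/devices/system/node/node/meminfo does not exist, so the helper falls back to /proc/meminfo, reads it into an array with mapfile, and strips any leading "Node N " prefix before parsing. A minimal stand-alone sketch of that selection step, reconstructed from the trace rather than copied from the SPDK source (variable names and structure are an approximation):

  # hypothetical reconstruction of the file-selection step, with the node id left empty as in this run
  shopt -s extglob                                     # needed for the +([0-9]) pattern below
  node=""
  mem_f=/proc/meminfo                                  # default: system-wide counters
  if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo   # per-node counters when a node id is given
  fi
  mapfile -t mem < "$mem_f"                            # one "Key: value" line per array element
  mem=("${mem[@]#Node +([0-9]) }")                     # drop the "Node N " prefix used by per-node files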
00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.494 16:52:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.494 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.495 16:52:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.495 16:52:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:09.495 nr_hugepages=1024 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:09.495 resv_hugepages=0 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:09.495 surplus_hugepages=0 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:09.495 anon_hugepages=0 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
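At this point in the trace get_meminfo has returned 0 for both HugePages_Surp and HugePages_Rsvd, and hugepages.sh checks the accounting (1024 == nr_hugepages + surp + resv) before re-reading HugePages_Total. A minimal sketch of that lookup-and-check, reconstructed from the trace and not taken verbatim from the SPDK scripts (the traced helper walks the array element by element; the compact loop below is an equivalent form, and the lookup name is hypothetical):

  # hypothetical sketch: find one key in the parsed meminfo array, then verify the hugepage accounting
  lookup() {                                           # usage: lookup <Key> "${mem[@]}"
    local get=$1 var val _ line; shift
    for line in "$@"; do
      IFS=': ' read -r var val _ <<< "$line"
      [[ $var == "$get" ]] || continue                 # skip every non-matching key (the bulk of this trace)
      echo "$val"
      return 0
    done
  }
  nr_hugepages=1024                                    # value requested earlier in the even_2G_alloc test
  surp=$(lookup HugePages_Surp "${mem[@]}")            # 0 in this run
  resv=$(lookup HugePages_Rsvd "${mem[@]}")            # 0 in this run
  total=$(lookup HugePages_Total "${mem[@]}")          # 1024 in this run
  (( total == nr_hugepages + surp + resv ))            # the check the trace performs: 1024 == 1024 + 0 + 0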
00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 30127876 kB' 'MemAvailable: 33694720 kB' 'Buffers: 2704 kB' 'Cached: 9405696 kB' 'SwapCached: 0 kB' 'Active: 6424728 kB' 'Inactive: 3505240 kB' 'Active(anon): 6035188 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524704 kB' 'Mapped: 197092 kB' 'Shmem: 5513620 kB' 'KReclaimable: 167776 kB' 'Slab: 490828 kB' 'SReclaimable: 167776 kB' 'SUnreclaim: 323052 kB' 'KernelStack: 12416 kB' 'PageTables: 7808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7148776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195776 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1406556 kB' 'DirectMap2M: 12144640 kB' 'DirectMap1G: 38797312 kB' 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.495 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.496 16:52:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.496 
16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.496 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.497 16:52:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.497 
16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 21229192 kB' 'MemUsed: 3343164 kB' 'SwapCached: 0 kB' 'Active: 1635916 kB' 'Inactive: 72492 kB' 'Active(anon): 1506648 kB' 'Inactive(anon): 0 kB' 'Active(file): 129268 kB' 'Inactive(file): 72492 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1370576 kB' 'Mapped: 68088 kB' 'AnonPages: 340992 kB' 'Shmem: 1168816 kB' 'KernelStack: 7544 kB' 'PageTables: 4252 
kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 51036 kB' 'Slab: 201256 kB' 'SReclaimable: 51036 kB' 'SUnreclaim: 150220 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.497 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.498 
16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.498 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.757 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.757 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.757 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.757 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.757 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.757 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:09.757 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:09.757 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:09.757 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:09.757 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:09.757 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:09.757 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:09.757 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:09.757 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:09.757 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:09.757 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.757 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:09.757 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:09.757 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.757 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.757 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.757 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.757 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19454316 kB' 'MemFree: 8898432 kB' 'MemUsed: 10555884 kB' 'SwapCached: 0 kB' 'Active: 4789088 kB' 'Inactive: 3432748 kB' 'Active(anon): 4528816 kB' 'Inactive(anon): 0 kB' 'Active(file): 260272 kB' 'Inactive(file): 3432748 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8037828 kB' 'Mapped: 129028 kB' 'AnonPages: 184036 kB' 'Shmem: 4344808 kB' 'KernelStack: 4952 kB' 'PageTables: 3488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 
'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 116740 kB' 'Slab: 289572 kB' 'SReclaimable: 116740 kB' 'SUnreclaim: 172832 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:09.757 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.757 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.757 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.757 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.757 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.757 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.757 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.757 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.757 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.757 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.757 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.757 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.757 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.757 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.757 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.757 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.757 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.757 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.757 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.758 16:52:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.758 16:52:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:09.758 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:09.759 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:09.759 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:09.759 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:09.759 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:09.759 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:09.759 node0=512 expecting 512 00:03:09.759 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:09.759 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:09.759 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:09.759 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:09.759 node1=512 expecting 512 00:03:09.759 16:52:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:09.759 00:03:09.759 real 0m1.560s 00:03:09.759 user 0m0.686s 00:03:09.759 sys 0m0.853s 00:03:09.759 16:52:09 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:09.759 16:52:09 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:09.759 ************************************ 00:03:09.759 END TEST even_2G_alloc 00:03:09.759 ************************************ 00:03:09.759 16:52:09 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:09.759 16:52:09 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:09.759 16:52:09 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:09.759 16:52:09 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:09.759 16:52:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:09.759 ************************************ 00:03:09.759 START TEST odd_alloc 
00:03:09.759 ************************************ 00:03:09.759 16:52:09 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:03:09.759 16:52:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:09.759 16:52:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:09.759 16:52:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:09.759 16:52:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:09.759 16:52:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:09.759 16:52:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:09.759 16:52:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:09.759 16:52:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:09.759 16:52:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:09.759 16:52:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:09.759 16:52:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:09.759 16:52:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:09.759 16:52:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:09.759 16:52:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:09.759 16:52:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:09.759 16:52:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:09.759 16:52:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:09.759 16:52:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:09.759 16:52:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:09.759 16:52:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:09.759 16:52:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:09.759 16:52:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:09.759 16:52:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:09.759 16:52:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:09.759 16:52:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:09.759 16:52:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:09.759 16:52:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:09.759 16:52:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:10.690 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:10.690 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:10.690 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:10.690 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:10.690 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:10.690 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:10.690 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:10.690 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 
00:03:10.690 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:10.690 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:10.690 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:10.690 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:10.952 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:10.952 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:10.952 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:10.952 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:10.952 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 30150116 kB' 'MemAvailable: 33716940 kB' 'Buffers: 2704 kB' 'Cached: 9405792 kB' 'SwapCached: 0 kB' 'Active: 6421972 kB' 'Inactive: 3505240 kB' 'Active(anon): 6032432 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522036 kB' 'Mapped: 196268 kB' 'Shmem: 5513716 kB' 'KReclaimable: 167736 kB' 'Slab: 490996 kB' 'SReclaimable: 167736 kB' 'SUnreclaim: 323260 kB' 'KernelStack: 12384 kB' 'PageTables: 7512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352340 kB' 'Committed_AS: 7135492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195680 kB' 
'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1406556 kB' 'DirectMap2M: 12144640 kB' 'DirectMap1G: 38797312 kB' 00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _
00:03:10.952 16:52:10 setup.sh.hugepages.odd_alloc -- [xtrace condensed: setup/common.sh@31-32 read/continue scan over meminfo keys Active(anon) .. HardwareCorrupted, none match AnonHugePages]
00:03:10.953 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:10.953 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:10.953 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:10.953 16:52:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
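Note on what this xtrace is doing: get_meminfo in setup/common.sh walks the "Key: value [kB]" lines of /proc/meminfo (or of a per-node meminfo file when a NUMA node argument is given) and prints the value of the one requested key. A minimal sketch of that pattern follows; the name get_meminfo_sketch and its exact layout are illustrative, not the SPDK helper itself.

    # Sketch only: mirrors the read/continue scan traced above, assuming every
    # meminfo line has the form "Key:   value [kB]".
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local line var val _
        # With a NUMA node argument, prefer the per-node file if it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while read -r line; do
            line=${line#Node [0-9]* }            # per-node lines carry a "Node N " prefix
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then        # e.g. AnonHugePages, HugePages_Surp
                echo "${val:-0}"                 # value only; the kB unit column is dropped
                return 0
            fi
        done < "$mem_f"
        echo 0                                   # key absent: report 0
    }

Called as, for example, anon=$(get_meminfo_sketch AnonHugePages), which is the shape of the anon=0 assignment in the trace above.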
00:03:10.953 16:52:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:10.953 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:10.953 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:10.953 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:10.953 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:10.953 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:10.953 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:10.953 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:10.953 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:10.953 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:10.953 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:10.953 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:10.954 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 30150008 kB' 'MemAvailable: 33716832 kB' 'Buffers: 2704 kB' 'Cached: 9405796 kB' 'SwapCached: 0 kB' 'Active: 6422152 kB' 'Inactive: 3505240 kB' 'Active(anon): 6032612 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522164 kB' 'Mapped: 196252 kB' 'Shmem: 5513720 kB' 'KReclaimable: 167736 kB' 'Slab: 490972 kB' 'SReclaimable: 167736 kB' 'SUnreclaim: 323236 kB' 'KernelStack: 12368 kB' 'PageTables: 7444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352340 kB' 'Committed_AS: 7135512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195664 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1406556 kB' 'DirectMap2M: 12144640 kB' 'DirectMap1G: 38797312 kB'
00:03:10.954 16:52:10 setup.sh.hugepages.odd_alloc -- [xtrace condensed: setup/common.sh@31-32 read/continue scan over meminfo keys MemTotal .. HugePages_Rsvd, none match HugePages_Surp]
00:03:10.955 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:10.955 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:10.955 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:10.955 16:52:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
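The hugepage counters being extracted one at a time here can also be pulled in a single pass for a manual cross-check; the one-liner below is only an illustration, not part of the test scripts.

    awk '/^HugePages_(Total|Free|Rsvd|Surp):/ {print $1, $2}' /proc/meminfo
    # On this run the dump above shows HugePages_Total: 1025, HugePages_Free: 1025,
    # HugePages_Rsvd: 0, HugePages_Surp: 0.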
00:03:10.955 16:52:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:10.955 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:10.955 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:10.955 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:10.955 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:10.955 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:10.955 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:10.955 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:10.955 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:10.955 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:10.955 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:10.955 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:10.955 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 30150444 kB' 'MemAvailable: 33717268 kB' 'Buffers: 2704 kB' 'Cached: 9405812 kB' 'SwapCached: 0 kB' 'Active: 6422160 kB' 'Inactive: 3505240 kB' 'Active(anon): 6032620 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522180 kB' 'Mapped: 196252 kB' 'Shmem: 5513736 kB' 'KReclaimable: 167736 kB' 'Slab: 490980 kB' 'SReclaimable: 167736 kB' 'SUnreclaim: 323244 kB' 'KernelStack: 12368 kB' 'PageTables: 7492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352340 kB' 'Committed_AS: 7135532 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195664 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1406556 kB' 'DirectMap2M: 12144640 kB' 'DirectMap1G: 38797312 kB'
00:03:10.956 16:52:10 setup.sh.hugepages.odd_alloc -- [xtrace condensed: setup/common.sh@31-32 read/continue scan over meminfo keys MemTotal .. HugePages_Free, none match HugePages_Rsvd]
00:03:10.957 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:10.957 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:10.957 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:10.957 16:52:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:10.957 16:52:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:10.957 nr_hugepages=1025
00:03:10.957 16:52:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:10.957 resv_hugepages=0
00:03:10.957 16:52:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:10.957 surplus_hugepages=0
00:03:10.957 16:52:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:10.957 anon_hugepages=0
00:03:10.957 16:52:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:10.957 16:52:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
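What the two arithmetic checks above verify: the odd request of 1025 hugepages must be fully accounted for by the configured count plus surplus and reserved pages, and, with no surplus in play, by the configured count alone. A compact sketch of that bookkeeping is below; check_odd_alloc_sketch is an illustrative name, it reuses the get_meminfo_sketch helper from earlier, and it reads nr_hugepages back from HugePages_Total purely for the sketch (the test script tracks the requested count as its own variable).

    # Sketch of the (( ... )) checks traced above; names and structure illustrative.
    check_odd_alloc_sketch() {
        local requested=1025
        local nr_hugepages surp resv
        nr_hugepages=$(get_meminfo_sketch HugePages_Total)   # 1025 in the dump above
        surp=$(get_meminfo_sketch HugePages_Surp)            # 0
        resv=$(get_meminfo_sketch HugePages_Rsvd)            # 0
        (( requested == nr_hugepages + surp + resv )) \
            && (( requested == nr_hugepages ))                # both held on this run
    }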
00:03:10.957 16:52:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:10.957 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:10.957 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:10.957 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:10.957 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:10.957 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:10.957 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:10.957 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:10.957 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:10.957 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:10.957 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:10.957 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:10.957 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 30150484 kB' 'MemAvailable: 33717308 kB' 'Buffers: 2704 kB' 'Cached: 9405832 kB' 'SwapCached: 0 kB' 'Active: 6422200 kB' 'Inactive: 3505240 kB' 'Active(anon): 6032660 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522180 kB' 'Mapped: 196252 kB' 'Shmem: 5513756 kB' 'KReclaimable: 167736 kB' 'Slab: 490984 kB' 'SReclaimable: 167736 kB' 'SUnreclaim: 323248 kB' 'KernelStack: 12368 kB' 'PageTables: 7492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352340 kB' 'Committed_AS: 7135552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195664 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1406556 kB' 'DirectMap2M: 12144640 kB' 'DirectMap1G: 38797312 kB'
00:03:10.958 16:52:10 setup.sh.hugepages.odd_alloc -- [xtrace condensed: setup/common.sh@31-32 read/continue scan over meminfo keys MemTotal .. SecPageTables, none match HugePages_Total]
00:03:11.218 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable ==
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.218 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.218 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.218 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.218 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.218 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.218 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.218 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.218 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.218 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.218 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.218 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.218 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.218 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.218 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.218 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.218 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.218 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.218 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.218 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.218 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.218 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.218 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.218 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.218 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.218 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.218 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.218 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.218 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.218 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.218 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.218 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.218 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.218 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.218 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.218 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.218 16:52:10 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.218 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.218 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.218 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.218 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.218 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.218 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.218 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.218 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.218 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.218 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 21241224 kB' 'MemUsed: 3331132 kB' 'SwapCached: 0 kB' 'Active: 1634464 kB' 'Inactive: 72492 kB' 'Active(anon): 1505196 kB' 'Inactive(anon): 0 kB' 'Active(file): 129268 kB' 'Inactive(file): 72492 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1370720 kB' 'Mapped: 68088 kB' 'AnonPages: 339452 kB' 'Shmem: 1168960 kB' 'KernelStack: 7416 kB' 'PageTables: 4144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 51036 kB' 'Slab: 201428 kB' 'SReclaimable: 51036 kB' 'SUnreclaim: 150392 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:11.219 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19454316 kB' 'MemFree: 8909036 kB' 'MemUsed: 10545280 kB' 'SwapCached: 0 kB' 'Active: 4788196 kB' 'Inactive: 3432748 kB' 'Active(anon): 4527924 kB' 'Inactive(anon): 0 kB' 'Active(file): 260272 kB' 'Inactive(file): 3432748 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8037840 kB' 'Mapped: 128164 kB' 'AnonPages: 183164 kB' 'Shmem: 4344820 kB' 'KernelStack: 4952 kB' 'PageTables: 3348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 116700 kB' 'Slab: 289556 kB' 'SReclaimable: 116700 kB' 'SUnreclaim: 172856 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.220 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.221 16:52:10 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node 
in "${!nodes_test[@]}" 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:11.221 node0=512 expecting 513 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:11.221 node1=513 expecting 512 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:11.221 00:03:11.221 real 0m1.449s 00:03:11.221 user 0m0.610s 00:03:11.221 sys 0m0.811s 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:11.221 16:52:10 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:11.221 ************************************ 00:03:11.221 END TEST odd_alloc 00:03:11.221 ************************************ 00:03:11.221 16:52:10 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:11.221 16:52:10 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:11.221 16:52:10 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:11.221 16:52:10 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:11.221 16:52:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:11.221 ************************************ 00:03:11.221 START TEST custom_alloc 00:03:11.221 ************************************ 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:11.222 16:52:10 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:11.222 16:52:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:12.627 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:12.627 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:12.627 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:12.627 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:12.627 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:12.627 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:12.627 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:12.627 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:12.627 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:12.627 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:12.627 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:12.627 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 
00:03:12.627 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:12.627 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:12.627 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:12.627 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:12.627 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:12.627 16:52:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29068040 kB' 'MemAvailable: 32634864 kB' 'Buffers: 2704 kB' 'Cached: 9405920 kB' 'SwapCached: 0 kB' 'Active: 6422152 kB' 'Inactive: 3505240 kB' 'Active(anon): 6032612 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521980 kB' 'Mapped: 196292 kB' 'Shmem: 5513844 kB' 'KReclaimable: 167736 kB' 'Slab: 490956 kB' 'SReclaimable: 167736 kB' 'SUnreclaim: 323220 kB' 'KernelStack: 12352 kB' 'PageTables: 7340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829076 kB' 'Committed_AS: 7135636 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195664 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 
kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1406556 kB' 'DirectMap2M: 12144640 kB' 'DirectMap1G: 38797312 kB' 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.628 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@25 -- # [[ -n '' ]] 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29068212 kB' 'MemAvailable: 32635036 kB' 'Buffers: 2704 kB' 'Cached: 9405924 kB' 'SwapCached: 0 kB' 'Active: 6422280 kB' 'Inactive: 3505240 kB' 'Active(anon): 6032740 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522120 kB' 'Mapped: 196264 kB' 'Shmem: 5513848 kB' 'KReclaimable: 167736 kB' 'Slab: 490956 kB' 'SReclaimable: 167736 kB' 'SUnreclaim: 323220 kB' 'KernelStack: 12400 kB' 'PageTables: 7428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829076 kB' 'Committed_AS: 7135656 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195632 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1406556 kB' 'DirectMap2M: 12144640 kB' 'DirectMap1G: 38797312 kB' 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.629 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.630 16:52:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.630 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.631 16:52:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29068632 kB' 'MemAvailable: 32635456 kB' 'Buffers: 2704 kB' 'Cached: 9405936 kB' 'SwapCached: 0 kB' 'Active: 6422228 kB' 'Inactive: 3505240 kB' 'Active(anon): 6032688 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522056 kB' 
'Mapped: 196264 kB' 'Shmem: 5513860 kB' 'KReclaimable: 167736 kB' 'Slab: 491064 kB' 'SReclaimable: 167736 kB' 'SUnreclaim: 323328 kB' 'KernelStack: 12384 kB' 'PageTables: 7392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829076 kB' 'Committed_AS: 7135676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195632 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1406556 kB' 'DirectMap2M: 12144640 kB' 'DirectMap1G: 38797312 kB' 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.631 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.632 16:52:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.632 16:52:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.632 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.633 16:52:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:12.633 nr_hugepages=1536 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:12.633 resv_hugepages=0 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:12.633 surplus_hugepages=0 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:12.633 anon_hugepages=0 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29068272 kB' 'MemAvailable: 32635096 kB' 'Buffers: 2704 kB' 'Cached: 9405964 kB' 'SwapCached: 0 kB' 'Active: 6422292 kB' 'Inactive: 3505240 kB' 'Active(anon): 6032752 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522092 kB' 'Mapped: 196264 kB' 'Shmem: 5513888 kB' 'KReclaimable: 167736 kB' 'Slab: 491064 kB' 'SReclaimable: 167736 kB' 'SUnreclaim: 323328 kB' 'KernelStack: 12400 kB' 'PageTables: 7444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829076 kB' 'Committed_AS: 7135696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195632 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1406556 kB' 'DirectMap2M: 12144640 kB' 'DirectMap1G: 38797312 kB' 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.633 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.634 16:52:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.634 16:52:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.634 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.635 16:52:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 
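At this point the trace has established the custom_alloc layout: 1536 hugepages in the global pool, split across the two NUMA nodes as 512 on node0 and 1024 on node1 (no_nodes=2, resv=0, surplus=0). A minimal standalone sketch of that per-node consistency check follows, assuming only the standard kernel interfaces /proc/meminfo and /sys/devices/system/node/node*/meminfo; it is illustrative only and is not the SPDK setup/common.sh or setup/hugepages.sh code itself.

```bash
#!/usr/bin/env bash
# Sketch: verify that per-NUMA-node HugePages_Total values add up to the
# global HugePages_Total, as the hugepages.sh trace above does for 512 + 1024 == 1536.
# Paths are standard kernel procfs/sysfs files; variable names are illustrative.

total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)

sum=0
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    # Per-node meminfo lines are prefixed, e.g. "Node 0 HugePages_Total:   512",
    # so the value is the fourth whitespace-separated field.
    per_node=$(awk '/HugePages_Total:/ {print $4}' "$node_dir/meminfo")
    echo "node${node}: ${per_node} hugepages"
    sum=$((sum + per_node))
done

echo "sum=${sum} total=${total}"
[[ $sum -eq $total ]] && echo "per-node allocation matches the global pool"
```

The trace that follows performs the analogous per-node read for HugePages_Surp, first against /sys/devices/system/node/node0/meminfo and then node1, expecting 0 surplus pages on each.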
00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 21238960 kB' 'MemUsed: 3333396 kB' 'SwapCached: 0 kB' 'Active: 1634860 kB' 'Inactive: 72492 kB' 'Active(anon): 1505592 kB' 'Inactive(anon): 0 kB' 'Active(file): 129268 kB' 'Inactive(file): 72492 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1370844 kB' 'Mapped: 68088 kB' 'AnonPages: 339672 kB' 'Shmem: 1169084 kB' 'KernelStack: 7464 kB' 'PageTables: 4140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 51036 kB' 'Slab: 201388 kB' 'SReclaimable: 51036 kB' 'SUnreclaim: 150352 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.635 16:52:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.635 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.636 16:52:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:12.636 16:52:12 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.636 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19454316 kB' 'MemFree: 7829312 kB' 'MemUsed: 11625004 kB' 'SwapCached: 0 kB' 'Active: 4787472 kB' 'Inactive: 3432748 kB' 'Active(anon): 4527200 kB' 'Inactive(anon): 0 kB' 'Active(file): 260272 kB' 'Inactive(file): 3432748 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8037844 kB' 'Mapped: 128176 kB' 'AnonPages: 182436 kB' 'Shmem: 4344824 kB' 'KernelStack: 4936 kB' 'PageTables: 3304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 116700 kB' 'Slab: 289676 kB' 'SReclaimable: 116700 kB' 'SUnreclaim: 172976 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.637 16:52:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.637 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.638 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.638 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.638 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.638 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.638 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.638 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.638 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.638 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.638 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:12.638 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.638 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.638 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.638 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:12.638 16:52:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:12.638 16:52:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:12.638 16:52:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:12.638 16:52:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:12.638 16:52:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # 
sorted_s[nodes_sys[node]]=1 00:03:12.638 16:52:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:12.638 node0=512 expecting 512 00:03:12.638 16:52:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:12.638 16:52:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:12.638 16:52:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:12.638 16:52:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:12.638 node1=1024 expecting 1024 00:03:12.638 16:52:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:12.638 00:03:12.638 real 0m1.510s 00:03:12.638 user 0m0.628s 00:03:12.638 sys 0m0.858s 00:03:12.638 16:52:12 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:12.638 16:52:12 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:12.638 ************************************ 00:03:12.638 END TEST custom_alloc 00:03:12.638 ************************************ 00:03:12.638 16:52:12 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:12.638 16:52:12 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:12.638 16:52:12 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:12.638 16:52:12 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:12.638 16:52:12 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:12.895 ************************************ 00:03:12.895 START TEST no_shrink_alloc 00:03:12.895 ************************************ 00:03:12.895 16:52:12 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:12.895 16:52:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:12.895 16:52:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:12.895 16:52:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:12.895 16:52:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:12.895 16:52:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:12.895 16:52:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:12.895 16:52:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:12.895 16:52:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:12.895 16:52:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:12.895 16:52:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:12.895 16:52:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:12.895 16:52:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:12.895 16:52:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:12.895 16:52:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:12.895 16:52:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g 
nodes_test 00:03:12.895 16:52:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:12.895 16:52:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:12.895 16:52:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:12.895 16:52:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:12.895 16:52:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:12.895 16:52:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:12.895 16:52:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:13.832 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:13.832 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:13.832 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:13.832 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:13.832 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:13.832 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:13.832 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:13.832 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:13.832 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:13.832 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:13.832 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:13.832 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:13.832 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:13.832 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:13.832 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:13.832 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:13.832 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:14.107 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:14.107 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:14.107 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:14.107 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:14.107 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:14.107 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:14.107 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:14.107 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:14.107 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:14.107 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:14.107 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:14.107 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:14.107 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:14.107 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.107 16:52:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:14.107 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.107 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.107 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.107 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.107 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 30125612 kB' 'MemAvailable: 33692436 kB' 'Buffers: 2704 kB' 'Cached: 9406044 kB' 'SwapCached: 0 kB' 'Active: 6423460 kB' 'Inactive: 3505240 kB' 'Active(anon): 6033920 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523108 kB' 'Mapped: 196240 kB' 'Shmem: 5513968 kB' 'KReclaimable: 167736 kB' 'Slab: 491192 kB' 'SReclaimable: 167736 kB' 'SUnreclaim: 323456 kB' 'KernelStack: 12384 kB' 'PageTables: 7352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7135724 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195712 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1406556 kB' 'DirectMap2M: 12144640 kB' 'DirectMap1G: 38797312 kB' 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.108 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 30126104 kB' 'MemAvailable: 33692928 kB' 'Buffers: 2704 kB' 'Cached: 9406052 kB' 'SwapCached: 0 kB' 'Active: 6423340 kB' 'Inactive: 3505240 kB' 'Active(anon): 6033800 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523000 kB' 'Mapped: 196348 kB' 'Shmem: 5513976 kB' 'KReclaimable: 167736 kB' 'Slab: 491216 kB' 'SReclaimable: 167736 kB' 'SUnreclaim: 323480 kB' 'KernelStack: 12464 kB' 'PageTables: 7568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7136240 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 195696 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1406556 kB' 'DirectMap2M: 12144640 kB' 'DirectMap1G: 38797312 kB' 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.109 16:52:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.109 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.110 
16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.110 16:52:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.110 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.111 16:52:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 30128640 kB' 'MemAvailable: 33695464 kB' 'Buffers: 2704 kB' 'Cached: 9406072 kB' 'SwapCached: 0 kB' 'Active: 6423124 kB' 'Inactive: 3505240 kB' 'Active(anon): 6033584 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522732 kB' 'Mapped: 196304 kB' 'Shmem: 5513996 kB' 'KReclaimable: 167736 kB' 'Slab: 491188 kB' 'SReclaimable: 167736 kB' 'SUnreclaim: 323452 kB' 'KernelStack: 12448 kB' 'PageTables: 7432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7138488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195712 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 
'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1406556 kB' 'DirectMap2M: 12144640 kB' 'DirectMap1G: 38797312 kB' 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.111 
16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.111 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.112 16:52:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.112 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:14.113 nr_hugepages=1024 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:14.113 resv_hugepages=0 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:14.113 surplus_hugepages=0 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:14.113 anon_hugepages=0 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 30126880 kB' 'MemAvailable: 33693704 kB' 'Buffers: 2704 kB' 'Cached: 9406092 kB' 'SwapCached: 0 kB' 'Active: 6423708 kB' 'Inactive: 3505240 kB' 'Active(anon): 6034168 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523284 kB' 'Mapped: 196304 kB' 'Shmem: 5514016 kB' 'KReclaimable: 167736 kB' 'Slab: 491180 kB' 'SReclaimable: 167736 kB' 'SUnreclaim: 323444 kB' 'KernelStack: 12752 kB' 'PageTables: 8076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7138644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195872 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 
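At this point both readbacks are complete (surp=0 from the pass above, resv=0 from the HugePages_Rsvd pass just finished), the script prints the nr_hugepages=1024 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0 summary, and hugepages.sh@107 checks that the kernel's pool is consistent before re-reading HugePages_Total. The arithmetic being verified, written out as a standalone sketch (values taken from the trace, the check itself reconstructed):

# hugepages.sh@107: the kernel's HugePages_Total must equal the requested
# page count plus any surplus and reserved pages read back above.
nr_hugepages=1024 surp=0 resv=0
total=$(awk '/^HugePages_Total:/ { print $2 }' /proc/meminfo)
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage pool consistent: ${total} pages"
else
    echo "hugepage pool mismatch: total=${total}" >&2
fi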
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1406556 kB' 'DirectMap2M: 12144640 kB' 'DirectMap1G: 38797312 kB' 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.113 16:52:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.113 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.114 16:52:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:14.114 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- 
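Here the HugePages_Total pass has just finished: common.sh@33 hands the value back by echoing 1024 and returning 0, the caller captures it with command substitution, and hugepages.sh@110 re-runs the same consistency test with the captured total before moving on to the per-node breakdown. The value-passing pattern, with a hypothetical helper name used purely for illustration:

# Hypothetical helper, named here only to show the echo/command-substitution
# convention the traced scripts rely on; the real lookup is the get_meminfo
# loop shown above.
hugepages_total() { awk '/^HugePages_Total:/ { print $2 }' /proc/meminfo; }

total=$(hugepages_total)      # the echoed 1024 becomes the caller's value
(( total == 1024 )) && echo "kernel still reports the full 1024-page pool"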
setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 20195400 kB' 'MemUsed: 4376956 kB' 'SwapCached: 0 kB' 'Active: 1635908 kB' 'Inactive: 72492 kB' 'Active(anon): 1506640 kB' 'Inactive(anon): 0 kB' 'Active(file): 129268 kB' 'Inactive(file): 72492 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1370964 kB' 'Mapped: 68088 kB' 'AnonPages: 340556 kB' 'Shmem: 1169204 kB' 'KernelStack: 7544 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 51036 kB' 'Slab: 201168 kB' 'SReclaimable: 51036 kB' 'SUnreclaim: 150132 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.115 16:52:13 
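The get_nodes step above enumerates /sys/devices/system/node/node+([0-9]), finds two NUMA nodes (no_nodes=2) with the 1024 pages expected on node 0 and none on node 1, and then starts re-reading node 0's own meminfo copy. A sketch of that enumeration under the same sysfs layout (plain glob in place of the traced extglob pattern; illustrative, not the script source):

# Enumerate NUMA nodes the way the traced get_nodes loop does and read each
# node's hugepage total from its per-node meminfo file under sysfs.
shopt -s nullglob
declare -A nodes_sys
for node in /sys/devices/system/node/node[0-9]*; do
    id=${node##*node}
    nodes_sys[$id]=$(awk '/HugePages_Total/ { print $NF }' "$node/meminfo")
done
echo "no_nodes=${#nodes_sys[@]}"                  # 2 in the run above
for id in "${!nodes_sys[@]}"; do
    echo "node$id: ${nodes_sys[$id]} hugepages"   # expected: node0=1024, node1=0
done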
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.115 16:52:13 setup.sh.hugepages.no_shrink_alloc -- 
00:03:14.116 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:14.116 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:14.116 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:14.116 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:14.116 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:14.116 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:14.116 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:14.116 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:14.116 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:14.116 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:14.116 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:14.116 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:14.116 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:03:14.116 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:14.116 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:14.116 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:14.116 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:03:14.116 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:14.116 16:52:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:15.556 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:15.556 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:15.556 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:15.556 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:15.556 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:15.556 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:15.556 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:15.556 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:15.556 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:15.556 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:15.556 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:15.556 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:15.556 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:15.556 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:15.556 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:15.556 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:15.556 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:15.556 INFO: Requested 512 hugepages but 1024 already allocated on node0
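The CLEAR_HUGE=no and NRHUGE=512 assignments recorded just above are the environment knobs that SPDK's scripts/setup.sh consults when it (re)allocates hugepages, and the INFO line is its report that node0 already holds 1024 pages, more than the 512 requested. A minimal sketch of driving that step by hand with the same values (run as root; the path is the one recorded in this trace):

# same environment the harness set above; with CLEAR_HUGE=no the existing
# allocation is left in place, which is why the INFO message appears
CLEAR_HUGE=no NRHUGE=512 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh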
16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:15.556 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:15.556 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:15.556 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:15.556 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:15.556 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:15.556 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:15.556 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:15.557 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:15.557 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:15.557 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:15.557 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:15.557 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:15.557 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.557 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:15.557 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:15.557 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.557 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.557 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:15.557 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:15.557 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 30115280 kB' 'MemAvailable: 33682104 kB' 'Buffers: 2704 kB' 'Cached: 9406160 kB' 'SwapCached: 0 kB' 'Active: 6423108 kB' 'Inactive: 3505240 kB' 'Active(anon): 6033568 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522244 kB' 'Mapped: 196352 kB' 'Shmem: 5514084 kB' 'KReclaimable: 167736 kB' 'Slab: 490936 kB' 'SReclaimable: 167736 kB' 'SUnreclaim: 323200 kB' 'KernelStack: 12432 kB' 'PageTables: 7416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7136336 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195792 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1406556 kB' 'DirectMap2M: 12144640 kB' 'DirectMap1G: 38797312 kB'
00:03:15.557 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
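The get_meminfo AnonHugePages call above is what produces the /proc/meminfo snapshot that follows: the helper reads the whole file (or a per-node copy under /sys/devices/system/node) into an array and walks it one "field: value" pair at a time until the requested key matches, which is why the trace devotes a compare/continue pair to every meminfo line. A minimal self-contained sketch of that pattern, with illustrative names rather than the exact setup/common.sh implementation:

get_meminfo_sketch() {
    # print one field from /proc/meminfo, or from the per-node copy when a
    # NUMA node number is supplied (illustrative helper, not SPDK's own)
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
    local line var val _
    while read -r line; do
        line=${line#Node [0-9] }              # sysfs copies carry a "Node N " prefix (single-digit nodes are enough for a sketch)
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"                  # field found: print its value
            return 0
        fi
    done < "$mem_f"
    echo 0                                    # field absent: report 0
}

get_meminfo_sketch HugePages_Free             # would print 1024 on the state captured above
get_meminfo_sketch AnonHugePages 0            # per-node variant, node 0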
00:03:15.558 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:15.558 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:15.558 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:15.558 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:15.558 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:15.558 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:15.558 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:15.558 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:15.558 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:15.558 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.558 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:15.558 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:15.558 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.558 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.558 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:15.558 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:15.558 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 30119616 kB' 'MemAvailable: 33686440 kB' 'Buffers: 2704 kB' 'Cached: 9406160 kB' 'SwapCached: 0 kB' 'Active: 6422816 kB' 'Inactive: 3505240 kB' 'Active(anon): 6033276 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522372 kB' 'Mapped: 196292 kB' 'Shmem: 5514084 kB' 'KReclaimable: 167736 kB' 'Slab: 490920 kB' 'SReclaimable: 167736 kB' 'SUnreclaim: 323184 kB' 'KernelStack: 12448 kB' 'PageTables: 7444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7136352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195760 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1406556 kB' 'DirectMap2M: 12144640 kB' 'DirectMap1G: 38797312 kB'
00:03:15.558 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
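The "node0=1024 expecting 1024" line printed earlier is what this verification step reduces to: per-node hugepage counts are tallied and compared with what the test configured. An equivalent spot check straight from sysfs (a sketch, not the harness's code; it assumes 2048 kB pages and node 0, as in this run):

# per-node count of 2 MB hugepages, the same figure the harness reports as "node0=1024"
expected=1024
nr=$(cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages)
echo "node0=$nr expecting $expected"
[[ $nr -eq $expected ]]   # exit status 0 when the count matches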
00:03:15.560 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:15.560 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:15.560 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:15.560 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:15.560 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:15.560 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:15.560 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:15.560 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:15.560 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:15.560 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.560 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:15.560 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:15.560 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.560 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.560 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:15.560 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:15.560 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 30119748 kB' 'MemAvailable: 33686572 kB' 'Buffers: 2704 kB' 'Cached: 9406164 kB' 'SwapCached: 0 kB' 'Active: 6422948 kB' 'Inactive: 3505240 kB' 'Active(anon): 6033408 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522496 kB' 'Mapped: 196292 kB' 'Shmem: 5514088 kB' 'KReclaimable: 167736 kB' 'Slab: 490984 kB' 'SReclaimable: 167736 kB' 'SUnreclaim: 323248 kB' 'KernelStack: 12448 kB' 'PageTables: 7460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7136376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195776 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1406556 kB' 'DirectMap2M: 12144640 kB' 'DirectMap1G: 38797312 kB'
00:03:15.560 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
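The three lookups in this stretch (AnonHugePages, then HugePages_Surp, then HugePages_Rsvd) each rescan a fresh /proc/meminfo snapshot field by field. Outside the harness the same counters can be pulled in a single pass; a small sketch:

# print all hugepage-related counters the trace queries one at a time
awk '/^(AnonHugePages|HugePages_(Total|Free|Rsvd|Surp)):/ { print $1, $2 }' /proc/meminfo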
setup/common.sh@31 -- # read -r var val _ 00:03:15.561 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.561 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.561 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.561 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.561 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.561 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.561 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.561 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.561 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.561 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.561 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.561 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.561 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.561 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.561 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.561 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.561 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.561 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.561 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.561 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.561 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.561 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.561 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.561 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.561 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.561 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.561 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.561 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.561 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.561 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.561 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.561 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.561 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:15.561 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.561 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.561 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.561 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.561 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.561 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.561 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.561 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.561 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.561 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.561 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.561 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.561 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.561 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.561 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:15.562 nr_hugepages=1024 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:15.562 resv_hugepages=0 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:15.562 surplus_hugepages=0 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:15.562 anon_hugepages=0 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 30120056 kB' 'MemAvailable: 33686880 kB' 'Buffers: 2704 kB' 'Cached: 9406204 kB' 'SwapCached: 0 kB' 'Active: 6423044 kB' 'Inactive: 3505240 kB' 'Active(anon): 6033504 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522568 kB' 'Mapped: 196292 kB' 'Shmem: 5514128 kB' 'KReclaimable: 167736 kB' 'Slab: 490984 kB' 'SReclaimable: 167736 kB' 'SUnreclaim: 323248 kB' 'KernelStack: 12464 kB' 'PageTables: 7512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7136396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195776 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1406556 kB' 'DirectMap2M: 12144640 kB' 'DirectMap1G: 38797312 kB' 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.562 16:52:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.562 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.563 16:52:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.563 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.564 16:52:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 20184528 kB' 'MemUsed: 4387828 kB' 'SwapCached: 0 kB' 'Active: 1635444 kB' 'Inactive: 72492 kB' 'Active(anon): 1506176 kB' 'Inactive(anon): 0 kB' 'Active(file): 129268 kB' 'Inactive(file): 72492 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1371072 kB' 'Mapped: 68088 kB' 'AnonPages: 340072 kB' 'Shmem: 1169312 kB' 'KernelStack: 7544 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 51036 kB' 'Slab: 201256 kB' 'SReclaimable: 51036 kB' 'SUnreclaim: 150220 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.564 
16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.564 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.565 
16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.565 16:52:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:15.565 node0=1024 expecting 1024 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:15.565 00:03:15.565 real 0m2.892s 00:03:15.565 user 0m1.220s 00:03:15.565 sys 0m1.623s 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:15.565 16:52:15 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:15.565 ************************************ 00:03:15.565 END TEST no_shrink_alloc 00:03:15.565 ************************************ 00:03:15.565 16:52:15 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 
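The trace above is setup/common.sh's get_meminfo walking a meminfo file key by key: it mapfiles either /proc/meminfo or /sys/devices/system/node/nodeN/meminfo, strips the "Node N" prefix from per-node files, splits each entry on ': ', and keeps hitting continue until it reaches the requested field (HugePages_Rsvd, HugePages_Total, HugePages_Surp here), whose value it echoes back to hugepages.sh; no_shrink_alloc then confirms the 1024-page global pool sits entirely on node0 with no surplus or reserved pages. A minimal stand-alone sketch of that lookup (not the SPDK helper itself, just the logic visible in the trace):

shopt -s extglob    # needed for the "Node +([0-9]) " prefix strip below

get_meminfo_sketch() {
    # Usage: get_meminfo_sketch <key> [numa-node]; prints the key's value from meminfo
    local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while read -r line; do
        line=${line#Node +([0-9]) }              # per-node files prefix every key with "Node <n> "
        IFS=': ' read -r var val _ <<< "$line"   # split "Key:   value kB" into its parts
        if [[ $var == "$get" ]]; then
            echo "$val"                          # numeric value only, without the trailing "kB"
            return 0
        fi
    done < "$mem_f"
    return 1                                     # requested key not present
}

# e.g. the check the test performs above: all 1024 global hugepages should be on node 0
# (( $(get_meminfo_sketch HugePages_Total) == $(get_meminfo_sketch HugePages_Total 0) ))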
00:03:15.565 16:52:15 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:15.565 16:52:15 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:15.565 16:52:15 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:15.565 16:52:15 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:15.565 16:52:15 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:15.565 16:52:15 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:15.565 16:52:15 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:15.565 16:52:15 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:15.565 16:52:15 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:15.565 16:52:15 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:15.565 16:52:15 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:15.565 16:52:15 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:15.565 16:52:15 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:15.565 16:52:15 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:15.565 00:03:15.565 real 0m11.843s 00:03:15.565 user 0m4.586s 00:03:15.565 sys 0m6.264s 00:03:15.565 16:52:15 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:15.565 16:52:15 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:15.565 ************************************ 00:03:15.565 END TEST hugepages 00:03:15.565 ************************************ 00:03:15.897 16:52:15 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:15.897 16:52:15 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:15.897 16:52:15 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:15.897 16:52:15 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:15.897 16:52:15 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:15.897 ************************************ 00:03:15.897 START TEST driver 00:03:15.897 ************************************ 00:03:15.897 16:52:15 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:15.897 * Looking for test storage... 
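Between the two END TEST banners above, clear_hp resets every per-node hugepage pool to zero and exports CLEAR_HUGE=yes so later stages know the pools were drained. A rough equivalent of that loop, using the standard sysfs layout and run as root (not the literal hugepages.sh code):

  # Zero every hugepage pool on every NUMA node.
  for node in /sys/devices/system/node/node*; do
    for hp in "$node"/hugepages/hugepages-*; do
      echo 0 > "$hp/nr_hugepages"
    done
  done
  export CLEAR_HUGE=yes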
00:03:15.897 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:15.897 16:52:15 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:15.897 16:52:15 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:15.897 16:52:15 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:18.449 16:52:17 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:18.449 16:52:17 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:18.449 16:52:17 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:18.449 16:52:17 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:18.449 ************************************ 00:03:18.449 START TEST guess_driver 00:03:18.449 ************************************ 00:03:18.449 16:52:17 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:03:18.449 16:52:17 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:18.449 16:52:17 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:18.449 16:52:17 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:18.449 16:52:17 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:18.449 16:52:17 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:18.449 16:52:17 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:18.449 16:52:17 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:18.449 16:52:17 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:18.449 16:52:17 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:18.449 16:52:17 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 143 > 0 )) 00:03:18.449 16:52:17 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:18.449 16:52:17 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:18.449 16:52:17 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:18.449 16:52:17 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:18.449 16:52:17 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:18.449 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:18.449 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:18.449 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:18.449 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:18.449 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:18.449 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:18.449 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:18.449 16:52:17 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:18.449 16:52:17 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:18.449 16:52:17 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:18.449 16:52:17 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:18.449 16:52:17 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:18.449 Looking for driver=vfio-pci 00:03:18.449 16:52:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:18.449 16:52:17 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:18.449 16:52:17 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:18.449 16:52:17 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:19.826 16:52:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:19.826 16:52:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:19.826 16:52:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:19.826 16:52:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:19.826 16:52:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:19.826 16:52:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:19.826 16:52:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:19.826 16:52:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:19.826 16:52:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:19.826 16:52:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:19.826 16:52:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:19.826 16:52:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:19.826 16:52:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:19.826 16:52:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:19.826 16:52:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:19.826 16:52:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:19.826 16:52:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:19.826 16:52:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:19.826 16:52:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:19.826 16:52:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:19.826 16:52:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:19.826 16:52:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:19.826 16:52:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:19.826 16:52:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:19.826 16:52:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:19.826 16:52:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:19.826 16:52:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:19.826 16:52:19 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:19.826 16:52:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:19.826 16:52:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:19.826 16:52:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:19.826 16:52:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:19.826 16:52:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:19.826 16:52:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:19.826 16:52:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:19.826 16:52:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:19.826 16:52:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:19.826 16:52:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:19.826 16:52:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:19.826 16:52:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:19.826 16:52:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:19.826 16:52:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:19.826 16:52:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:19.826 16:52:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:19.826 16:52:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:19.826 16:52:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:19.826 16:52:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:19.826 16:52:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:20.763 16:52:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:20.763 16:52:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:20.763 16:52:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:20.763 16:52:20 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:20.763 16:52:20 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:20.763 16:52:20 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:20.763 16:52:20 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:24.044 00:03:24.044 real 0m5.016s 00:03:24.044 user 0m1.181s 00:03:24.044 sys 0m1.906s 00:03:24.044 16:52:22 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:24.044 16:52:22 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:24.044 ************************************ 00:03:24.044 END TEST guess_driver 00:03:24.044 ************************************ 00:03:24.044 16:52:23 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:03:24.044 00:03:24.044 real 0m7.743s 00:03:24.044 user 0m1.796s 00:03:24.044 sys 0m2.969s 00:03:24.044 16:52:23 
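The guess_driver trace ending above boils down to one decision: if the machine exposes IOMMU groups and the vfio_pci module chain resolves, the setup script is told to bind devices to vfio-pci (this run found 143 IOMMU groups, so that branch was taken). A simplified sketch of that choice; the uio_pci_generic fallback name is an assumption about the path this run never exercised:

  shopt -s nullglob   # so an empty /sys/kernel/iommu_groups yields a zero-length array
  # Pick vfio-pci when the IOMMU is usable, otherwise fall back.
  pick_driver() {
    local groups=(/sys/kernel/iommu_groups/*)
    if (( ${#groups[@]} > 0 )) && modprobe --show-depends vfio_pci &>/dev/null; then
      echo vfio-pci
    else
      echo uio_pci_generic   # assumed fallback; not taken in this run
    fi
  }
  echo "Looking for driver=$(pick_driver)"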
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:24.044 16:52:23 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:24.044 ************************************ 00:03:24.044 END TEST driver 00:03:24.044 ************************************ 00:03:24.044 16:52:23 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:24.044 16:52:23 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:24.044 16:52:23 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:24.044 16:52:23 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:24.044 16:52:23 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:24.044 ************************************ 00:03:24.044 START TEST devices 00:03:24.044 ************************************ 00:03:24.044 16:52:23 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:24.044 * Looking for test storage... 00:03:24.044 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:24.044 16:52:23 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:24.044 16:52:23 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:24.044 16:52:23 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:24.044 16:52:23 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:24.983 16:52:24 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:24.983 16:52:24 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:24.983 16:52:24 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:24.983 16:52:24 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:24.983 16:52:24 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:24.983 16:52:24 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:24.983 16:52:24 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:24.983 16:52:24 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:24.983 16:52:24 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:24.983 16:52:24 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:24.983 16:52:24 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:24.983 16:52:24 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:24.983 16:52:24 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:24.983 16:52:24 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:24.983 16:52:24 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:24.983 16:52:24 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:24.983 16:52:24 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:24.983 16:52:24 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:82:00.0 00:03:24.983 16:52:24 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\2\:\0\0\.\0* ]] 00:03:24.983 16:52:24 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:24.983 16:52:24 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:24.983 
16:52:24 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:24.983 No valid GPT data, bailing 00:03:24.983 16:52:24 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:24.983 16:52:24 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:24.983 16:52:24 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:24.983 16:52:24 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:24.983 16:52:24 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:24.983 16:52:24 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:24.983 16:52:24 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:03:24.983 16:52:24 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:24.983 16:52:24 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:24.983 16:52:24 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:82:00.0 00:03:24.983 16:52:24 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:24.983 16:52:24 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:24.984 16:52:24 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:24.984 16:52:24 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:24.984 16:52:24 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:24.984 16:52:24 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:24.984 ************************************ 00:03:24.984 START TEST nvme_mount 00:03:24.984 ************************************ 00:03:24.984 16:52:24 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:03:24.984 16:52:24 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:24.984 16:52:24 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:24.984 16:52:24 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:24.984 16:52:24 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:24.984 16:52:24 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:24.984 16:52:24 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:24.984 16:52:24 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:24.984 16:52:24 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:24.984 16:52:24 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:24.984 16:52:24 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:24.984 16:52:24 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:24.984 16:52:24 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:24.984 16:52:24 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:24.984 16:52:24 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:24.984 16:52:24 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:24.984 16:52:24 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:03:24.984 16:52:24 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:24.984 16:52:24 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:24.984 16:52:24 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:26.359 Creating new GPT entries in memory. 00:03:26.359 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:26.359 other utilities. 00:03:26.359 16:52:25 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:26.359 16:52:25 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:26.359 16:52:25 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:26.359 16:52:25 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:26.359 16:52:25 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:27.296 Creating new GPT entries in memory. 00:03:27.296 The operation has completed successfully. 00:03:27.296 16:52:26 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:27.296 16:52:26 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:27.296 16:52:26 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 989070 00:03:27.296 16:52:26 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:27.296 16:52:26 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:27.296 16:52:26 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:27.296 16:52:26 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:27.296 16:52:26 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:27.296 16:52:26 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:27.296 16:52:26 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:82:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:27.296 16:52:26 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:03:27.296 16:52:26 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:27.296 16:52:26 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:27.296 16:52:26 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:27.296 16:52:26 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:27.296 16:52:26 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:27.296 16:52:26 
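Stripped of the xtrace noise, the nvme_mount steps traced above wipe the GPT on /dev/nvme0n1, create a single ~1 GiB partition, put ext4 on it, mount it under the workspace's test/setup/nvme_mount directory and drop a dummy test_nvme file for the verify step. A condensed replay of that sequence, with a placeholder mount point rather than the workspace path:

  DISK=/dev/nvme0n1
  MNT=/tmp/nvme_mount
  sgdisk "$DISK" --zap-all                            # destroy existing GPT/MBR data
  flock "$DISK" sgdisk "$DISK" --new=1:2048:2099199   # one ~1 GiB partition, 2048-sector aligned
  mkfs.ext4 -qF "${DISK}p1"
  mkdir -p "$MNT"
  mount "${DISK}p1" "$MNT"
  touch "$MNT/test_nvme"                              # dummy file the verify step checks for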
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:27.296 16:52:26 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:27.296 16:52:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.296 16:52:26 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:03:27.296 16:52:26 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:27.296 16:52:26 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:27.296 16:52:26 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:28.673 16:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:28.673 16:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:28.673 16:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:28.673 16:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.673 16:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:28.673 16:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.673 16:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:28.673 16:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.673 16:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:28.673 16:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.673 16:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:28.673 16:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.673 16:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:28.673 16:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.673 16:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:28.673 16:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.673 16:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:28.674 16:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.674 16:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:28.674 16:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.674 16:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:28.674 16:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.674 16:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:28.674 16:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.674 16:52:27 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:28.674 16:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.674 16:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:28.674 16:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.674 16:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:28.674 16:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.674 16:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:28.674 16:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.674 16:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:28.674 16:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.674 16:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:28.675 16:52:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.675 16:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:28.675 16:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:28.675 16:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:28.675 16:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:28.675 16:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:28.675 16:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:28.675 16:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:28.675 16:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:28.675 16:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:28.675 16:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:28.675 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:28.675 16:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:28.675 16:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:28.935 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:28.935 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:28.935 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:28.935 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:28.935 16:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:28.935 16:52:28 setup.sh.devices.nvme_mount -- 
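The cleanup in the middle of the trace above (cleanup_nvme) unmounts the test mount and wipes both the partition and the whole disk, which is where the "bytes were erased ... (ext4)/(gpt)" lines come from. A sketch of that teardown, reusing the same placeholder paths as the sketch above:

  DISK=/dev/nvme0n1
  MNT=/tmp/nvme_mount
  mountpoint -q "$MNT" && umount "$MNT"
  [[ -b ${DISK}p1 ]] && wipefs --all "${DISK}p1"   # erase the ext4 signature on the partition
  [[ -b $DISK ]] && wipefs --all "$DISK"           # erase GPT headers and the protective MBR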
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:28.935 16:52:28 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:28.935 16:52:28 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:28.935 16:52:28 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:28.935 16:52:28 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:28.935 16:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:82:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:28.935 16:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:03:28.935 16:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:28.935 16:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:28.935 16:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:28.935 16:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:28.935 16:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:28.935 16:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:28.935 16:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:28.935 16:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.935 16:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:03:28.935 16:52:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:28.935 16:52:28 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:28.935 16:52:28 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:29.869 16:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:29.869 16:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:29.869 16:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:29.869 16:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.869 16:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:29.869 16:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.869 16:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:29.869 16:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.869 16:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 
== \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:29.869 16:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.869 16:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:29.869 16:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.869 16:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:29.869 16:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.869 16:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:29.869 16:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.869 16:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:29.869 16:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.869 16:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:29.869 16:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.869 16:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:29.869 16:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.869 16:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:29.869 16:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.870 16:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:29.870 16:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.870 16:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:29.870 16:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.870 16:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:29.870 16:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.870 16:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:29.870 16:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.870 16:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:29.870 16:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.870 16:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:29.870 16:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.128 16:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:30.128 16:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:30.128 16:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:30.128 16:52:29 setup.sh.devices.nvme_mount -- 
setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:30.128 16:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:30.128 16:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:30.128 16:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:82:00.0 data@nvme0n1 '' '' 00:03:30.128 16:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:03:30.128 16:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:30.128 16:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:30.128 16:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:30.128 16:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:30.128 16:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:30.128 16:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:30.128 16:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.128 16:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:03:30.128 16:52:29 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:30.128 16:52:29 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:30.128 16:52:29 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:31.504 16:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:31.504 16:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:31.504 16:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:31.504 16:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.504 16:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:31.504 16:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.504 16:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:31.504 16:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.504 16:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:31.504 16:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.504 16:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:31.504 16:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.504 16:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:31.504 16:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.504 16:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 
00:03:31.504 16:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.504 16:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:31.504 16:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.504 16:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:31.504 16:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.504 16:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:31.504 16:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.504 16:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:31.504 16:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.504 16:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:31.504 16:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.504 16:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:31.504 16:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.504 16:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:31.504 16:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.504 16:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:31.504 16:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.504 16:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:31.504 16:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.504 16:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:31.504 16:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.504 16:52:31 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:31.504 16:52:31 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:31.504 16:52:31 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:31.504 16:52:31 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:31.504 16:52:31 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:31.505 16:52:31 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:31.505 16:52:31 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:31.505 16:52:31 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:31.505 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:31.505 00:03:31.505 real 0m6.445s 00:03:31.505 user 0m1.515s 00:03:31.505 sys 0m2.569s 00:03:31.505 16:52:31 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:31.505 16:52:31 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@10 -- # set +x 00:03:31.505 ************************************ 00:03:31.505 END TEST nvme_mount 00:03:31.505 ************************************ 00:03:31.505 16:52:31 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:03:31.505 16:52:31 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:31.505 16:52:31 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:31.505 16:52:31 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:31.505 16:52:31 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:31.505 ************************************ 00:03:31.505 START TEST dm_mount 00:03:31.505 ************************************ 00:03:31.505 16:52:31 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:03:31.505 16:52:31 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:31.505 16:52:31 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:31.505 16:52:31 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:31.505 16:52:31 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:31.505 16:52:31 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:31.505 16:52:31 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:31.505 16:52:31 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:31.505 16:52:31 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:31.505 16:52:31 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:31.505 16:52:31 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:31.505 16:52:31 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:31.505 16:52:31 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:31.505 16:52:31 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:31.505 16:52:31 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:31.505 16:52:31 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:31.505 16:52:31 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:31.505 16:52:31 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:31.505 16:52:31 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:31.505 16:52:31 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:31.505 16:52:31 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:31.505 16:52:31 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:32.884 Creating new GPT entries in memory. 00:03:32.884 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:32.884 other utilities. 00:03:32.884 16:52:32 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:32.884 16:52:32 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:32.884 16:52:32 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:03:32.884 16:52:32 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:32.884 16:52:32 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:33.819 Creating new GPT entries in memory. 00:03:33.819 The operation has completed successfully. 00:03:33.819 16:52:33 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:33.819 16:52:33 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:33.819 16:52:33 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:33.819 16:52:33 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:33.819 16:52:33 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:34.754 The operation has completed successfully. 00:03:34.754 16:52:34 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:34.754 16:52:34 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:34.754 16:52:34 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 991470 00:03:34.754 16:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:34.754 16:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:34.754 16:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:34.754 16:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:34.754 16:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:34.754 16:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:34.754 16:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:34.754 16:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:34.754 16:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:34.754 16:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:34.754 16:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:34.754 16:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:34.754 16:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:34.754 16:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:34.754 16:52:34 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:34.754 16:52:34 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:34.754 16:52:34 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:34.754 16:52:34 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:34.754 16:52:34 setup.sh.devices.dm_mount -- 
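The dm_mount trace running through here makes two ~1 GiB partitions, builds a device-mapper target named nvme_dm_test on top of them, resolves it to /dev/dm-0 and formats it. The table below is an illustrative linear concatenation of the two partitions; the real test derives its own table, so treat the sector math as an assumption:

  DISK=/dev/nvme0n1
  p1=$(blockdev --getsz "${DISK}p1")   # partition sizes in 512-byte sectors
  p2=$(blockdev --getsz "${DISK}p2")
  printf '0 %s linear %s 0\n%s %s linear %s 0\n' \
      "$p1" "${DISK}p1" "$p1" "$p2" "${DISK}p2" |
      dmsetup create nvme_dm_test            # /dev/mapper/nvme_dm_test appears
  readlink -f /dev/mapper/nvme_dm_test       # resolved to /dev/dm-0 in the run above
  mkfs.ext4 -qF /dev/mapper/nvme_dm_test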
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:34.754 16:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:82:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:34.754 16:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:03:34.754 16:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:34.754 16:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:34.754 16:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:34.754 16:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:34.754 16:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:34.754 16:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:34.754 16:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:34.754 16:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.754 16:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:03:34.754 16:52:34 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:34.754 16:52:34 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:34.754 16:52:34 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:36.130 16:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:36.130 16:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:36.130 16:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:36.130 16:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.130 16:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:36.130 16:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.130 16:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:36.130 16:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.130 16:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:36.130 16:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.130 16:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:36.130 16:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.130 16:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:36.130 16:52:35 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.130 16:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:36.130 16:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.130 16:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:36.130 16:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.130 16:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:36.130 16:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.130 16:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:36.130 16:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.130 16:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:36.130 16:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.130 16:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:36.130 16:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.130 16:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:36.130 16:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.130 16:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:36.130 16:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.130 16:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:36.130 16:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.130 16:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:36.130 16:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.130 16:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:36.130 16:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.130 16:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:36.130 16:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:36.130 16:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:36.130 16:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:36.130 16:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:36.130 16:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:36.130 16:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:82:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:36.130 16:52:35 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:03:36.131 16:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:36.131 16:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:36.131 16:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:36.131 16:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:36.131 16:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:36.131 16:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:36.131 16:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.131 16:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:03:36.131 16:52:35 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:36.131 16:52:35 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:36.131 16:52:35 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:37.066 16:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:37.066 16:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:37.066 16:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:37.066 16:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.066 16:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:37.066 16:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.066 16:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:37.066 16:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.066 16:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:37.066 16:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.066 16:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:37.066 16:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.066 16:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:37.066 16:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.066 16:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:37.066 16:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.066 16:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:37.066 16:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.066 16:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:37.066 16:52:36 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.066 16:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:37.066 16:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.066 16:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:37.066 16:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.066 16:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:37.067 16:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.067 16:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:37.067 16:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.067 16:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:37.067 16:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.067 16:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:37.067 16:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.067 16:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:37.067 16:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.067 16:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:37.067 16:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.325 16:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:37.325 16:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:37.325 16:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:37.325 16:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:37.325 16:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:37.325 16:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:37.325 16:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:37.325 16:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:37.325 16:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:37.325 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:37.325 16:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:37.325 16:52:36 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:37.325 00:03:37.325 real 0m5.748s 00:03:37.325 user 0m0.979s 00:03:37.325 sys 0m1.682s 00:03:37.325 16:52:36 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:37.325 16:52:36 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:37.325 ************************************ 00:03:37.325 END TEST dm_mount 00:03:37.325 ************************************ 00:03:37.325 16:52:36 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0 00:03:37.325 16:52:36 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:37.325 16:52:36 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:37.325 16:52:36 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:37.325 16:52:36 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:37.325 16:52:36 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:37.325 16:52:36 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:37.325 16:52:36 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:37.583 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:37.583 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:37.583 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:37.583 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:37.583 16:52:37 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:37.583 16:52:37 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:37.583 16:52:37 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:37.583 16:52:37 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:37.583 16:52:37 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:37.583 16:52:37 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:37.583 16:52:37 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:37.583 00:03:37.583 real 0m14.150s 00:03:37.583 user 0m3.179s 00:03:37.583 sys 0m5.305s 00:03:37.583 16:52:37 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:37.583 16:52:37 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:37.583 ************************************ 00:03:37.583 END TEST devices 00:03:37.583 ************************************ 00:03:37.583 16:52:37 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:37.583 00:03:37.583 real 0m44.891s 00:03:37.583 user 0m13.074s 00:03:37.583 sys 0m20.248s 00:03:37.583 16:52:37 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:37.583 16:52:37 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:37.583 ************************************ 00:03:37.583 END TEST setup.sh 00:03:37.583 ************************************ 00:03:37.583 16:52:37 -- common/autotest_common.sh@1142 -- # return 0 00:03:37.583 16:52:37 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:38.958 Hugepages 00:03:38.958 node hugesize free / total 00:03:38.958 node0 1048576kB 0 / 0 00:03:38.958 node0 2048kB 2048 / 2048 00:03:38.958 node1 1048576kB 0 / 0 00:03:38.958 node1 2048kB 0 / 0 00:03:38.958 00:03:38.958 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:38.958 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:38.958 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:38.958 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:38.958 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:38.958 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:38.958 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:38.958 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:38.958 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:38.958 I/OAT 
0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:38.958 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:38.958 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:38.958 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:38.958 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:38.958 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:38.958 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:38.958 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:38.958 NVMe 0000:82:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:38.958 16:52:38 -- spdk/autotest.sh@130 -- # uname -s 00:03:38.958 16:52:38 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:38.958 16:52:38 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:38.958 16:52:38 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:40.336 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:40.336 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:40.336 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:40.336 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:40.336 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:40.336 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:40.336 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:40.336 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:40.336 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:40.336 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:40.336 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:40.336 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:40.336 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:40.336 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:40.336 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:40.336 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:41.274 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:03:41.533 16:52:40 -- common/autotest_common.sh@1532 -- # sleep 1 00:03:42.471 16:52:41 -- common/autotest_common.sh@1533 -- # bdfs=() 00:03:42.471 16:52:41 -- common/autotest_common.sh@1533 -- # local bdfs 00:03:42.471 16:52:41 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:03:42.471 16:52:41 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:03:42.471 16:52:41 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:42.471 16:52:41 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:42.471 16:52:41 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:42.471 16:52:41 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:42.471 16:52:41 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:42.471 16:52:42 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:42.471 16:52:42 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:82:00.0 00:03:42.471 16:52:42 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:43.845 Waiting for block devices as requested 00:03:43.845 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:03:43.845 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:43.845 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:44.105 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:44.105 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:44.105 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:44.395 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:44.395 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:44.395 0000:00:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:03:44.395 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:44.682 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:44.682 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:44.682 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:44.682 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:44.941 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:44.941 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:44.941 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:44.941 16:52:44 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:03:44.941 16:52:44 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:82:00.0 00:03:44.941 16:52:44 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:03:44.941 16:52:44 -- common/autotest_common.sh@1502 -- # grep 0000:82:00.0/nvme/nvme 00:03:45.199 16:52:44 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 00:03:45.199 16:52:44 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 ]] 00:03:45.199 16:52:44 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 00:03:45.199 16:52:44 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:03:45.199 16:52:44 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:03:45.199 16:52:44 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:03:45.199 16:52:44 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:03:45.199 16:52:44 -- common/autotest_common.sh@1545 -- # grep oacs 00:03:45.199 16:52:44 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:03:45.199 16:52:44 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:03:45.199 16:52:44 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:03:45.199 16:52:44 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:03:45.199 16:52:44 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:03:45.200 16:52:44 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:03:45.200 16:52:44 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:03:45.200 16:52:44 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:03:45.200 16:52:44 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:03:45.200 16:52:44 -- common/autotest_common.sh@1557 -- # continue 00:03:45.200 16:52:44 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:45.200 16:52:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:45.200 16:52:44 -- common/autotest_common.sh@10 -- # set +x 00:03:45.200 16:52:44 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:45.200 16:52:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:45.200 16:52:44 -- common/autotest_common.sh@10 -- # set +x 00:03:45.200 16:52:44 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:46.576 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:46.576 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:46.576 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:46.576 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:46.576 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:46.576 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:46.576 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:46.576 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:46.576 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:46.576 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 
00:03:46.576 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:46.576 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:46.576 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:46.576 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:46.576 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:46.576 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:47.515 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:03:47.515 16:52:47 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:47.515 16:52:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:47.515 16:52:47 -- common/autotest_common.sh@10 -- # set +x 00:03:47.515 16:52:47 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:47.515 16:52:47 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:03:47.515 16:52:47 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:03:47.515 16:52:47 -- common/autotest_common.sh@1577 -- # bdfs=() 00:03:47.515 16:52:47 -- common/autotest_common.sh@1577 -- # local bdfs 00:03:47.515 16:52:47 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:03:47.515 16:52:47 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:47.515 16:52:47 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:47.515 16:52:47 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:47.515 16:52:47 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:47.515 16:52:47 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:47.515 16:52:47 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:47.515 16:52:47 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:82:00.0 00:03:47.515 16:52:47 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:03:47.515 16:52:47 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:82:00.0/device 00:03:47.515 16:52:47 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:03:47.515 16:52:47 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:47.515 16:52:47 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:03:47.515 16:52:47 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:82:00.0 00:03:47.515 16:52:47 -- common/autotest_common.sh@1592 -- # [[ -z 0000:82:00.0 ]] 00:03:47.515 16:52:47 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=996801 00:03:47.515 16:52:47 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:47.515 16:52:47 -- common/autotest_common.sh@1598 -- # waitforlisten 996801 00:03:47.515 16:52:47 -- common/autotest_common.sh@829 -- # '[' -z 996801 ']' 00:03:47.515 16:52:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:47.515 16:52:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:47.515 16:52:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:47.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:47.515 16:52:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:47.515 16:52:47 -- common/autotest_common.sh@10 -- # set +x 00:03:47.773 [2024-07-12 16:52:47.251062] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
00:03:47.773 [2024-07-12 16:52:47.251166] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid996801 ] 00:03:47.773 EAL: No free 2048 kB hugepages reported on node 1 00:03:47.773 [2024-07-12 16:52:47.309586] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:47.773 [2024-07-12 16:52:47.422036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:48.031 16:52:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:48.031 16:52:47 -- common/autotest_common.sh@862 -- # return 0 00:03:48.031 16:52:47 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:03:48.031 16:52:47 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:03:48.031 16:52:47 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:82:00.0 00:03:51.315 nvme0n1 00:03:51.315 16:52:50 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:51.315 [2024-07-12 16:52:50.965329] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:03:51.315 [2024-07-12 16:52:50.965376] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:51.315 request: 00:03:51.315 { 00:03:51.315 "nvme_ctrlr_name": "nvme0", 00:03:51.315 "password": "test", 00:03:51.315 "method": "bdev_nvme_opal_revert", 00:03:51.315 "req_id": 1 00:03:51.315 } 00:03:51.315 Got JSON-RPC error response 00:03:51.315 response: 00:03:51.315 { 00:03:51.315 "code": -32603, 00:03:51.315 "message": "Internal error" 00:03:51.315 } 00:03:51.315 16:52:50 -- common/autotest_common.sh@1604 -- # true 00:03:51.315 16:52:50 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:03:51.315 16:52:50 -- common/autotest_common.sh@1608 -- # killprocess 996801 00:03:51.315 16:52:50 -- common/autotest_common.sh@948 -- # '[' -z 996801 ']' 00:03:51.315 16:52:50 -- common/autotest_common.sh@952 -- # kill -0 996801 00:03:51.315 16:52:50 -- common/autotest_common.sh@953 -- # uname 00:03:51.315 16:52:50 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:51.315 16:52:50 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 996801 00:03:51.573 16:52:51 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:51.573 16:52:51 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:51.573 16:52:51 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 996801' 00:03:51.573 killing process with pid 996801 00:03:51.573 16:52:51 -- common/autotest_common.sh@967 -- # kill 996801 00:03:51.573 16:52:51 -- common/autotest_common.sh@972 -- # wait 996801 00:03:53.469 16:52:52 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:03:53.469 16:52:52 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:53.469 16:52:52 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:53.469 16:52:52 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:53.469 16:52:52 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:53.469 16:52:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:53.469 16:52:52 -- common/autotest_common.sh@10 -- # set +x 00:03:53.469 16:52:52 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:03:53.469 16:52:52 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:53.469 16:52:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:53.469 16:52:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:53.469 16:52:52 -- common/autotest_common.sh@10 -- # set +x 00:03:53.469 ************************************ 00:03:53.469 START TEST env 00:03:53.469 ************************************ 00:03:53.469 16:52:52 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:53.469 * Looking for test storage... 00:03:53.469 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:53.469 16:52:52 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:53.469 16:52:52 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:53.469 16:52:52 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:53.469 16:52:52 env -- common/autotest_common.sh@10 -- # set +x 00:03:53.469 ************************************ 00:03:53.469 START TEST env_memory 00:03:53.469 ************************************ 00:03:53.469 16:52:52 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:53.469 00:03:53.469 00:03:53.469 CUnit - A unit testing framework for C - Version 2.1-3 00:03:53.469 http://cunit.sourceforge.net/ 00:03:53.469 00:03:53.469 00:03:53.469 Suite: memory 00:03:53.469 Test: alloc and free memory map ...[2024-07-12 16:52:52.941337] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:53.469 passed 00:03:53.469 Test: mem map translation ...[2024-07-12 16:52:52.961129] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:53.469 [2024-07-12 16:52:52.961150] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:53.469 [2024-07-12 16:52:52.961205] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:53.469 [2024-07-12 16:52:52.961218] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:53.469 passed 00:03:53.469 Test: mem map registration ...[2024-07-12 16:52:53.002175] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:53.469 [2024-07-12 16:52:53.002196] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:53.469 passed 00:03:53.469 Test: mem map adjacent registrations ...passed 00:03:53.469 00:03:53.469 Run Summary: Type Total Ran Passed Failed Inactive 00:03:53.469 suites 1 1 n/a 0 0 00:03:53.469 tests 4 4 4 0 0 00:03:53.469 asserts 152 152 152 0 n/a 00:03:53.469 00:03:53.469 Elapsed time = 0.141 seconds 00:03:53.469 00:03:53.469 real 0m0.149s 00:03:53.469 user 0m0.141s 00:03:53.469 sys 0m0.007s 00:03:53.469 16:52:53 
env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:53.469 16:52:53 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:53.469 ************************************ 00:03:53.469 END TEST env_memory 00:03:53.469 ************************************ 00:03:53.469 16:52:53 env -- common/autotest_common.sh@1142 -- # return 0 00:03:53.469 16:52:53 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:53.469 16:52:53 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:53.469 16:52:53 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:53.469 16:52:53 env -- common/autotest_common.sh@10 -- # set +x 00:03:53.469 ************************************ 00:03:53.469 START TEST env_vtophys 00:03:53.469 ************************************ 00:03:53.469 16:52:53 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:53.469 EAL: lib.eal log level changed from notice to debug 00:03:53.469 EAL: Detected lcore 0 as core 0 on socket 0 00:03:53.469 EAL: Detected lcore 1 as core 1 on socket 0 00:03:53.469 EAL: Detected lcore 2 as core 2 on socket 0 00:03:53.469 EAL: Detected lcore 3 as core 3 on socket 0 00:03:53.469 EAL: Detected lcore 4 as core 4 on socket 0 00:03:53.469 EAL: Detected lcore 5 as core 5 on socket 0 00:03:53.469 EAL: Detected lcore 6 as core 8 on socket 0 00:03:53.469 EAL: Detected lcore 7 as core 9 on socket 0 00:03:53.469 EAL: Detected lcore 8 as core 10 on socket 0 00:03:53.469 EAL: Detected lcore 9 as core 11 on socket 0 00:03:53.469 EAL: Detected lcore 10 as core 12 on socket 0 00:03:53.469 EAL: Detected lcore 11 as core 13 on socket 0 00:03:53.469 EAL: Detected lcore 12 as core 0 on socket 1 00:03:53.469 EAL: Detected lcore 13 as core 1 on socket 1 00:03:53.469 EAL: Detected lcore 14 as core 2 on socket 1 00:03:53.469 EAL: Detected lcore 15 as core 3 on socket 1 00:03:53.469 EAL: Detected lcore 16 as core 4 on socket 1 00:03:53.469 EAL: Detected lcore 17 as core 5 on socket 1 00:03:53.469 EAL: Detected lcore 18 as core 8 on socket 1 00:03:53.469 EAL: Detected lcore 19 as core 9 on socket 1 00:03:53.469 EAL: Detected lcore 20 as core 10 on socket 1 00:03:53.469 EAL: Detected lcore 21 as core 11 on socket 1 00:03:53.469 EAL: Detected lcore 22 as core 12 on socket 1 00:03:53.469 EAL: Detected lcore 23 as core 13 on socket 1 00:03:53.469 EAL: Detected lcore 24 as core 0 on socket 0 00:03:53.469 EAL: Detected lcore 25 as core 1 on socket 0 00:03:53.469 EAL: Detected lcore 26 as core 2 on socket 0 00:03:53.469 EAL: Detected lcore 27 as core 3 on socket 0 00:03:53.469 EAL: Detected lcore 28 as core 4 on socket 0 00:03:53.469 EAL: Detected lcore 29 as core 5 on socket 0 00:03:53.469 EAL: Detected lcore 30 as core 8 on socket 0 00:03:53.469 EAL: Detected lcore 31 as core 9 on socket 0 00:03:53.469 EAL: Detected lcore 32 as core 10 on socket 0 00:03:53.469 EAL: Detected lcore 33 as core 11 on socket 0 00:03:53.469 EAL: Detected lcore 34 as core 12 on socket 0 00:03:53.469 EAL: Detected lcore 35 as core 13 on socket 0 00:03:53.469 EAL: Detected lcore 36 as core 0 on socket 1 00:03:53.469 EAL: Detected lcore 37 as core 1 on socket 1 00:03:53.469 EAL: Detected lcore 38 as core 2 on socket 1 00:03:53.469 EAL: Detected lcore 39 as core 3 on socket 1 00:03:53.469 EAL: Detected lcore 40 as core 4 on socket 1 00:03:53.469 EAL: Detected lcore 41 as core 5 on socket 1 00:03:53.469 EAL: Detected 
lcore 42 as core 8 on socket 1 00:03:53.469 EAL: Detected lcore 43 as core 9 on socket 1 00:03:53.469 EAL: Detected lcore 44 as core 10 on socket 1 00:03:53.469 EAL: Detected lcore 45 as core 11 on socket 1 00:03:53.469 EAL: Detected lcore 46 as core 12 on socket 1 00:03:53.469 EAL: Detected lcore 47 as core 13 on socket 1 00:03:53.469 EAL: Maximum logical cores by configuration: 128 00:03:53.469 EAL: Detected CPU lcores: 48 00:03:53.469 EAL: Detected NUMA nodes: 2 00:03:53.469 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:53.470 EAL: Detected shared linkage of DPDK 00:03:53.470 EAL: No shared files mode enabled, IPC will be disabled 00:03:53.470 EAL: Bus pci wants IOVA as 'DC' 00:03:53.470 EAL: Buses did not request a specific IOVA mode. 00:03:53.470 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:53.470 EAL: Selected IOVA mode 'VA' 00:03:53.470 EAL: No free 2048 kB hugepages reported on node 1 00:03:53.470 EAL: Probing VFIO support... 00:03:53.470 EAL: IOMMU type 1 (Type 1) is supported 00:03:53.470 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:53.470 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:53.470 EAL: VFIO support initialized 00:03:53.470 EAL: Ask a virtual area of 0x2e000 bytes 00:03:53.470 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:53.470 EAL: Setting up physically contiguous memory... 00:03:53.470 EAL: Setting maximum number of open files to 524288 00:03:53.470 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:53.470 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:53.470 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:53.470 EAL: Ask a virtual area of 0x61000 bytes 00:03:53.470 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:53.470 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:53.470 EAL: Ask a virtual area of 0x400000000 bytes 00:03:53.470 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:53.470 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:53.470 EAL: Ask a virtual area of 0x61000 bytes 00:03:53.470 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:53.470 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:53.470 EAL: Ask a virtual area of 0x400000000 bytes 00:03:53.470 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:53.470 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:53.470 EAL: Ask a virtual area of 0x61000 bytes 00:03:53.470 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:53.470 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:53.470 EAL: Ask a virtual area of 0x400000000 bytes 00:03:53.470 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:53.470 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:53.470 EAL: Ask a virtual area of 0x61000 bytes 00:03:53.470 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:53.470 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:53.470 EAL: Ask a virtual area of 0x400000000 bytes 00:03:53.470 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:53.470 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:53.470 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:53.470 EAL: Ask a virtual area of 0x61000 bytes 00:03:53.470 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:53.470 EAL: Memseg list 
allocated at socket 1, page size 0x800kB 00:03:53.470 EAL: Ask a virtual area of 0x400000000 bytes 00:03:53.470 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:53.470 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:53.470 EAL: Ask a virtual area of 0x61000 bytes 00:03:53.470 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:53.470 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:53.470 EAL: Ask a virtual area of 0x400000000 bytes 00:03:53.470 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:53.470 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:53.470 EAL: Ask a virtual area of 0x61000 bytes 00:03:53.470 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:53.470 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:53.470 EAL: Ask a virtual area of 0x400000000 bytes 00:03:53.470 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:53.470 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:53.470 EAL: Ask a virtual area of 0x61000 bytes 00:03:53.470 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:53.470 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:53.470 EAL: Ask a virtual area of 0x400000000 bytes 00:03:53.470 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:53.470 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:53.470 EAL: Hugepages will be freed exactly as allocated. 00:03:53.470 EAL: No shared files mode enabled, IPC is disabled 00:03:53.470 EAL: No shared files mode enabled, IPC is disabled 00:03:53.470 EAL: TSC frequency is ~2700000 KHz 00:03:53.470 EAL: Main lcore 0 is ready (tid=7feae8c13a00;cpuset=[0]) 00:03:53.470 EAL: Trying to obtain current memory policy. 00:03:53.470 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.470 EAL: Restoring previous memory policy: 0 00:03:53.470 EAL: request: mp_malloc_sync 00:03:53.470 EAL: No shared files mode enabled, IPC is disabled 00:03:53.470 EAL: Heap on socket 0 was expanded by 2MB 00:03:53.470 EAL: No shared files mode enabled, IPC is disabled 00:03:53.727 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:53.727 EAL: Mem event callback 'spdk:(nil)' registered 00:03:53.727 00:03:53.727 00:03:53.727 CUnit - A unit testing framework for C - Version 2.1-3 00:03:53.727 http://cunit.sourceforge.net/ 00:03:53.727 00:03:53.727 00:03:53.727 Suite: components_suite 00:03:53.727 Test: vtophys_malloc_test ...passed 00:03:53.727 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:53.727 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.727 EAL: Restoring previous memory policy: 4 00:03:53.727 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.727 EAL: request: mp_malloc_sync 00:03:53.727 EAL: No shared files mode enabled, IPC is disabled 00:03:53.727 EAL: Heap on socket 0 was expanded by 4MB 00:03:53.727 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.727 EAL: request: mp_malloc_sync 00:03:53.727 EAL: No shared files mode enabled, IPC is disabled 00:03:53.727 EAL: Heap on socket 0 was shrunk by 4MB 00:03:53.727 EAL: Trying to obtain current memory policy. 
00:03:53.727 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.727 EAL: Restoring previous memory policy: 4 00:03:53.727 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.727 EAL: request: mp_malloc_sync 00:03:53.727 EAL: No shared files mode enabled, IPC is disabled 00:03:53.727 EAL: Heap on socket 0 was expanded by 6MB 00:03:53.727 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.727 EAL: request: mp_malloc_sync 00:03:53.727 EAL: No shared files mode enabled, IPC is disabled 00:03:53.727 EAL: Heap on socket 0 was shrunk by 6MB 00:03:53.727 EAL: Trying to obtain current memory policy. 00:03:53.727 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.727 EAL: Restoring previous memory policy: 4 00:03:53.727 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.727 EAL: request: mp_malloc_sync 00:03:53.727 EAL: No shared files mode enabled, IPC is disabled 00:03:53.727 EAL: Heap on socket 0 was expanded by 10MB 00:03:53.727 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.727 EAL: request: mp_malloc_sync 00:03:53.727 EAL: No shared files mode enabled, IPC is disabled 00:03:53.727 EAL: Heap on socket 0 was shrunk by 10MB 00:03:53.727 EAL: Trying to obtain current memory policy. 00:03:53.727 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.727 EAL: Restoring previous memory policy: 4 00:03:53.727 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.727 EAL: request: mp_malloc_sync 00:03:53.727 EAL: No shared files mode enabled, IPC is disabled 00:03:53.727 EAL: Heap on socket 0 was expanded by 18MB 00:03:53.727 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.727 EAL: request: mp_malloc_sync 00:03:53.727 EAL: No shared files mode enabled, IPC is disabled 00:03:53.727 EAL: Heap on socket 0 was shrunk by 18MB 00:03:53.727 EAL: Trying to obtain current memory policy. 00:03:53.727 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.727 EAL: Restoring previous memory policy: 4 00:03:53.727 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.727 EAL: request: mp_malloc_sync 00:03:53.727 EAL: No shared files mode enabled, IPC is disabled 00:03:53.727 EAL: Heap on socket 0 was expanded by 34MB 00:03:53.727 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.727 EAL: request: mp_malloc_sync 00:03:53.727 EAL: No shared files mode enabled, IPC is disabled 00:03:53.727 EAL: Heap on socket 0 was shrunk by 34MB 00:03:53.727 EAL: Trying to obtain current memory policy. 00:03:53.727 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.727 EAL: Restoring previous memory policy: 4 00:03:53.727 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.727 EAL: request: mp_malloc_sync 00:03:53.727 EAL: No shared files mode enabled, IPC is disabled 00:03:53.727 EAL: Heap on socket 0 was expanded by 66MB 00:03:53.727 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.727 EAL: request: mp_malloc_sync 00:03:53.727 EAL: No shared files mode enabled, IPC is disabled 00:03:53.727 EAL: Heap on socket 0 was shrunk by 66MB 00:03:53.727 EAL: Trying to obtain current memory policy. 
00:03:53.727 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.727 EAL: Restoring previous memory policy: 4 00:03:53.727 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.727 EAL: request: mp_malloc_sync 00:03:53.727 EAL: No shared files mode enabled, IPC is disabled 00:03:53.727 EAL: Heap on socket 0 was expanded by 130MB 00:03:53.727 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.727 EAL: request: mp_malloc_sync 00:03:53.727 EAL: No shared files mode enabled, IPC is disabled 00:03:53.727 EAL: Heap on socket 0 was shrunk by 130MB 00:03:53.727 EAL: Trying to obtain current memory policy. 00:03:53.727 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.727 EAL: Restoring previous memory policy: 4 00:03:53.727 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.727 EAL: request: mp_malloc_sync 00:03:53.727 EAL: No shared files mode enabled, IPC is disabled 00:03:53.727 EAL: Heap on socket 0 was expanded by 258MB 00:03:53.983 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.983 EAL: request: mp_malloc_sync 00:03:53.983 EAL: No shared files mode enabled, IPC is disabled 00:03:53.983 EAL: Heap on socket 0 was shrunk by 258MB 00:03:53.983 EAL: Trying to obtain current memory policy. 00:03:53.983 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.983 EAL: Restoring previous memory policy: 4 00:03:53.983 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.983 EAL: request: mp_malloc_sync 00:03:53.983 EAL: No shared files mode enabled, IPC is disabled 00:03:53.983 EAL: Heap on socket 0 was expanded by 514MB 00:03:54.239 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.239 EAL: request: mp_malloc_sync 00:03:54.239 EAL: No shared files mode enabled, IPC is disabled 00:03:54.239 EAL: Heap on socket 0 was shrunk by 514MB 00:03:54.239 EAL: Trying to obtain current memory policy. 
00:03:54.239 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:54.495 EAL: Restoring previous memory policy: 4 00:03:54.496 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.496 EAL: request: mp_malloc_sync 00:03:54.496 EAL: No shared files mode enabled, IPC is disabled 00:03:54.496 EAL: Heap on socket 0 was expanded by 1026MB 00:03:54.751 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.010 EAL: request: mp_malloc_sync 00:03:55.010 EAL: No shared files mode enabled, IPC is disabled 00:03:55.010 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:55.010 passed 00:03:55.010 00:03:55.010 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.010 suites 1 1 n/a 0 0 00:03:55.010 tests 2 2 2 0 0 00:03:55.010 asserts 497 497 497 0 n/a 00:03:55.010 00:03:55.010 Elapsed time = 1.321 seconds 00:03:55.010 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.010 EAL: request: mp_malloc_sync 00:03:55.010 EAL: No shared files mode enabled, IPC is disabled 00:03:55.010 EAL: Heap on socket 0 was shrunk by 2MB 00:03:55.010 EAL: No shared files mode enabled, IPC is disabled 00:03:55.010 EAL: No shared files mode enabled, IPC is disabled 00:03:55.010 EAL: No shared files mode enabled, IPC is disabled 00:03:55.010 00:03:55.010 real 0m1.437s 00:03:55.010 user 0m0.850s 00:03:55.010 sys 0m0.552s 00:03:55.010 16:52:54 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.010 16:52:54 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:55.010 ************************************ 00:03:55.010 END TEST env_vtophys 00:03:55.010 ************************************ 00:03:55.010 16:52:54 env -- common/autotest_common.sh@1142 -- # return 0 00:03:55.010 16:52:54 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:55.010 16:52:54 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:55.010 16:52:54 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.010 16:52:54 env -- common/autotest_common.sh@10 -- # set +x 00:03:55.010 ************************************ 00:03:55.010 START TEST env_pci 00:03:55.010 ************************************ 00:03:55.010 16:52:54 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:55.010 00:03:55.010 00:03:55.010 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.010 http://cunit.sourceforge.net/ 00:03:55.010 00:03:55.010 00:03:55.010 Suite: pci 00:03:55.010 Test: pci_hook ...[2024-07-12 16:52:54.598053] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 997695 has claimed it 00:03:55.010 EAL: Cannot find device (10000:00:01.0) 00:03:55.010 EAL: Failed to attach device on primary process 00:03:55.010 passed 00:03:55.010 00:03:55.010 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.010 suites 1 1 n/a 0 0 00:03:55.010 tests 1 1 1 0 0 00:03:55.010 asserts 25 25 25 0 n/a 00:03:55.010 00:03:55.010 Elapsed time = 0.021 seconds 00:03:55.010 00:03:55.010 real 0m0.032s 00:03:55.010 user 0m0.010s 00:03:55.010 sys 0m0.022s 00:03:55.010 16:52:54 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.010 16:52:54 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:55.010 ************************************ 00:03:55.010 END TEST env_pci 00:03:55.010 ************************************ 
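The env sub-tests traced above (env_memory, env_vtophys, env_pci) are standalone CUnit binaries that run_test simply executes; the sketch below shows how they could be invoked by hand, assuming the SPDK tree is built under the workspace path used by this job and the commands run as root, as the autotest does. SPDK_DIR is only a convenience variable for this sketch, not something the scripts define.

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_DIR/test/env/memory/memory_ut"         # mem map alloc/translation/registration suite
"$SPDK_DIR/test/env/vtophys/vtophys"          # EAL init plus heap expand/shrink suite
"$SPDK_DIR/test/env/pci/pci_ut"               # pci_hook claim test; the 10000:00:01.0 address seen above is not a real device
"$SPDK_DIR/test/env/env_dpdk_post_init/env_dpdk_post_init" -c 0x1 --base-virtaddr=0x200000000000   # probes the allowed NVMe/I/OAT devices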
00:03:55.010 16:52:54 env -- common/autotest_common.sh@1142 -- # return 0 00:03:55.010 16:52:54 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:55.010 16:52:54 env -- env/env.sh@15 -- # uname 00:03:55.010 16:52:54 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:55.010 16:52:54 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:55.010 16:52:54 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:55.010 16:52:54 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:03:55.010 16:52:54 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.010 16:52:54 env -- common/autotest_common.sh@10 -- # set +x 00:03:55.010 ************************************ 00:03:55.010 START TEST env_dpdk_post_init 00:03:55.010 ************************************ 00:03:55.010 16:52:54 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:55.010 EAL: Detected CPU lcores: 48 00:03:55.010 EAL: Detected NUMA nodes: 2 00:03:55.010 EAL: Detected shared linkage of DPDK 00:03:55.010 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:55.267 EAL: Selected IOVA mode 'VA' 00:03:55.267 EAL: No free 2048 kB hugepages reported on node 1 00:03:55.267 EAL: VFIO support initialized 00:03:55.267 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:55.267 EAL: Using IOMMU type 1 (Type 1) 00:03:55.267 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:03:55.267 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:03:55.267 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:03:55.267 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:03:55.267 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:03:55.267 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:03:55.267 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:03:55.267 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:03:55.267 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:03:55.267 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:03:55.267 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:03:55.267 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:03:55.267 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:03:55.267 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:03:55.267 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:03:55.267 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:03:56.201 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:82:00.0 (socket 1) 00:03:59.493 EAL: Releasing PCI mapped resource for 0000:82:00.0 00:03:59.493 EAL: Calling pci_unmap_resource for 0000:82:00.0 at 0x202001040000 00:03:59.493 Starting DPDK initialization... 00:03:59.493 Starting SPDK post initialization... 00:03:59.493 SPDK NVMe probe 00:03:59.493 Attaching to 0000:82:00.0 00:03:59.493 Attached to 0000:82:00.0 00:03:59.493 Cleaning up... 
00:03:59.493 00:03:59.493 real 0m4.380s 00:03:59.493 user 0m3.270s 00:03:59.493 sys 0m0.170s 00:03:59.493 16:52:59 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.493 16:52:59 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:59.493 ************************************ 00:03:59.493 END TEST env_dpdk_post_init 00:03:59.493 ************************************ 00:03:59.493 16:52:59 env -- common/autotest_common.sh@1142 -- # return 0 00:03:59.493 16:52:59 env -- env/env.sh@26 -- # uname 00:03:59.493 16:52:59 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:59.493 16:52:59 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:59.493 16:52:59 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:59.493 16:52:59 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.493 16:52:59 env -- common/autotest_common.sh@10 -- # set +x 00:03:59.493 ************************************ 00:03:59.493 START TEST env_mem_callbacks 00:03:59.493 ************************************ 00:03:59.493 16:52:59 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:59.493 EAL: Detected CPU lcores: 48 00:03:59.493 EAL: Detected NUMA nodes: 2 00:03:59.493 EAL: Detected shared linkage of DPDK 00:03:59.493 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:59.493 EAL: Selected IOVA mode 'VA' 00:03:59.493 EAL: No free 2048 kB hugepages reported on node 1 00:03:59.493 EAL: VFIO support initialized 00:03:59.493 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:59.493 00:03:59.493 00:03:59.493 CUnit - A unit testing framework for C - Version 2.1-3 00:03:59.493 http://cunit.sourceforge.net/ 00:03:59.493 00:03:59.493 00:03:59.493 Suite: memory 00:03:59.493 Test: test ... 
00:03:59.493 register 0x200000200000 2097152 00:03:59.493 malloc 3145728 00:03:59.493 register 0x200000400000 4194304 00:03:59.493 buf 0x200000500000 len 3145728 PASSED 00:03:59.493 malloc 64 00:03:59.493 buf 0x2000004fff40 len 64 PASSED 00:03:59.493 malloc 4194304 00:03:59.493 register 0x200000800000 6291456 00:03:59.493 buf 0x200000a00000 len 4194304 PASSED 00:03:59.493 free 0x200000500000 3145728 00:03:59.493 free 0x2000004fff40 64 00:03:59.493 unregister 0x200000400000 4194304 PASSED 00:03:59.493 free 0x200000a00000 4194304 00:03:59.493 unregister 0x200000800000 6291456 PASSED 00:03:59.493 malloc 8388608 00:03:59.493 register 0x200000400000 10485760 00:03:59.493 buf 0x200000600000 len 8388608 PASSED 00:03:59.493 free 0x200000600000 8388608 00:03:59.493 unregister 0x200000400000 10485760 PASSED 00:03:59.493 passed 00:03:59.493 00:03:59.493 Run Summary: Type Total Ran Passed Failed Inactive 00:03:59.493 suites 1 1 n/a 0 0 00:03:59.493 tests 1 1 1 0 0 00:03:59.493 asserts 15 15 15 0 n/a 00:03:59.493 00:03:59.493 Elapsed time = 0.004 seconds 00:03:59.493 00:03:59.493 real 0m0.047s 00:03:59.493 user 0m0.016s 00:03:59.493 sys 0m0.030s 00:03:59.493 16:52:59 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.493 16:52:59 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:59.493 ************************************ 00:03:59.493 END TEST env_mem_callbacks 00:03:59.493 ************************************ 00:03:59.493 16:52:59 env -- common/autotest_common.sh@1142 -- # return 0 00:03:59.493 00:03:59.493 real 0m6.326s 00:03:59.493 user 0m4.398s 00:03:59.493 sys 0m0.970s 00:03:59.493 16:52:59 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.493 16:52:59 env -- common/autotest_common.sh@10 -- # set +x 00:03:59.493 ************************************ 00:03:59.493 END TEST env 00:03:59.493 ************************************ 00:03:59.493 16:52:59 -- common/autotest_common.sh@1142 -- # return 0 00:03:59.493 16:52:59 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:59.493 16:52:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:59.493 16:52:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.493 16:52:59 -- common/autotest_common.sh@10 -- # set +x 00:03:59.752 ************************************ 00:03:59.752 START TEST rpc 00:03:59.752 ************************************ 00:03:59.752 16:52:59 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:59.752 * Looking for test storage... 00:03:59.752 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:59.752 16:52:59 rpc -- rpc/rpc.sh@65 -- # spdk_pid=998351 00:03:59.752 16:52:59 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:59.752 16:52:59 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:59.752 16:52:59 rpc -- rpc/rpc.sh@67 -- # waitforlisten 998351 00:03:59.752 16:52:59 rpc -- common/autotest_common.sh@829 -- # '[' -z 998351 ']' 00:03:59.752 16:52:59 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:59.752 16:52:59 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:59.752 16:52:59 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:03:59.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:59.752 16:52:59 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:59.752 16:52:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:59.752 [2024-07-12 16:52:59.313681] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:03:59.752 [2024-07-12 16:52:59.313792] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid998351 ] 00:03:59.752 EAL: No free 2048 kB hugepages reported on node 1 00:03:59.752 [2024-07-12 16:52:59.370316] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:00.010 [2024-07-12 16:52:59.484681] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:00.010 [2024-07-12 16:52:59.484753] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 998351' to capture a snapshot of events at runtime. 00:04:00.010 [2024-07-12 16:52:59.484769] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:00.010 [2024-07-12 16:52:59.484796] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:00.010 [2024-07-12 16:52:59.484807] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid998351 for offline analysis/debug. 00:04:00.010 [2024-07-12 16:52:59.484835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:00.268 16:52:59 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:00.268 16:52:59 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:00.268 16:52:59 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:00.268 16:52:59 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:00.268 16:52:59 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:00.268 16:52:59 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:00.268 16:52:59 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:00.268 16:52:59 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:00.268 16:52:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.268 ************************************ 00:04:00.268 START TEST rpc_integrity 00:04:00.268 ************************************ 00:04:00.268 16:52:59 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:00.268 16:52:59 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:00.268 16:52:59 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:00.268 16:52:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.268 16:52:59 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:00.268 16:52:59 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:04:00.268 16:52:59 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:00.268 16:52:59 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:00.268 16:52:59 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:00.268 16:52:59 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:00.268 16:52:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.268 16:52:59 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:00.268 16:52:59 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:00.268 16:52:59 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:00.268 16:52:59 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:00.268 16:52:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.268 16:52:59 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:00.268 16:52:59 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:00.268 { 00:04:00.268 "name": "Malloc0", 00:04:00.268 "aliases": [ 00:04:00.268 "cfc4dbb3-5820-4886-8da8-774567198a34" 00:04:00.268 ], 00:04:00.268 "product_name": "Malloc disk", 00:04:00.268 "block_size": 512, 00:04:00.268 "num_blocks": 16384, 00:04:00.268 "uuid": "cfc4dbb3-5820-4886-8da8-774567198a34", 00:04:00.268 "assigned_rate_limits": { 00:04:00.268 "rw_ios_per_sec": 0, 00:04:00.268 "rw_mbytes_per_sec": 0, 00:04:00.268 "r_mbytes_per_sec": 0, 00:04:00.268 "w_mbytes_per_sec": 0 00:04:00.268 }, 00:04:00.268 "claimed": false, 00:04:00.268 "zoned": false, 00:04:00.268 "supported_io_types": { 00:04:00.268 "read": true, 00:04:00.268 "write": true, 00:04:00.268 "unmap": true, 00:04:00.268 "flush": true, 00:04:00.268 "reset": true, 00:04:00.268 "nvme_admin": false, 00:04:00.268 "nvme_io": false, 00:04:00.268 "nvme_io_md": false, 00:04:00.268 "write_zeroes": true, 00:04:00.268 "zcopy": true, 00:04:00.268 "get_zone_info": false, 00:04:00.268 "zone_management": false, 00:04:00.268 "zone_append": false, 00:04:00.268 "compare": false, 00:04:00.268 "compare_and_write": false, 00:04:00.268 "abort": true, 00:04:00.268 "seek_hole": false, 00:04:00.268 "seek_data": false, 00:04:00.268 "copy": true, 00:04:00.268 "nvme_iov_md": false 00:04:00.268 }, 00:04:00.268 "memory_domains": [ 00:04:00.268 { 00:04:00.268 "dma_device_id": "system", 00:04:00.268 "dma_device_type": 1 00:04:00.268 }, 00:04:00.268 { 00:04:00.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:00.268 "dma_device_type": 2 00:04:00.268 } 00:04:00.268 ], 00:04:00.268 "driver_specific": {} 00:04:00.268 } 00:04:00.268 ]' 00:04:00.268 16:52:59 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:00.268 16:52:59 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:00.268 16:52:59 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:00.268 16:52:59 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:00.268 16:52:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.268 [2024-07-12 16:52:59.851064] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:00.268 [2024-07-12 16:52:59.851116] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:00.268 [2024-07-12 16:52:59.851135] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11753e0 00:04:00.268 [2024-07-12 16:52:59.851147] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:00.268 
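The rpc_integrity case above creates an 8 MiB malloc bdev with 512-byte blocks, layers a passthru bdev on top of it, and checks the length of bdev_get_bdevs output at each step before tearing both down. A hedged sketch of the same RPC sequence driven by scripts/rpc.py against a running spdk_tgt (paths illustrative):

  rpc=./scripts/rpc.py
  malloc=$($rpc bdev_malloc_create 8 512)        # 8 MiB malloc bdev, 512-byte blocks -> prints its name
  $rpc bdev_passthru_create -b "$malloc" -p Passthru0
  $rpc bdev_get_bdevs | jq length                # expect 2: the malloc and the passthru on top of it
  $rpc bdev_passthru_delete Passthru0
  $rpc bdev_malloc_delete "$malloc"
  $rpc bdev_get_bdevs | jq length                # back to 0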
[2024-07-12 16:52:59.852332] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:00.268 [2024-07-12 16:52:59.852356] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:00.268 Passthru0 00:04:00.268 16:52:59 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:00.268 16:52:59 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:00.268 16:52:59 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:00.268 16:52:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.268 16:52:59 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:00.268 16:52:59 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:00.268 { 00:04:00.268 "name": "Malloc0", 00:04:00.268 "aliases": [ 00:04:00.268 "cfc4dbb3-5820-4886-8da8-774567198a34" 00:04:00.268 ], 00:04:00.268 "product_name": "Malloc disk", 00:04:00.268 "block_size": 512, 00:04:00.268 "num_blocks": 16384, 00:04:00.268 "uuid": "cfc4dbb3-5820-4886-8da8-774567198a34", 00:04:00.268 "assigned_rate_limits": { 00:04:00.268 "rw_ios_per_sec": 0, 00:04:00.269 "rw_mbytes_per_sec": 0, 00:04:00.269 "r_mbytes_per_sec": 0, 00:04:00.269 "w_mbytes_per_sec": 0 00:04:00.269 }, 00:04:00.269 "claimed": true, 00:04:00.269 "claim_type": "exclusive_write", 00:04:00.269 "zoned": false, 00:04:00.269 "supported_io_types": { 00:04:00.269 "read": true, 00:04:00.269 "write": true, 00:04:00.269 "unmap": true, 00:04:00.269 "flush": true, 00:04:00.269 "reset": true, 00:04:00.269 "nvme_admin": false, 00:04:00.269 "nvme_io": false, 00:04:00.269 "nvme_io_md": false, 00:04:00.269 "write_zeroes": true, 00:04:00.269 "zcopy": true, 00:04:00.269 "get_zone_info": false, 00:04:00.269 "zone_management": false, 00:04:00.269 "zone_append": false, 00:04:00.269 "compare": false, 00:04:00.269 "compare_and_write": false, 00:04:00.269 "abort": true, 00:04:00.269 "seek_hole": false, 00:04:00.269 "seek_data": false, 00:04:00.269 "copy": true, 00:04:00.269 "nvme_iov_md": false 00:04:00.269 }, 00:04:00.269 "memory_domains": [ 00:04:00.269 { 00:04:00.269 "dma_device_id": "system", 00:04:00.269 "dma_device_type": 1 00:04:00.269 }, 00:04:00.269 { 00:04:00.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:00.269 "dma_device_type": 2 00:04:00.269 } 00:04:00.269 ], 00:04:00.269 "driver_specific": {} 00:04:00.269 }, 00:04:00.269 { 00:04:00.269 "name": "Passthru0", 00:04:00.269 "aliases": [ 00:04:00.269 "d1653f02-65f3-587a-a905-a83b790495fa" 00:04:00.269 ], 00:04:00.269 "product_name": "passthru", 00:04:00.269 "block_size": 512, 00:04:00.269 "num_blocks": 16384, 00:04:00.269 "uuid": "d1653f02-65f3-587a-a905-a83b790495fa", 00:04:00.269 "assigned_rate_limits": { 00:04:00.269 "rw_ios_per_sec": 0, 00:04:00.269 "rw_mbytes_per_sec": 0, 00:04:00.269 "r_mbytes_per_sec": 0, 00:04:00.269 "w_mbytes_per_sec": 0 00:04:00.269 }, 00:04:00.269 "claimed": false, 00:04:00.269 "zoned": false, 00:04:00.269 "supported_io_types": { 00:04:00.269 "read": true, 00:04:00.269 "write": true, 00:04:00.269 "unmap": true, 00:04:00.269 "flush": true, 00:04:00.269 "reset": true, 00:04:00.269 "nvme_admin": false, 00:04:00.269 "nvme_io": false, 00:04:00.269 "nvme_io_md": false, 00:04:00.269 "write_zeroes": true, 00:04:00.269 "zcopy": true, 00:04:00.269 "get_zone_info": false, 00:04:00.269 "zone_management": false, 00:04:00.269 "zone_append": false, 00:04:00.269 "compare": false, 00:04:00.269 "compare_and_write": false, 00:04:00.269 "abort": true, 00:04:00.269 "seek_hole": false, 
00:04:00.269 "seek_data": false, 00:04:00.269 "copy": true, 00:04:00.269 "nvme_iov_md": false 00:04:00.269 }, 00:04:00.269 "memory_domains": [ 00:04:00.269 { 00:04:00.269 "dma_device_id": "system", 00:04:00.269 "dma_device_type": 1 00:04:00.269 }, 00:04:00.269 { 00:04:00.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:00.269 "dma_device_type": 2 00:04:00.269 } 00:04:00.269 ], 00:04:00.269 "driver_specific": { 00:04:00.269 "passthru": { 00:04:00.269 "name": "Passthru0", 00:04:00.269 "base_bdev_name": "Malloc0" 00:04:00.269 } 00:04:00.269 } 00:04:00.269 } 00:04:00.269 ]' 00:04:00.269 16:52:59 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:00.269 16:52:59 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:00.269 16:52:59 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:00.269 16:52:59 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:00.269 16:52:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.269 16:52:59 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:00.269 16:52:59 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:00.269 16:52:59 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:00.269 16:52:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.269 16:52:59 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:00.269 16:52:59 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:00.269 16:52:59 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:00.269 16:52:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.269 16:52:59 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:00.269 16:52:59 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:00.269 16:52:59 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:00.527 16:52:59 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:00.527 00:04:00.527 real 0m0.211s 00:04:00.527 user 0m0.139s 00:04:00.527 sys 0m0.018s 00:04:00.527 16:52:59 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:00.527 16:52:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.527 ************************************ 00:04:00.527 END TEST rpc_integrity 00:04:00.527 ************************************ 00:04:00.527 16:52:59 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:00.527 16:52:59 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:00.527 16:52:59 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:00.527 16:52:59 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:00.527 16:52:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.527 ************************************ 00:04:00.527 START TEST rpc_plugins 00:04:00.527 ************************************ 00:04:00.527 16:53:00 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:00.527 16:53:00 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:00.527 16:53:00 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:00.527 16:53:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:00.527 16:53:00 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:00.527 16:53:00 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:00.527 16:53:00 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:04:00.527 16:53:00 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:00.527 16:53:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:00.527 16:53:00 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:00.527 16:53:00 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:00.527 { 00:04:00.527 "name": "Malloc1", 00:04:00.527 "aliases": [ 00:04:00.527 "3d15cb0a-ef68-4aa7-83bd-ddfab444f183" 00:04:00.527 ], 00:04:00.527 "product_name": "Malloc disk", 00:04:00.527 "block_size": 4096, 00:04:00.527 "num_blocks": 256, 00:04:00.527 "uuid": "3d15cb0a-ef68-4aa7-83bd-ddfab444f183", 00:04:00.527 "assigned_rate_limits": { 00:04:00.527 "rw_ios_per_sec": 0, 00:04:00.527 "rw_mbytes_per_sec": 0, 00:04:00.527 "r_mbytes_per_sec": 0, 00:04:00.527 "w_mbytes_per_sec": 0 00:04:00.527 }, 00:04:00.527 "claimed": false, 00:04:00.527 "zoned": false, 00:04:00.527 "supported_io_types": { 00:04:00.527 "read": true, 00:04:00.527 "write": true, 00:04:00.527 "unmap": true, 00:04:00.527 "flush": true, 00:04:00.527 "reset": true, 00:04:00.527 "nvme_admin": false, 00:04:00.527 "nvme_io": false, 00:04:00.527 "nvme_io_md": false, 00:04:00.527 "write_zeroes": true, 00:04:00.527 "zcopy": true, 00:04:00.527 "get_zone_info": false, 00:04:00.527 "zone_management": false, 00:04:00.527 "zone_append": false, 00:04:00.527 "compare": false, 00:04:00.528 "compare_and_write": false, 00:04:00.528 "abort": true, 00:04:00.528 "seek_hole": false, 00:04:00.528 "seek_data": false, 00:04:00.528 "copy": true, 00:04:00.528 "nvme_iov_md": false 00:04:00.528 }, 00:04:00.528 "memory_domains": [ 00:04:00.528 { 00:04:00.528 "dma_device_id": "system", 00:04:00.528 "dma_device_type": 1 00:04:00.528 }, 00:04:00.528 { 00:04:00.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:00.528 "dma_device_type": 2 00:04:00.528 } 00:04:00.528 ], 00:04:00.528 "driver_specific": {} 00:04:00.528 } 00:04:00.528 ]' 00:04:00.528 16:53:00 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:00.528 16:53:00 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:00.528 16:53:00 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:00.528 16:53:00 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:00.528 16:53:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:00.528 16:53:00 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:00.528 16:53:00 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:00.528 16:53:00 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:00.528 16:53:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:00.528 16:53:00 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:00.528 16:53:00 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:00.528 16:53:00 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:00.528 16:53:00 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:00.528 00:04:00.528 real 0m0.111s 00:04:00.528 user 0m0.069s 00:04:00.528 sys 0m0.012s 00:04:00.528 16:53:00 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:00.528 16:53:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:00.528 ************************************ 00:04:00.528 END TEST rpc_plugins 00:04:00.528 ************************************ 00:04:00.528 16:53:00 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:00.528 16:53:00 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:00.528 16:53:00 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:00.528 16:53:00 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:00.528 16:53:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.528 ************************************ 00:04:00.528 START TEST rpc_trace_cmd_test 00:04:00.528 ************************************ 00:04:00.528 16:53:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:00.528 16:53:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:00.528 16:53:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:00.528 16:53:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:00.528 16:53:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:00.528 16:53:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:00.528 16:53:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:00.528 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid998351", 00:04:00.528 "tpoint_group_mask": "0x8", 00:04:00.528 "iscsi_conn": { 00:04:00.528 "mask": "0x2", 00:04:00.528 "tpoint_mask": "0x0" 00:04:00.528 }, 00:04:00.528 "scsi": { 00:04:00.528 "mask": "0x4", 00:04:00.528 "tpoint_mask": "0x0" 00:04:00.528 }, 00:04:00.528 "bdev": { 00:04:00.528 "mask": "0x8", 00:04:00.528 "tpoint_mask": "0xffffffffffffffff" 00:04:00.528 }, 00:04:00.528 "nvmf_rdma": { 00:04:00.528 "mask": "0x10", 00:04:00.528 "tpoint_mask": "0x0" 00:04:00.528 }, 00:04:00.528 "nvmf_tcp": { 00:04:00.528 "mask": "0x20", 00:04:00.528 "tpoint_mask": "0x0" 00:04:00.528 }, 00:04:00.528 "ftl": { 00:04:00.528 "mask": "0x40", 00:04:00.528 "tpoint_mask": "0x0" 00:04:00.528 }, 00:04:00.528 "blobfs": { 00:04:00.528 "mask": "0x80", 00:04:00.528 "tpoint_mask": "0x0" 00:04:00.528 }, 00:04:00.528 "dsa": { 00:04:00.528 "mask": "0x200", 00:04:00.528 "tpoint_mask": "0x0" 00:04:00.528 }, 00:04:00.528 "thread": { 00:04:00.528 "mask": "0x400", 00:04:00.528 "tpoint_mask": "0x0" 00:04:00.528 }, 00:04:00.528 "nvme_pcie": { 00:04:00.528 "mask": "0x800", 00:04:00.528 "tpoint_mask": "0x0" 00:04:00.528 }, 00:04:00.528 "iaa": { 00:04:00.528 "mask": "0x1000", 00:04:00.528 "tpoint_mask": "0x0" 00:04:00.528 }, 00:04:00.528 "nvme_tcp": { 00:04:00.528 "mask": "0x2000", 00:04:00.528 "tpoint_mask": "0x0" 00:04:00.528 }, 00:04:00.528 "bdev_nvme": { 00:04:00.528 "mask": "0x4000", 00:04:00.528 "tpoint_mask": "0x0" 00:04:00.528 }, 00:04:00.528 "sock": { 00:04:00.528 "mask": "0x8000", 00:04:00.528 "tpoint_mask": "0x0" 00:04:00.528 } 00:04:00.528 }' 00:04:00.528 16:53:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:00.528 16:53:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:00.528 16:53:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:00.786 16:53:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:00.786 16:53:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:00.786 16:53:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:00.786 16:53:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:00.786 16:53:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:00.786 16:53:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:00.786 16:53:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
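rpc_trace_cmd_test verifies that trace_get_info reports the shared-memory trace file for this pid, that the group mask 0x8 matches the bdev group requested with -e bdev, and that the bdev tpoint mask is fully enabled (0xffffffffffffffff). A sketch of inspecting the same state and dumping the captured events; the binary path and jq filter are illustrative, the pid and shm path come from the log above:

  ./scripts/rpc.py trace_get_info | jq -r '.tpoint_group_mask, .bdev.tpoint_mask'
  ./build/bin/spdk_trace -s spdk_tgt -p 998351   # reads /dev/shm/spdk_tgt_trace.pid998351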
00:04:00.786 00:04:00.786 real 0m0.200s 00:04:00.786 user 0m0.177s 00:04:00.786 sys 0m0.016s 00:04:00.786 16:53:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:00.786 16:53:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:00.786 ************************************ 00:04:00.786 END TEST rpc_trace_cmd_test 00:04:00.786 ************************************ 00:04:00.786 16:53:00 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:00.786 16:53:00 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:00.786 16:53:00 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:00.786 16:53:00 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:00.786 16:53:00 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:00.786 16:53:00 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:00.786 16:53:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.786 ************************************ 00:04:00.786 START TEST rpc_daemon_integrity 00:04:00.786 ************************************ 00:04:00.786 16:53:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:00.786 16:53:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:00.786 16:53:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:00.786 16:53:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.786 16:53:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:00.786 16:53:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:00.786 16:53:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:00.786 16:53:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:00.786 16:53:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:00.786 16:53:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:00.786 16:53:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.786 16:53:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:00.786 16:53:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:00.786 16:53:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:00.786 16:53:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:00.786 16:53:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.045 16:53:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.045 16:53:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:01.045 { 00:04:01.045 "name": "Malloc2", 00:04:01.045 "aliases": [ 00:04:01.045 "1574b856-d8a0-445c-9bc0-b64e23fd72ce" 00:04:01.045 ], 00:04:01.045 "product_name": "Malloc disk", 00:04:01.045 "block_size": 512, 00:04:01.045 "num_blocks": 16384, 00:04:01.045 "uuid": "1574b856-d8a0-445c-9bc0-b64e23fd72ce", 00:04:01.045 "assigned_rate_limits": { 00:04:01.045 "rw_ios_per_sec": 0, 00:04:01.045 "rw_mbytes_per_sec": 0, 00:04:01.045 "r_mbytes_per_sec": 0, 00:04:01.045 "w_mbytes_per_sec": 0 00:04:01.045 }, 00:04:01.045 "claimed": false, 00:04:01.045 "zoned": false, 00:04:01.045 "supported_io_types": { 00:04:01.045 "read": true, 00:04:01.045 "write": true, 00:04:01.045 "unmap": true, 00:04:01.045 "flush": true, 00:04:01.045 "reset": true, 00:04:01.045 "nvme_admin": false, 00:04:01.045 "nvme_io": false, 
00:04:01.045 "nvme_io_md": false, 00:04:01.045 "write_zeroes": true, 00:04:01.045 "zcopy": true, 00:04:01.045 "get_zone_info": false, 00:04:01.045 "zone_management": false, 00:04:01.045 "zone_append": false, 00:04:01.045 "compare": false, 00:04:01.045 "compare_and_write": false, 00:04:01.045 "abort": true, 00:04:01.045 "seek_hole": false, 00:04:01.045 "seek_data": false, 00:04:01.045 "copy": true, 00:04:01.045 "nvme_iov_md": false 00:04:01.045 }, 00:04:01.045 "memory_domains": [ 00:04:01.045 { 00:04:01.045 "dma_device_id": "system", 00:04:01.045 "dma_device_type": 1 00:04:01.045 }, 00:04:01.045 { 00:04:01.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:01.045 "dma_device_type": 2 00:04:01.045 } 00:04:01.045 ], 00:04:01.045 "driver_specific": {} 00:04:01.045 } 00:04:01.045 ]' 00:04:01.045 16:53:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:01.045 16:53:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:01.045 16:53:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:01.045 16:53:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.045 16:53:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.045 [2024-07-12 16:53:00.524967] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:01.045 [2024-07-12 16:53:00.525010] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:01.045 [2024-07-12 16:53:00.525047] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12132f0 00:04:01.045 [2024-07-12 16:53:00.525060] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:01.045 [2024-07-12 16:53:00.526192] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:01.045 [2024-07-12 16:53:00.526216] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:01.045 Passthru0 00:04:01.045 16:53:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.045 16:53:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:01.045 16:53:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.045 16:53:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.045 16:53:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.045 16:53:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:01.045 { 00:04:01.045 "name": "Malloc2", 00:04:01.045 "aliases": [ 00:04:01.045 "1574b856-d8a0-445c-9bc0-b64e23fd72ce" 00:04:01.045 ], 00:04:01.045 "product_name": "Malloc disk", 00:04:01.045 "block_size": 512, 00:04:01.045 "num_blocks": 16384, 00:04:01.045 "uuid": "1574b856-d8a0-445c-9bc0-b64e23fd72ce", 00:04:01.045 "assigned_rate_limits": { 00:04:01.045 "rw_ios_per_sec": 0, 00:04:01.045 "rw_mbytes_per_sec": 0, 00:04:01.045 "r_mbytes_per_sec": 0, 00:04:01.045 "w_mbytes_per_sec": 0 00:04:01.045 }, 00:04:01.045 "claimed": true, 00:04:01.045 "claim_type": "exclusive_write", 00:04:01.045 "zoned": false, 00:04:01.045 "supported_io_types": { 00:04:01.045 "read": true, 00:04:01.045 "write": true, 00:04:01.045 "unmap": true, 00:04:01.045 "flush": true, 00:04:01.045 "reset": true, 00:04:01.045 "nvme_admin": false, 00:04:01.045 "nvme_io": false, 00:04:01.045 "nvme_io_md": false, 00:04:01.045 "write_zeroes": true, 00:04:01.045 "zcopy": true, 00:04:01.045 "get_zone_info": 
false, 00:04:01.045 "zone_management": false, 00:04:01.045 "zone_append": false, 00:04:01.045 "compare": false, 00:04:01.045 "compare_and_write": false, 00:04:01.045 "abort": true, 00:04:01.045 "seek_hole": false, 00:04:01.045 "seek_data": false, 00:04:01.045 "copy": true, 00:04:01.045 "nvme_iov_md": false 00:04:01.045 }, 00:04:01.045 "memory_domains": [ 00:04:01.045 { 00:04:01.045 "dma_device_id": "system", 00:04:01.045 "dma_device_type": 1 00:04:01.045 }, 00:04:01.045 { 00:04:01.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:01.045 "dma_device_type": 2 00:04:01.045 } 00:04:01.045 ], 00:04:01.045 "driver_specific": {} 00:04:01.045 }, 00:04:01.045 { 00:04:01.045 "name": "Passthru0", 00:04:01.045 "aliases": [ 00:04:01.045 "526eba46-c42f-5802-b40b-4cf456efbafa" 00:04:01.045 ], 00:04:01.045 "product_name": "passthru", 00:04:01.045 "block_size": 512, 00:04:01.045 "num_blocks": 16384, 00:04:01.045 "uuid": "526eba46-c42f-5802-b40b-4cf456efbafa", 00:04:01.045 "assigned_rate_limits": { 00:04:01.045 "rw_ios_per_sec": 0, 00:04:01.045 "rw_mbytes_per_sec": 0, 00:04:01.045 "r_mbytes_per_sec": 0, 00:04:01.045 "w_mbytes_per_sec": 0 00:04:01.045 }, 00:04:01.045 "claimed": false, 00:04:01.045 "zoned": false, 00:04:01.045 "supported_io_types": { 00:04:01.045 "read": true, 00:04:01.045 "write": true, 00:04:01.045 "unmap": true, 00:04:01.045 "flush": true, 00:04:01.045 "reset": true, 00:04:01.045 "nvme_admin": false, 00:04:01.045 "nvme_io": false, 00:04:01.045 "nvme_io_md": false, 00:04:01.045 "write_zeroes": true, 00:04:01.045 "zcopy": true, 00:04:01.045 "get_zone_info": false, 00:04:01.045 "zone_management": false, 00:04:01.045 "zone_append": false, 00:04:01.045 "compare": false, 00:04:01.045 "compare_and_write": false, 00:04:01.045 "abort": true, 00:04:01.045 "seek_hole": false, 00:04:01.045 "seek_data": false, 00:04:01.045 "copy": true, 00:04:01.045 "nvme_iov_md": false 00:04:01.045 }, 00:04:01.045 "memory_domains": [ 00:04:01.045 { 00:04:01.045 "dma_device_id": "system", 00:04:01.045 "dma_device_type": 1 00:04:01.045 }, 00:04:01.045 { 00:04:01.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:01.045 "dma_device_type": 2 00:04:01.045 } 00:04:01.045 ], 00:04:01.045 "driver_specific": { 00:04:01.045 "passthru": { 00:04:01.045 "name": "Passthru0", 00:04:01.045 "base_bdev_name": "Malloc2" 00:04:01.045 } 00:04:01.045 } 00:04:01.045 } 00:04:01.045 ]' 00:04:01.045 16:53:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:01.045 16:53:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:01.045 16:53:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:01.045 16:53:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.045 16:53:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.045 16:53:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.045 16:53:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:01.045 16:53:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.045 16:53:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.045 16:53:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.045 16:53:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:01.045 16:53:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.045 16:53:00 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.045 16:53:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.045 16:53:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:01.045 16:53:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:01.045 16:53:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:01.045 00:04:01.045 real 0m0.219s 00:04:01.045 user 0m0.142s 00:04:01.045 sys 0m0.020s 00:04:01.045 16:53:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:01.045 16:53:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.045 ************************************ 00:04:01.045 END TEST rpc_daemon_integrity 00:04:01.045 ************************************ 00:04:01.045 16:53:00 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:01.045 16:53:00 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:01.045 16:53:00 rpc -- rpc/rpc.sh@84 -- # killprocess 998351 00:04:01.046 16:53:00 rpc -- common/autotest_common.sh@948 -- # '[' -z 998351 ']' 00:04:01.046 16:53:00 rpc -- common/autotest_common.sh@952 -- # kill -0 998351 00:04:01.046 16:53:00 rpc -- common/autotest_common.sh@953 -- # uname 00:04:01.046 16:53:00 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:01.046 16:53:00 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 998351 00:04:01.046 16:53:00 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:01.046 16:53:00 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:01.046 16:53:00 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 998351' 00:04:01.046 killing process with pid 998351 00:04:01.046 16:53:00 rpc -- common/autotest_common.sh@967 -- # kill 998351 00:04:01.046 16:53:00 rpc -- common/autotest_common.sh@972 -- # wait 998351 00:04:01.612 00:04:01.612 real 0m1.911s 00:04:01.612 user 0m2.386s 00:04:01.612 sys 0m0.579s 00:04:01.612 16:53:01 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:01.612 16:53:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.612 ************************************ 00:04:01.612 END TEST rpc 00:04:01.612 ************************************ 00:04:01.612 16:53:01 -- common/autotest_common.sh@1142 -- # return 0 00:04:01.612 16:53:01 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:01.612 16:53:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:01.612 16:53:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.612 16:53:01 -- common/autotest_common.sh@10 -- # set +x 00:04:01.612 ************************************ 00:04:01.612 START TEST skip_rpc 00:04:01.612 ************************************ 00:04:01.612 16:53:01 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:01.612 * Looking for test storage... 
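The first skip_rpc case that follows starts spdk_tgt with --no-rpc-server, so no Unix socket is ever created and any RPC call must fail; the test wraps the call in its NOT helper to assert exactly that. A standalone sketch of the behaviour being asserted (paths and sleep illustrative):

  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  sleep 5
  if ./scripts/rpc.py -t 2 spdk_get_version >/dev/null 2>&1; then
      echo "unexpected: RPC server answered"
  else
      echo "RPC unavailable, as expected"
  fi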
00:04:01.612 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:01.612 16:53:01 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:01.612 16:53:01 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:01.612 16:53:01 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:01.612 16:53:01 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:01.612 16:53:01 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.612 16:53:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.612 ************************************ 00:04:01.612 START TEST skip_rpc 00:04:01.612 ************************************ 00:04:01.612 16:53:01 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:01.612 16:53:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=998790 00:04:01.612 16:53:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:01.612 16:53:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:01.612 16:53:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:01.869 [2024-07-12 16:53:01.305702] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:04:01.869 [2024-07-12 16:53:01.305793] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid998790 ] 00:04:01.869 EAL: No free 2048 kB hugepages reported on node 1 00:04:01.869 [2024-07-12 16:53:01.360049] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:01.869 [2024-07-12 16:53:01.461709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.128 16:53:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:07.128 16:53:06 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:07.128 16:53:06 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:07.128 16:53:06 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:07.128 16:53:06 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:07.128 16:53:06 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:07.128 16:53:06 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:07.128 16:53:06 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:07.128 16:53:06 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.128 16:53:06 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.128 16:53:06 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:07.128 16:53:06 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:07.128 16:53:06 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:07.128 16:53:06 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:07.128 16:53:06 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:07.128 16:53:06 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:07.128 16:53:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 998790 00:04:07.128 16:53:06 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 998790 ']' 00:04:07.128 16:53:06 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 998790 00:04:07.128 16:53:06 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:07.128 16:53:06 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:07.128 16:53:06 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 998790 00:04:07.128 16:53:06 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:07.128 16:53:06 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:07.128 16:53:06 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 998790' 00:04:07.128 killing process with pid 998790 00:04:07.128 16:53:06 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 998790 00:04:07.128 16:53:06 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 998790 00:04:07.128 00:04:07.128 real 0m5.472s 00:04:07.128 user 0m5.176s 00:04:07.128 sys 0m0.305s 00:04:07.128 16:53:06 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.128 16:53:06 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.128 ************************************ 00:04:07.128 END TEST skip_rpc 00:04:07.128 ************************************ 00:04:07.128 16:53:06 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:07.128 16:53:06 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:07.128 16:53:06 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:07.128 16:53:06 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.128 16:53:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.128 ************************************ 00:04:07.128 START TEST skip_rpc_with_json 00:04:07.128 ************************************ 00:04:07.128 16:53:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:07.128 16:53:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:07.128 16:53:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=999493 00:04:07.128 16:53:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:07.128 16:53:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:07.128 16:53:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 999493 00:04:07.128 16:53:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 999493 ']' 00:04:07.128 16:53:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:07.128 16:53:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:07.128 16:53:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:07.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
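skip_rpc_with_json, which starts here, creates a TCP NVMe-oF transport, snapshots the running configuration with save_config, restarts the target from that JSON with --json, and greps the new log for the transport-init notice. A hedged sketch of that round-trip (file names and sleep are illustrative; the real test uses its config.json/log.txt under test/rpc):

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp
  $rpc save_config > config.json
  kill "$spdk_pid"; wait "$spdk_pid" 2>/dev/null
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json > log.txt 2>&1 &
  sleep 5
  grep -q 'TCP Transport Init' log.txt && echo "TCP transport restored from JSON"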
00:04:07.128 16:53:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:07.128 16:53:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:07.387 [2024-07-12 16:53:06.829429] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:04:07.387 [2024-07-12 16:53:06.829493] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid999493 ] 00:04:07.387 EAL: No free 2048 kB hugepages reported on node 1 00:04:07.387 [2024-07-12 16:53:06.884969] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:07.387 [2024-07-12 16:53:06.988367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.645 16:53:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:07.645 16:53:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:07.645 16:53:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:07.645 16:53:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.645 16:53:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:07.645 [2024-07-12 16:53:07.233143] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:07.645 request: 00:04:07.645 { 00:04:07.645 "trtype": "tcp", 00:04:07.645 "method": "nvmf_get_transports", 00:04:07.645 "req_id": 1 00:04:07.645 } 00:04:07.645 Got JSON-RPC error response 00:04:07.645 response: 00:04:07.645 { 00:04:07.645 "code": -19, 00:04:07.645 "message": "No such device" 00:04:07.645 } 00:04:07.645 16:53:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:07.645 16:53:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:07.645 16:53:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.645 16:53:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:07.645 [2024-07-12 16:53:07.241247] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:07.645 16:53:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.645 16:53:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:07.645 16:53:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.645 16:53:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:07.904 16:53:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.904 16:53:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:07.904 { 00:04:07.904 "subsystems": [ 00:04:07.904 { 00:04:07.904 "subsystem": "vfio_user_target", 00:04:07.904 "config": null 00:04:07.904 }, 00:04:07.904 { 00:04:07.904 "subsystem": "keyring", 00:04:07.904 "config": [] 00:04:07.904 }, 00:04:07.904 { 00:04:07.904 "subsystem": "iobuf", 00:04:07.904 "config": [ 00:04:07.904 { 00:04:07.904 "method": "iobuf_set_options", 00:04:07.904 "params": { 00:04:07.904 "small_pool_count": 8192, 00:04:07.904 "large_pool_count": 1024, 00:04:07.904 "small_bufsize": 8192, 00:04:07.904 "large_bufsize": 
135168 00:04:07.904 } 00:04:07.904 } 00:04:07.904 ] 00:04:07.904 }, 00:04:07.904 { 00:04:07.904 "subsystem": "sock", 00:04:07.904 "config": [ 00:04:07.904 { 00:04:07.904 "method": "sock_set_default_impl", 00:04:07.904 "params": { 00:04:07.904 "impl_name": "posix" 00:04:07.904 } 00:04:07.904 }, 00:04:07.904 { 00:04:07.904 "method": "sock_impl_set_options", 00:04:07.904 "params": { 00:04:07.904 "impl_name": "ssl", 00:04:07.904 "recv_buf_size": 4096, 00:04:07.904 "send_buf_size": 4096, 00:04:07.904 "enable_recv_pipe": true, 00:04:07.904 "enable_quickack": false, 00:04:07.904 "enable_placement_id": 0, 00:04:07.904 "enable_zerocopy_send_server": true, 00:04:07.904 "enable_zerocopy_send_client": false, 00:04:07.904 "zerocopy_threshold": 0, 00:04:07.904 "tls_version": 0, 00:04:07.904 "enable_ktls": false 00:04:07.904 } 00:04:07.904 }, 00:04:07.904 { 00:04:07.904 "method": "sock_impl_set_options", 00:04:07.904 "params": { 00:04:07.904 "impl_name": "posix", 00:04:07.904 "recv_buf_size": 2097152, 00:04:07.904 "send_buf_size": 2097152, 00:04:07.904 "enable_recv_pipe": true, 00:04:07.904 "enable_quickack": false, 00:04:07.904 "enable_placement_id": 0, 00:04:07.904 "enable_zerocopy_send_server": true, 00:04:07.904 "enable_zerocopy_send_client": false, 00:04:07.904 "zerocopy_threshold": 0, 00:04:07.904 "tls_version": 0, 00:04:07.904 "enable_ktls": false 00:04:07.904 } 00:04:07.904 } 00:04:07.904 ] 00:04:07.904 }, 00:04:07.904 { 00:04:07.904 "subsystem": "vmd", 00:04:07.904 "config": [] 00:04:07.904 }, 00:04:07.904 { 00:04:07.904 "subsystem": "accel", 00:04:07.904 "config": [ 00:04:07.904 { 00:04:07.904 "method": "accel_set_options", 00:04:07.904 "params": { 00:04:07.904 "small_cache_size": 128, 00:04:07.904 "large_cache_size": 16, 00:04:07.904 "task_count": 2048, 00:04:07.904 "sequence_count": 2048, 00:04:07.904 "buf_count": 2048 00:04:07.904 } 00:04:07.904 } 00:04:07.904 ] 00:04:07.904 }, 00:04:07.904 { 00:04:07.904 "subsystem": "bdev", 00:04:07.904 "config": [ 00:04:07.904 { 00:04:07.904 "method": "bdev_set_options", 00:04:07.904 "params": { 00:04:07.904 "bdev_io_pool_size": 65535, 00:04:07.904 "bdev_io_cache_size": 256, 00:04:07.904 "bdev_auto_examine": true, 00:04:07.904 "iobuf_small_cache_size": 128, 00:04:07.904 "iobuf_large_cache_size": 16 00:04:07.904 } 00:04:07.904 }, 00:04:07.904 { 00:04:07.904 "method": "bdev_raid_set_options", 00:04:07.904 "params": { 00:04:07.904 "process_window_size_kb": 1024 00:04:07.904 } 00:04:07.904 }, 00:04:07.904 { 00:04:07.904 "method": "bdev_iscsi_set_options", 00:04:07.904 "params": { 00:04:07.904 "timeout_sec": 30 00:04:07.904 } 00:04:07.904 }, 00:04:07.904 { 00:04:07.904 "method": "bdev_nvme_set_options", 00:04:07.904 "params": { 00:04:07.904 "action_on_timeout": "none", 00:04:07.904 "timeout_us": 0, 00:04:07.904 "timeout_admin_us": 0, 00:04:07.904 "keep_alive_timeout_ms": 10000, 00:04:07.904 "arbitration_burst": 0, 00:04:07.904 "low_priority_weight": 0, 00:04:07.904 "medium_priority_weight": 0, 00:04:07.904 "high_priority_weight": 0, 00:04:07.904 "nvme_adminq_poll_period_us": 10000, 00:04:07.904 "nvme_ioq_poll_period_us": 0, 00:04:07.904 "io_queue_requests": 0, 00:04:07.904 "delay_cmd_submit": true, 00:04:07.904 "transport_retry_count": 4, 00:04:07.904 "bdev_retry_count": 3, 00:04:07.904 "transport_ack_timeout": 0, 00:04:07.904 "ctrlr_loss_timeout_sec": 0, 00:04:07.904 "reconnect_delay_sec": 0, 00:04:07.904 "fast_io_fail_timeout_sec": 0, 00:04:07.904 "disable_auto_failback": false, 00:04:07.904 "generate_uuids": false, 00:04:07.904 "transport_tos": 0, 
00:04:07.904 "nvme_error_stat": false, 00:04:07.904 "rdma_srq_size": 0, 00:04:07.904 "io_path_stat": false, 00:04:07.904 "allow_accel_sequence": false, 00:04:07.904 "rdma_max_cq_size": 0, 00:04:07.904 "rdma_cm_event_timeout_ms": 0, 00:04:07.904 "dhchap_digests": [ 00:04:07.904 "sha256", 00:04:07.904 "sha384", 00:04:07.904 "sha512" 00:04:07.904 ], 00:04:07.904 "dhchap_dhgroups": [ 00:04:07.904 "null", 00:04:07.904 "ffdhe2048", 00:04:07.904 "ffdhe3072", 00:04:07.904 "ffdhe4096", 00:04:07.904 "ffdhe6144", 00:04:07.904 "ffdhe8192" 00:04:07.904 ] 00:04:07.905 } 00:04:07.905 }, 00:04:07.905 { 00:04:07.905 "method": "bdev_nvme_set_hotplug", 00:04:07.905 "params": { 00:04:07.905 "period_us": 100000, 00:04:07.905 "enable": false 00:04:07.905 } 00:04:07.905 }, 00:04:07.905 { 00:04:07.905 "method": "bdev_wait_for_examine" 00:04:07.905 } 00:04:07.905 ] 00:04:07.905 }, 00:04:07.905 { 00:04:07.905 "subsystem": "scsi", 00:04:07.905 "config": null 00:04:07.905 }, 00:04:07.905 { 00:04:07.905 "subsystem": "scheduler", 00:04:07.905 "config": [ 00:04:07.905 { 00:04:07.905 "method": "framework_set_scheduler", 00:04:07.905 "params": { 00:04:07.905 "name": "static" 00:04:07.905 } 00:04:07.905 } 00:04:07.905 ] 00:04:07.905 }, 00:04:07.905 { 00:04:07.905 "subsystem": "vhost_scsi", 00:04:07.905 "config": [] 00:04:07.905 }, 00:04:07.905 { 00:04:07.905 "subsystem": "vhost_blk", 00:04:07.905 "config": [] 00:04:07.905 }, 00:04:07.905 { 00:04:07.905 "subsystem": "ublk", 00:04:07.905 "config": [] 00:04:07.905 }, 00:04:07.905 { 00:04:07.905 "subsystem": "nbd", 00:04:07.905 "config": [] 00:04:07.905 }, 00:04:07.905 { 00:04:07.905 "subsystem": "nvmf", 00:04:07.905 "config": [ 00:04:07.905 { 00:04:07.905 "method": "nvmf_set_config", 00:04:07.905 "params": { 00:04:07.905 "discovery_filter": "match_any", 00:04:07.905 "admin_cmd_passthru": { 00:04:07.905 "identify_ctrlr": false 00:04:07.905 } 00:04:07.905 } 00:04:07.905 }, 00:04:07.905 { 00:04:07.905 "method": "nvmf_set_max_subsystems", 00:04:07.905 "params": { 00:04:07.905 "max_subsystems": 1024 00:04:07.905 } 00:04:07.905 }, 00:04:07.905 { 00:04:07.905 "method": "nvmf_set_crdt", 00:04:07.905 "params": { 00:04:07.905 "crdt1": 0, 00:04:07.905 "crdt2": 0, 00:04:07.905 "crdt3": 0 00:04:07.905 } 00:04:07.905 }, 00:04:07.905 { 00:04:07.905 "method": "nvmf_create_transport", 00:04:07.905 "params": { 00:04:07.905 "trtype": "TCP", 00:04:07.905 "max_queue_depth": 128, 00:04:07.905 "max_io_qpairs_per_ctrlr": 127, 00:04:07.905 "in_capsule_data_size": 4096, 00:04:07.905 "max_io_size": 131072, 00:04:07.905 "io_unit_size": 131072, 00:04:07.905 "max_aq_depth": 128, 00:04:07.905 "num_shared_buffers": 511, 00:04:07.905 "buf_cache_size": 4294967295, 00:04:07.905 "dif_insert_or_strip": false, 00:04:07.905 "zcopy": false, 00:04:07.905 "c2h_success": true, 00:04:07.905 "sock_priority": 0, 00:04:07.905 "abort_timeout_sec": 1, 00:04:07.905 "ack_timeout": 0, 00:04:07.905 "data_wr_pool_size": 0 00:04:07.905 } 00:04:07.905 } 00:04:07.905 ] 00:04:07.905 }, 00:04:07.905 { 00:04:07.905 "subsystem": "iscsi", 00:04:07.905 "config": [ 00:04:07.905 { 00:04:07.905 "method": "iscsi_set_options", 00:04:07.905 "params": { 00:04:07.905 "node_base": "iqn.2016-06.io.spdk", 00:04:07.905 "max_sessions": 128, 00:04:07.905 "max_connections_per_session": 2, 00:04:07.905 "max_queue_depth": 64, 00:04:07.905 "default_time2wait": 2, 00:04:07.905 "default_time2retain": 20, 00:04:07.905 "first_burst_length": 8192, 00:04:07.905 "immediate_data": true, 00:04:07.905 "allow_duplicated_isid": false, 00:04:07.905 
"error_recovery_level": 0, 00:04:07.905 "nop_timeout": 60, 00:04:07.905 "nop_in_interval": 30, 00:04:07.905 "disable_chap": false, 00:04:07.905 "require_chap": false, 00:04:07.905 "mutual_chap": false, 00:04:07.905 "chap_group": 0, 00:04:07.905 "max_large_datain_per_connection": 64, 00:04:07.905 "max_r2t_per_connection": 4, 00:04:07.905 "pdu_pool_size": 36864, 00:04:07.905 "immediate_data_pool_size": 16384, 00:04:07.905 "data_out_pool_size": 2048 00:04:07.905 } 00:04:07.905 } 00:04:07.905 ] 00:04:07.905 } 00:04:07.905 ] 00:04:07.905 } 00:04:07.905 16:53:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:07.905 16:53:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 999493 00:04:07.905 16:53:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 999493 ']' 00:04:07.905 16:53:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 999493 00:04:07.905 16:53:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:07.905 16:53:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:07.905 16:53:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 999493 00:04:07.905 16:53:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:07.905 16:53:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:07.905 16:53:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 999493' 00:04:07.905 killing process with pid 999493 00:04:07.905 16:53:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 999493 00:04:07.905 16:53:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 999493 00:04:08.470 16:53:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=999631 00:04:08.470 16:53:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:08.470 16:53:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:13.726 16:53:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 999631 00:04:13.726 16:53:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 999631 ']' 00:04:13.726 16:53:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 999631 00:04:13.726 16:53:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:13.726 16:53:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:13.726 16:53:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 999631 00:04:13.726 16:53:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:13.726 16:53:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:13.726 16:53:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 999631' 00:04:13.726 killing process with pid 999631 00:04:13.726 16:53:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 999631 00:04:13.726 16:53:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 999631 00:04:13.726 16:53:13 
skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:13.726 16:53:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:13.726 00:04:13.726 real 0m6.553s 00:04:13.726 user 0m6.204s 00:04:13.726 sys 0m0.610s 00:04:13.726 16:53:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.726 16:53:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:13.726 ************************************ 00:04:13.726 END TEST skip_rpc_with_json 00:04:13.726 ************************************ 00:04:13.726 16:53:13 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:13.726 16:53:13 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:13.726 16:53:13 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.726 16:53:13 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.726 16:53:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.726 ************************************ 00:04:13.726 START TEST skip_rpc_with_delay 00:04:13.726 ************************************ 00:04:13.726 16:53:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:13.726 16:53:13 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:13.726 16:53:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:13.726 16:53:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:13.726 16:53:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:13.726 16:53:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:13.726 16:53:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:13.726 16:53:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:13.727 16:53:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:13.727 16:53:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:13.727 16:53:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:13.727 16:53:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:13.727 16:53:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:13.985 [2024-07-12 16:53:13.428754] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
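skip_rpc_with_delay is purely an argument-validation check: --wait-for-rpc is meaningless when --no-rpc-server suppresses the RPC server, so spdk_tgt must refuse to start with the "Cannot use '--wait-for-rpc'" error seen in the log above. A sketch of the expected non-zero exit:

  # Sketch: the two flags are mutually exclusive, so spdk_tgt should exit non-zero.
  if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "unexpected: target started"
  else
      echo "rejected as expected"
  fi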
00:04:13.985 [2024-07-12 16:53:13.428872] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:13.985 16:53:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:13.985 16:53:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:13.985 16:53:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:13.985 16:53:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:13.985 00:04:13.985 real 0m0.069s 00:04:13.985 user 0m0.041s 00:04:13.985 sys 0m0.027s 00:04:13.985 16:53:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.985 16:53:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:13.985 ************************************ 00:04:13.985 END TEST skip_rpc_with_delay 00:04:13.985 ************************************ 00:04:13.985 16:53:13 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:13.985 16:53:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:13.985 16:53:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:13.985 16:53:13 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:13.985 16:53:13 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.985 16:53:13 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.985 16:53:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.985 ************************************ 00:04:13.985 START TEST exit_on_failed_rpc_init 00:04:13.985 ************************************ 00:04:13.986 16:53:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:13.986 16:53:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1000347 00:04:13.986 16:53:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:13.986 16:53:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1000347 00:04:13.986 16:53:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 1000347 ']' 00:04:13.986 16:53:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:13.986 16:53:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:13.986 16:53:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:13.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:13.986 16:53:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:13.986 16:53:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:13.986 [2024-07-12 16:53:13.544042] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
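exit_on_failed_rpc_init launches a second spdk_tgt on core mask 0x2 while the first instance still owns the default RPC socket, and expects the second to fail with "RPC Unix domain socket path /var/tmp/spdk.sock in use", as seen further below. A sketch of the collision and the usual way to avoid it; the -r socket-path option is my assumption of the standard workaround, not something this test uses (it deliberately omits it to force the failure):

  ./build/bin/spdk_tgt -m 0x1 &                           # owns /var/tmp/spdk.sock
  ./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &    # separate socket, no clash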
00:04:13.986 [2024-07-12 16:53:13.544145] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1000347 ] 00:04:13.986 EAL: No free 2048 kB hugepages reported on node 1 00:04:13.986 [2024-07-12 16:53:13.602140] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.243 [2024-07-12 16:53:13.709245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.501 16:53:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:14.501 16:53:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:14.501 16:53:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:14.501 16:53:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:14.501 16:53:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:14.501 16:53:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:14.501 16:53:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:14.501 16:53:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:14.501 16:53:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:14.501 16:53:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:14.501 16:53:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:14.501 16:53:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:14.501 16:53:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:14.501 16:53:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:14.502 16:53:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:14.502 [2024-07-12 16:53:13.998701] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
00:04:14.502 [2024-07-12 16:53:13.998829] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1000363 ] 00:04:14.502 EAL: No free 2048 kB hugepages reported on node 1 00:04:14.502 [2024-07-12 16:53:14.057550] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.502 [2024-07-12 16:53:14.165687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:14.502 [2024-07-12 16:53:14.165839] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:14.502 [2024-07-12 16:53:14.165863] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:14.502 [2024-07-12 16:53:14.165876] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:14.760 16:53:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:14.760 16:53:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:14.760 16:53:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:14.760 16:53:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:14.760 16:53:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:14.760 16:53:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:14.760 16:53:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:14.760 16:53:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1000347 00:04:14.760 16:53:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 1000347 ']' 00:04:14.760 16:53:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 1000347 00:04:14.760 16:53:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:14.760 16:53:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:14.760 16:53:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1000347 00:04:14.760 16:53:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:14.760 16:53:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:14.760 16:53:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1000347' 00:04:14.760 killing process with pid 1000347 00:04:14.760 16:53:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 1000347 00:04:14.761 16:53:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 1000347 00:04:15.349 00:04:15.349 real 0m1.273s 00:04:15.349 user 0m1.455s 00:04:15.349 sys 0m0.431s 00:04:15.349 16:53:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:15.349 16:53:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:15.349 ************************************ 00:04:15.349 END TEST exit_on_failed_rpc_init 00:04:15.349 ************************************ 00:04:15.349 16:53:14 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:15.349 16:53:14 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:15.349 00:04:15.349 real 0m13.621s 00:04:15.349 user 0m12.969s 00:04:15.349 sys 0m1.553s 00:04:15.349 16:53:14 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:15.349 16:53:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.349 ************************************ 00:04:15.349 END TEST skip_rpc 00:04:15.349 ************************************ 00:04:15.349 16:53:14 -- common/autotest_common.sh@1142 -- # return 0 00:04:15.349 16:53:14 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:15.349 16:53:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:15.349 16:53:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:15.349 16:53:14 -- common/autotest_common.sh@10 -- # set +x 00:04:15.349 ************************************ 00:04:15.349 START TEST rpc_client 00:04:15.349 ************************************ 00:04:15.349 16:53:14 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:15.349 * Looking for test storage... 00:04:15.349 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:15.349 16:53:14 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:15.349 OK 00:04:15.349 16:53:14 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:15.349 00:04:15.349 real 0m0.066s 00:04:15.349 user 0m0.029s 00:04:15.349 sys 0m0.042s 00:04:15.349 16:53:14 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:15.349 16:53:14 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:15.349 ************************************ 00:04:15.349 END TEST rpc_client 00:04:15.349 ************************************ 00:04:15.349 16:53:14 -- common/autotest_common.sh@1142 -- # return 0 00:04:15.349 16:53:14 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:15.349 16:53:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:15.349 16:53:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:15.349 16:53:14 -- common/autotest_common.sh@10 -- # set +x 00:04:15.349 ************************************ 00:04:15.349 START TEST json_config 00:04:15.349 ************************************ 00:04:15.349 16:53:14 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:15.349 16:53:14 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:15.349 16:53:14 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:15.349 16:53:14 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:15.349 16:53:14 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:15.349 16:53:14 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:15.349 16:53:14 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:15.349 16:53:14 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:15.349 16:53:14 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:15.349 16:53:14 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:15.349 
16:53:14 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:15.349 16:53:14 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:15.349 16:53:14 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:15.349 16:53:14 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:04:15.349 16:53:14 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:04:15.349 16:53:14 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:15.349 16:53:14 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:15.349 16:53:14 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:15.349 16:53:14 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:15.349 16:53:14 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:15.349 16:53:14 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:15.349 16:53:14 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:15.349 16:53:14 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:15.349 16:53:14 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:15.349 16:53:14 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:15.349 16:53:14 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:15.349 16:53:14 json_config -- paths/export.sh@5 -- # export PATH 00:04:15.349 16:53:14 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:15.349 16:53:14 json_config -- nvmf/common.sh@47 -- # : 0 00:04:15.349 16:53:14 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:15.349 16:53:14 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:15.349 16:53:14 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:15.349 16:53:14 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:15.349 16:53:14 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:15.349 16:53:14 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:15.349 16:53:14 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:15.349 16:53:14 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:15.349 16:53:14 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:15.349 16:53:14 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:15.349 16:53:14 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:15.349 16:53:14 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:15.349 16:53:14 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:15.349 16:53:14 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:15.349 16:53:14 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:15.349 16:53:14 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:15.349 16:53:14 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:15.349 16:53:14 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:15.349 16:53:14 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:15.349 16:53:14 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:15.349 16:53:14 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:15.349 16:53:14 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:15.349 16:53:14 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:15.349 16:53:14 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:15.349 INFO: JSON configuration test init 00:04:15.349 16:53:14 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:15.349 16:53:14 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:15.349 16:53:14 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:15.349 16:53:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:15.349 16:53:14 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:15.349 16:53:14 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:15.349 16:53:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:15.349 16:53:14 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:15.349 16:53:14 json_config -- json_config/common.sh@9 -- # local app=target 00:04:15.349 16:53:14 json_config -- json_config/common.sh@10 -- # shift 00:04:15.349 16:53:14 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:15.349 16:53:14 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:15.349 16:53:14 
json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:15.350 16:53:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:15.350 16:53:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:15.350 16:53:14 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1000610 00:04:15.350 16:53:14 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:15.350 16:53:14 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:15.350 Waiting for target to run... 00:04:15.350 16:53:14 json_config -- json_config/common.sh@25 -- # waitforlisten 1000610 /var/tmp/spdk_tgt.sock 00:04:15.350 16:53:14 json_config -- common/autotest_common.sh@829 -- # '[' -z 1000610 ']' 00:04:15.350 16:53:14 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:15.350 16:53:14 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:15.350 16:53:15 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:15.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:15.350 16:53:15 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:15.350 16:53:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:15.608 [2024-07-12 16:53:15.048950] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:04:15.608 [2024-07-12 16:53:15.049064] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1000610 ] 00:04:15.608 EAL: No free 2048 kB hugepages reported on node 1 00:04:16.174 [2024-07-12 16:53:15.574350] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.174 [2024-07-12 16:53:15.667214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.432 16:53:16 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:16.432 16:53:16 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:16.432 16:53:16 json_config -- json_config/common.sh@26 -- # echo '' 00:04:16.432 00:04:16.432 16:53:16 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:16.432 16:53:16 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:16.432 16:53:16 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:16.432 16:53:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.432 16:53:16 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:16.432 16:53:16 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:16.432 16:53:16 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:16.432 16:53:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.432 16:53:16 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:16.432 16:53:16 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:16.432 16:53:16 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:19.716 16:53:19 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:19.716 16:53:19 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:19.716 16:53:19 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:19.716 16:53:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.716 16:53:19 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:19.716 16:53:19 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:19.716 16:53:19 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:19.716 16:53:19 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:19.716 16:53:19 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:19.716 16:53:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:19.974 16:53:19 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:19.974 16:53:19 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:19.974 16:53:19 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:19.974 16:53:19 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:19.974 16:53:19 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:19.974 16:53:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.974 16:53:19 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:19.974 16:53:19 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:19.974 16:53:19 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:19.974 16:53:19 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:19.974 16:53:19 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:19.975 16:53:19 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:19.975 16:53:19 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:19.975 16:53:19 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:19.975 16:53:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.975 16:53:19 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:19.975 16:53:19 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:19.975 16:53:19 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:19.975 16:53:19 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:19.975 16:53:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:20.233 MallocForNvmf0 00:04:20.233 16:53:19 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:20.233 16:53:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:20.490 MallocForNvmf1 00:04:20.490 16:53:19 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:20.490 16:53:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:20.490 [2024-07-12 16:53:20.171330] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:20.748 16:53:20 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:20.748 16:53:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:20.748 16:53:20 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:20.748 16:53:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:21.006 16:53:20 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:21.006 16:53:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:21.263 16:53:20 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:21.263 16:53:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:21.521 [2024-07-12 16:53:21.142468] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:21.521 16:53:21 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:21.521 16:53:21 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:21.521 16:53:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.521 16:53:21 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:21.521 16:53:21 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:21.521 16:53:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.521 16:53:21 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:21.521 16:53:21 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:21.521 16:53:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:21.778 MallocBdevForConfigChangeCheck 00:04:21.778 16:53:21 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:21.778 16:53:21 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:21.779 16:53:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.779 16:53:21 json_config -- 
json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:21.779 16:53:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:22.342 16:53:21 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:04:22.342 INFO: shutting down applications... 00:04:22.342 16:53:21 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:22.342 16:53:21 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:22.342 16:53:21 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:22.342 16:53:21 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:24.233 Calling clear_iscsi_subsystem 00:04:24.233 Calling clear_nvmf_subsystem 00:04:24.233 Calling clear_nbd_subsystem 00:04:24.233 Calling clear_ublk_subsystem 00:04:24.233 Calling clear_vhost_blk_subsystem 00:04:24.233 Calling clear_vhost_scsi_subsystem 00:04:24.233 Calling clear_bdev_subsystem 00:04:24.233 16:53:23 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:24.233 16:53:23 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:24.233 16:53:23 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:24.233 16:53:23 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:24.233 16:53:23 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:24.233 16:53:23 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:24.233 16:53:23 json_config -- json_config/json_config.sh@345 -- # break 00:04:24.233 16:53:23 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:24.233 16:53:23 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:24.233 16:53:23 json_config -- json_config/common.sh@31 -- # local app=target 00:04:24.233 16:53:23 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:24.233 16:53:23 json_config -- json_config/common.sh@35 -- # [[ -n 1000610 ]] 00:04:24.233 16:53:23 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1000610 00:04:24.233 16:53:23 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:24.233 16:53:23 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:24.233 16:53:23 json_config -- json_config/common.sh@41 -- # kill -0 1000610 00:04:24.233 16:53:23 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:24.796 16:53:24 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:24.796 16:53:24 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:24.796 16:53:24 json_config -- json_config/common.sh@41 -- # kill -0 1000610 00:04:24.796 16:53:24 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:24.796 16:53:24 json_config -- json_config/common.sh@43 -- # break 00:04:24.796 16:53:24 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:24.796 16:53:24 json_config -- json_config/common.sh@53 -- # echo 'SPDK target 
shutdown done' 00:04:24.796 SPDK target shutdown done 00:04:24.796 16:53:24 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:04:24.796 INFO: relaunching applications... 00:04:24.796 16:53:24 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:24.796 16:53:24 json_config -- json_config/common.sh@9 -- # local app=target 00:04:24.796 16:53:24 json_config -- json_config/common.sh@10 -- # shift 00:04:24.796 16:53:24 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:24.796 16:53:24 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:24.796 16:53:24 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:24.796 16:53:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:24.796 16:53:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:24.796 16:53:24 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1001918 00:04:24.796 16:53:24 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:24.796 Waiting for target to run... 00:04:24.796 16:53:24 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:24.796 16:53:24 json_config -- json_config/common.sh@25 -- # waitforlisten 1001918 /var/tmp/spdk_tgt.sock 00:04:24.796 16:53:24 json_config -- common/autotest_common.sh@829 -- # '[' -z 1001918 ']' 00:04:24.796 16:53:24 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:24.796 16:53:24 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:24.796 16:53:24 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:24.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:24.796 16:53:24 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:24.796 16:53:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.797 [2024-07-12 16:53:24.453876] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
00:04:24.797 [2024-07-12 16:53:24.453968] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1001918 ] 00:04:24.797 EAL: No free 2048 kB hugepages reported on node 1 00:04:25.361 [2024-07-12 16:53:24.953604] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.361 [2024-07-12 16:53:25.044992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.640 [2024-07-12 16:53:28.077305] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:28.640 [2024-07-12 16:53:28.109755] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:29.204 16:53:28 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:29.204 16:53:28 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:29.204 16:53:28 json_config -- json_config/common.sh@26 -- # echo '' 00:04:29.204 00:04:29.204 16:53:28 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:29.204 16:53:28 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:29.204 INFO: Checking if target configuration is the same... 00:04:29.204 16:53:28 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:29.204 16:53:28 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:29.204 16:53:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:29.204 + '[' 2 -ne 2 ']' 00:04:29.204 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:29.204 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:29.204 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:29.204 +++ basename /dev/fd/62 00:04:29.204 ++ mktemp /tmp/62.XXX 00:04:29.204 + tmp_file_1=/tmp/62.V1o 00:04:29.204 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:29.204 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:29.204 + tmp_file_2=/tmp/spdk_tgt_config.json.zkn 00:04:29.204 + ret=0 00:04:29.205 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:29.769 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:29.769 + diff -u /tmp/62.V1o /tmp/spdk_tgt_config.json.zkn 00:04:29.769 + echo 'INFO: JSON config files are the same' 00:04:29.769 INFO: JSON config files are the same 00:04:29.769 + rm /tmp/62.V1o /tmp/spdk_tgt_config.json.zkn 00:04:29.769 + exit 0 00:04:29.769 16:53:29 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:29.769 16:53:29 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:29.769 INFO: changing configuration and checking if this can be detected... 
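A condensed sketch of the target configuration exercised in the trace above: the json_config setup amounts to two malloc bdevs, a TCP transport, and one NVMe-oF subsystem carrying both namespaces plus a 127.0.0.1:4420 listener. Socket path, bdev names, and NQN are copied verbatim from the log; this is illustrative only and is not a substitute for the generated spdk_tgt_config.json.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/spdk_tgt.sock
  # Backing bdevs for the two namespaces
  $rpc -s $sock bdev_malloc_create 8 512 --name MallocForNvmf0
  $rpc -s $sock bdev_malloc_create 4 1024 --name MallocForNvmf1
  # TCP transport, then the subsystem with both namespaces and a listener
  $rpc -s $sock nvmf_create_transport -t tcp -u 8192 -c 0
  $rpc -s $sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $rpc -s $sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420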
00:04:29.769 16:53:29 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:29.769 16:53:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:30.026 16:53:29 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:30.026 16:53:29 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:30.026 16:53:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:30.026 + '[' 2 -ne 2 ']' 00:04:30.026 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:30.026 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:30.026 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:30.026 +++ basename /dev/fd/62 00:04:30.026 ++ mktemp /tmp/62.XXX 00:04:30.026 + tmp_file_1=/tmp/62.DXx 00:04:30.026 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:30.026 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:30.026 + tmp_file_2=/tmp/spdk_tgt_config.json.az6 00:04:30.026 + ret=0 00:04:30.026 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:30.282 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:30.282 + diff -u /tmp/62.DXx /tmp/spdk_tgt_config.json.az6 00:04:30.282 + ret=1 00:04:30.282 + echo '=== Start of file: /tmp/62.DXx ===' 00:04:30.282 + cat /tmp/62.DXx 00:04:30.282 + echo '=== End of file: /tmp/62.DXx ===' 00:04:30.282 + echo '' 00:04:30.282 + echo '=== Start of file: /tmp/spdk_tgt_config.json.az6 ===' 00:04:30.282 + cat /tmp/spdk_tgt_config.json.az6 00:04:30.282 + echo '=== End of file: /tmp/spdk_tgt_config.json.az6 ===' 00:04:30.282 + echo '' 00:04:30.282 + rm /tmp/62.DXx /tmp/spdk_tgt_config.json.az6 00:04:30.282 + exit 1 00:04:30.282 16:53:29 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:30.282 INFO: configuration change detected. 
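The "configuration is the same" / "configuration change detected" checks above reduce to sorting two save_config dumps and diffing them. A minimal sketch of that round trip, using the same helper scripts seen in the trace (the /tmp file names here are illustrative, not the mktemp names from the log):

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Dump the running config, normalize key ordering, and compare against the saved config
  $spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | $spdk/test/json_config/config_filter.py -method sort > /tmp/running.json
  $spdk/test/json_config/config_filter.py -method sort \
      < $spdk/spdk_tgt_config.json > /tmp/ondisk.json
  diff -u /tmp/running.json /tmp/ondisk.json \
      && echo 'INFO: JSON config files are the same'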
00:04:30.282 16:53:29 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:30.282 16:53:29 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:30.282 16:53:29 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:30.282 16:53:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.282 16:53:29 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:30.282 16:53:29 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:30.282 16:53:29 json_config -- json_config/json_config.sh@317 -- # [[ -n 1001918 ]] 00:04:30.282 16:53:29 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:30.282 16:53:29 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:30.283 16:53:29 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:30.283 16:53:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.283 16:53:29 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:30.283 16:53:29 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:30.540 16:53:29 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:30.540 16:53:29 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:30.540 16:53:29 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:30.540 16:53:29 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:30.540 16:53:29 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:30.540 16:53:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.540 16:53:29 json_config -- json_config/json_config.sh@323 -- # killprocess 1001918 00:04:30.540 16:53:29 json_config -- common/autotest_common.sh@948 -- # '[' -z 1001918 ']' 00:04:30.540 16:53:29 json_config -- common/autotest_common.sh@952 -- # kill -0 1001918 00:04:30.540 16:53:29 json_config -- common/autotest_common.sh@953 -- # uname 00:04:30.540 16:53:30 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:30.540 16:53:30 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1001918 00:04:30.540 16:53:30 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:30.540 16:53:30 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:30.540 16:53:30 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1001918' 00:04:30.540 killing process with pid 1001918 00:04:30.540 16:53:30 json_config -- common/autotest_common.sh@967 -- # kill 1001918 00:04:30.540 16:53:30 json_config -- common/autotest_common.sh@972 -- # wait 1001918 00:04:32.436 16:53:31 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:32.436 16:53:31 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:32.436 16:53:31 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:32.436 16:53:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.436 16:53:31 json_config -- json_config/json_config.sh@328 -- # return 0 00:04:32.436 16:53:31 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:32.436 INFO: Success 00:04:32.436 00:04:32.436 real 0m16.750s 
00:04:32.436 user 0m18.518s 00:04:32.436 sys 0m2.229s 00:04:32.436 16:53:31 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.436 16:53:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.436 ************************************ 00:04:32.436 END TEST json_config 00:04:32.436 ************************************ 00:04:32.436 16:53:31 -- common/autotest_common.sh@1142 -- # return 0 00:04:32.436 16:53:31 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:32.436 16:53:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:32.436 16:53:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.436 16:53:31 -- common/autotest_common.sh@10 -- # set +x 00:04:32.436 ************************************ 00:04:32.436 START TEST json_config_extra_key 00:04:32.436 ************************************ 00:04:32.436 16:53:31 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:32.436 16:53:31 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:32.436 16:53:31 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:32.436 16:53:31 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:32.436 16:53:31 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:32.436 16:53:31 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:32.436 16:53:31 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:32.437 16:53:31 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:32.437 16:53:31 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:32.437 16:53:31 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:32.437 16:53:31 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:32.437 16:53:31 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:32.437 16:53:31 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:32.437 16:53:31 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:04:32.437 16:53:31 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:04:32.437 16:53:31 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:32.437 16:53:31 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:32.437 16:53:31 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:32.437 16:53:31 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:32.437 16:53:31 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:32.437 16:53:31 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:32.437 16:53:31 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:32.437 16:53:31 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:32.437 16:53:31 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.437 16:53:31 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.437 16:53:31 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.437 16:53:31 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:32.437 16:53:31 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.437 16:53:31 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:32.437 16:53:31 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:32.437 16:53:31 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:32.437 16:53:31 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:32.437 16:53:31 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:32.437 16:53:31 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:32.437 16:53:31 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:32.437 16:53:31 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:32.437 16:53:31 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:32.437 16:53:31 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:32.437 16:53:31 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:32.437 16:53:31 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:32.437 16:53:31 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:32.437 16:53:31 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:32.437 16:53:31 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:32.437 16:53:31 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:32.437 16:53:31 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:32.437 16:53:31 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:32.437 16:53:31 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:32.437 16:53:31 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:32.437 INFO: launching applications... 00:04:32.437 16:53:31 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:32.437 16:53:31 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:32.437 16:53:31 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:32.437 16:53:31 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:32.437 16:53:31 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:32.437 16:53:31 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:32.437 16:53:31 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:32.437 16:53:31 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:32.437 16:53:31 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1002838 00:04:32.437 16:53:31 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:32.437 16:53:31 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:32.437 Waiting for target to run... 00:04:32.437 16:53:31 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1002838 /var/tmp/spdk_tgt.sock 00:04:32.437 16:53:31 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 1002838 ']' 00:04:32.437 16:53:31 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:32.437 16:53:31 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:32.437 16:53:31 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:32.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:32.437 16:53:31 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:32.437 16:53:31 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:32.437 [2024-07-12 16:53:31.847259] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
00:04:32.437 [2024-07-12 16:53:31.847342] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1002838 ] 00:04:32.437 EAL: No free 2048 kB hugepages reported on node 1 00:04:32.695 [2024-07-12 16:53:32.175889] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.695 [2024-07-12 16:53:32.254334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.261 16:53:32 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:33.261 16:53:32 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:04:33.261 16:53:32 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:33.261 00:04:33.261 16:53:32 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:33.261 INFO: shutting down applications... 00:04:33.261 16:53:32 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:33.261 16:53:32 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:33.261 16:53:32 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:33.261 16:53:32 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1002838 ]] 00:04:33.261 16:53:32 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1002838 00:04:33.261 16:53:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:33.261 16:53:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:33.261 16:53:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1002838 00:04:33.261 16:53:32 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:33.825 16:53:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:33.825 16:53:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:33.825 16:53:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1002838 00:04:33.825 16:53:33 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:33.825 16:53:33 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:33.825 16:53:33 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:33.825 16:53:33 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:33.825 SPDK target shutdown done 00:04:33.825 16:53:33 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:33.825 Success 00:04:33.825 00:04:33.825 real 0m1.550s 00:04:33.825 user 0m1.559s 00:04:33.825 sys 0m0.413s 00:04:33.825 16:53:33 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:33.825 16:53:33 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:33.825 ************************************ 00:04:33.825 END TEST json_config_extra_key 00:04:33.825 ************************************ 00:04:33.825 16:53:33 -- common/autotest_common.sh@1142 -- # return 0 00:04:33.825 16:53:33 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:33.825 16:53:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:33.825 16:53:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.825 16:53:33 -- 
common/autotest_common.sh@10 -- # set +x 00:04:33.825 ************************************ 00:04:33.825 START TEST alias_rpc 00:04:33.826 ************************************ 00:04:33.826 16:53:33 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:33.826 * Looking for test storage... 00:04:33.826 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:33.826 16:53:33 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:33.826 16:53:33 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1003148 00:04:33.826 16:53:33 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:33.826 16:53:33 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1003148 00:04:33.826 16:53:33 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 1003148 ']' 00:04:33.826 16:53:33 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:33.826 16:53:33 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:33.826 16:53:33 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:33.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:33.826 16:53:33 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:33.826 16:53:33 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.826 [2024-07-12 16:53:33.447356] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:04:33.826 [2024-07-12 16:53:33.447449] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1003148 ] 00:04:33.826 EAL: No free 2048 kB hugepages reported on node 1 00:04:33.826 [2024-07-12 16:53:33.504079] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.083 [2024-07-12 16:53:33.610388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.340 16:53:33 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:34.340 16:53:33 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:34.340 16:53:33 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:34.597 16:53:34 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1003148 00:04:34.597 16:53:34 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 1003148 ']' 00:04:34.597 16:53:34 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 1003148 00:04:34.597 16:53:34 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:04:34.597 16:53:34 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:34.597 16:53:34 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1003148 00:04:34.597 16:53:34 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:34.597 16:53:34 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:34.597 16:53:34 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1003148' 00:04:34.597 killing process with pid 1003148 00:04:34.597 16:53:34 alias_rpc -- 
common/autotest_common.sh@967 -- # kill 1003148 00:04:34.597 16:53:34 alias_rpc -- common/autotest_common.sh@972 -- # wait 1003148 00:04:35.161 00:04:35.161 real 0m1.220s 00:04:35.161 user 0m1.313s 00:04:35.161 sys 0m0.386s 00:04:35.161 16:53:34 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:35.161 16:53:34 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.162 ************************************ 00:04:35.162 END TEST alias_rpc 00:04:35.162 ************************************ 00:04:35.162 16:53:34 -- common/autotest_common.sh@1142 -- # return 0 00:04:35.162 16:53:34 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:35.162 16:53:34 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:35.162 16:53:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:35.162 16:53:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.162 16:53:34 -- common/autotest_common.sh@10 -- # set +x 00:04:35.162 ************************************ 00:04:35.162 START TEST spdkcli_tcp 00:04:35.162 ************************************ 00:04:35.162 16:53:34 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:35.162 * Looking for test storage... 00:04:35.162 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:35.162 16:53:34 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:35.162 16:53:34 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:35.162 16:53:34 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:35.162 16:53:34 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:35.162 16:53:34 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:35.162 16:53:34 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:35.162 16:53:34 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:35.162 16:53:34 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:35.162 16:53:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:35.162 16:53:34 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1003334 00:04:35.162 16:53:34 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:35.162 16:53:34 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1003334 00:04:35.162 16:53:34 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 1003334 ']' 00:04:35.162 16:53:34 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.162 16:53:34 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:35.162 16:53:34 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.162 16:53:34 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:35.162 16:53:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:35.162 [2024-07-12 16:53:34.722361] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
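The alias_rpc teardown just traced follows the harness's killprocess pattern: confirm the PID is still alive, read the process name so a sudo helper is never signalled by mistake, then send the default SIGTERM and wait for the reactor to exit. A minimal stand-alone sketch of that sequence, with the PID value purely illustrative:

    pid=1003148                                # illustrative; use the spdk_tgt PID captured at launch
    kill -0 "$pid" 2>/dev/null || exit 0       # target already gone, nothing to do
    pname=$(ps --no-headers -o comm= "$pid")   # refuse to signal an unexpected process
    [ "$pname" = "sudo" ] && exit 1
    echo "killing process with pid $pid"
    kill "$pid"                                # plain SIGTERM lets the reactor shut down cleanly
    wait "$pid" 2>/dev/null                    # works here because spdk_tgt is a child of this shell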
00:04:35.162 [2024-07-12 16:53:34.722443] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1003334 ] 00:04:35.162 EAL: No free 2048 kB hugepages reported on node 1 00:04:35.162 [2024-07-12 16:53:34.779084] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:35.419 [2024-07-12 16:53:34.894760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:35.419 [2024-07-12 16:53:34.894771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.678 16:53:35 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:35.678 16:53:35 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:04:35.678 16:53:35 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1003343 00:04:35.678 16:53:35 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:35.678 16:53:35 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:35.936 [ 00:04:35.936 "bdev_malloc_delete", 00:04:35.936 "bdev_malloc_create", 00:04:35.936 "bdev_null_resize", 00:04:35.936 "bdev_null_delete", 00:04:35.936 "bdev_null_create", 00:04:35.936 "bdev_nvme_cuse_unregister", 00:04:35.936 "bdev_nvme_cuse_register", 00:04:35.936 "bdev_opal_new_user", 00:04:35.936 "bdev_opal_set_lock_state", 00:04:35.936 "bdev_opal_delete", 00:04:35.936 "bdev_opal_get_info", 00:04:35.937 "bdev_opal_create", 00:04:35.937 "bdev_nvme_opal_revert", 00:04:35.937 "bdev_nvme_opal_init", 00:04:35.937 "bdev_nvme_send_cmd", 00:04:35.937 "bdev_nvme_get_path_iostat", 00:04:35.937 "bdev_nvme_get_mdns_discovery_info", 00:04:35.937 "bdev_nvme_stop_mdns_discovery", 00:04:35.937 "bdev_nvme_start_mdns_discovery", 00:04:35.937 "bdev_nvme_set_multipath_policy", 00:04:35.937 "bdev_nvme_set_preferred_path", 00:04:35.937 "bdev_nvme_get_io_paths", 00:04:35.937 "bdev_nvme_remove_error_injection", 00:04:35.937 "bdev_nvme_add_error_injection", 00:04:35.937 "bdev_nvme_get_discovery_info", 00:04:35.937 "bdev_nvme_stop_discovery", 00:04:35.937 "bdev_nvme_start_discovery", 00:04:35.937 "bdev_nvme_get_controller_health_info", 00:04:35.937 "bdev_nvme_disable_controller", 00:04:35.937 "bdev_nvme_enable_controller", 00:04:35.937 "bdev_nvme_reset_controller", 00:04:35.937 "bdev_nvme_get_transport_statistics", 00:04:35.937 "bdev_nvme_apply_firmware", 00:04:35.937 "bdev_nvme_detach_controller", 00:04:35.937 "bdev_nvme_get_controllers", 00:04:35.937 "bdev_nvme_attach_controller", 00:04:35.937 "bdev_nvme_set_hotplug", 00:04:35.937 "bdev_nvme_set_options", 00:04:35.937 "bdev_passthru_delete", 00:04:35.937 "bdev_passthru_create", 00:04:35.937 "bdev_lvol_set_parent_bdev", 00:04:35.937 "bdev_lvol_set_parent", 00:04:35.937 "bdev_lvol_check_shallow_copy", 00:04:35.937 "bdev_lvol_start_shallow_copy", 00:04:35.937 "bdev_lvol_grow_lvstore", 00:04:35.937 "bdev_lvol_get_lvols", 00:04:35.937 "bdev_lvol_get_lvstores", 00:04:35.937 "bdev_lvol_delete", 00:04:35.937 "bdev_lvol_set_read_only", 00:04:35.937 "bdev_lvol_resize", 00:04:35.937 "bdev_lvol_decouple_parent", 00:04:35.937 "bdev_lvol_inflate", 00:04:35.937 "bdev_lvol_rename", 00:04:35.937 "bdev_lvol_clone_bdev", 00:04:35.937 "bdev_lvol_clone", 00:04:35.937 "bdev_lvol_snapshot", 00:04:35.937 "bdev_lvol_create", 00:04:35.937 "bdev_lvol_delete_lvstore", 00:04:35.937 
"bdev_lvol_rename_lvstore", 00:04:35.937 "bdev_lvol_create_lvstore", 00:04:35.937 "bdev_raid_set_options", 00:04:35.937 "bdev_raid_remove_base_bdev", 00:04:35.937 "bdev_raid_add_base_bdev", 00:04:35.937 "bdev_raid_delete", 00:04:35.937 "bdev_raid_create", 00:04:35.937 "bdev_raid_get_bdevs", 00:04:35.937 "bdev_error_inject_error", 00:04:35.937 "bdev_error_delete", 00:04:35.937 "bdev_error_create", 00:04:35.937 "bdev_split_delete", 00:04:35.937 "bdev_split_create", 00:04:35.937 "bdev_delay_delete", 00:04:35.937 "bdev_delay_create", 00:04:35.937 "bdev_delay_update_latency", 00:04:35.937 "bdev_zone_block_delete", 00:04:35.937 "bdev_zone_block_create", 00:04:35.937 "blobfs_create", 00:04:35.937 "blobfs_detect", 00:04:35.937 "blobfs_set_cache_size", 00:04:35.937 "bdev_aio_delete", 00:04:35.937 "bdev_aio_rescan", 00:04:35.937 "bdev_aio_create", 00:04:35.937 "bdev_ftl_set_property", 00:04:35.937 "bdev_ftl_get_properties", 00:04:35.937 "bdev_ftl_get_stats", 00:04:35.937 "bdev_ftl_unmap", 00:04:35.937 "bdev_ftl_unload", 00:04:35.937 "bdev_ftl_delete", 00:04:35.937 "bdev_ftl_load", 00:04:35.937 "bdev_ftl_create", 00:04:35.937 "bdev_virtio_attach_controller", 00:04:35.937 "bdev_virtio_scsi_get_devices", 00:04:35.937 "bdev_virtio_detach_controller", 00:04:35.937 "bdev_virtio_blk_set_hotplug", 00:04:35.937 "bdev_iscsi_delete", 00:04:35.937 "bdev_iscsi_create", 00:04:35.937 "bdev_iscsi_set_options", 00:04:35.937 "accel_error_inject_error", 00:04:35.937 "ioat_scan_accel_module", 00:04:35.937 "dsa_scan_accel_module", 00:04:35.937 "iaa_scan_accel_module", 00:04:35.937 "vfu_virtio_create_scsi_endpoint", 00:04:35.937 "vfu_virtio_scsi_remove_target", 00:04:35.937 "vfu_virtio_scsi_add_target", 00:04:35.937 "vfu_virtio_create_blk_endpoint", 00:04:35.937 "vfu_virtio_delete_endpoint", 00:04:35.937 "keyring_file_remove_key", 00:04:35.937 "keyring_file_add_key", 00:04:35.937 "keyring_linux_set_options", 00:04:35.937 "iscsi_get_histogram", 00:04:35.937 "iscsi_enable_histogram", 00:04:35.937 "iscsi_set_options", 00:04:35.937 "iscsi_get_auth_groups", 00:04:35.937 "iscsi_auth_group_remove_secret", 00:04:35.937 "iscsi_auth_group_add_secret", 00:04:35.937 "iscsi_delete_auth_group", 00:04:35.937 "iscsi_create_auth_group", 00:04:35.937 "iscsi_set_discovery_auth", 00:04:35.937 "iscsi_get_options", 00:04:35.937 "iscsi_target_node_request_logout", 00:04:35.937 "iscsi_target_node_set_redirect", 00:04:35.937 "iscsi_target_node_set_auth", 00:04:35.937 "iscsi_target_node_add_lun", 00:04:35.937 "iscsi_get_stats", 00:04:35.937 "iscsi_get_connections", 00:04:35.937 "iscsi_portal_group_set_auth", 00:04:35.937 "iscsi_start_portal_group", 00:04:35.937 "iscsi_delete_portal_group", 00:04:35.937 "iscsi_create_portal_group", 00:04:35.937 "iscsi_get_portal_groups", 00:04:35.937 "iscsi_delete_target_node", 00:04:35.937 "iscsi_target_node_remove_pg_ig_maps", 00:04:35.937 "iscsi_target_node_add_pg_ig_maps", 00:04:35.937 "iscsi_create_target_node", 00:04:35.937 "iscsi_get_target_nodes", 00:04:35.937 "iscsi_delete_initiator_group", 00:04:35.937 "iscsi_initiator_group_remove_initiators", 00:04:35.937 "iscsi_initiator_group_add_initiators", 00:04:35.937 "iscsi_create_initiator_group", 00:04:35.937 "iscsi_get_initiator_groups", 00:04:35.937 "nvmf_set_crdt", 00:04:35.937 "nvmf_set_config", 00:04:35.937 "nvmf_set_max_subsystems", 00:04:35.937 "nvmf_stop_mdns_prr", 00:04:35.937 "nvmf_publish_mdns_prr", 00:04:35.937 "nvmf_subsystem_get_listeners", 00:04:35.937 "nvmf_subsystem_get_qpairs", 00:04:35.937 "nvmf_subsystem_get_controllers", 00:04:35.937 
"nvmf_get_stats", 00:04:35.937 "nvmf_get_transports", 00:04:35.937 "nvmf_create_transport", 00:04:35.937 "nvmf_get_targets", 00:04:35.937 "nvmf_delete_target", 00:04:35.937 "nvmf_create_target", 00:04:35.937 "nvmf_subsystem_allow_any_host", 00:04:35.937 "nvmf_subsystem_remove_host", 00:04:35.937 "nvmf_subsystem_add_host", 00:04:35.937 "nvmf_ns_remove_host", 00:04:35.937 "nvmf_ns_add_host", 00:04:35.937 "nvmf_subsystem_remove_ns", 00:04:35.937 "nvmf_subsystem_add_ns", 00:04:35.937 "nvmf_subsystem_listener_set_ana_state", 00:04:35.937 "nvmf_discovery_get_referrals", 00:04:35.937 "nvmf_discovery_remove_referral", 00:04:35.937 "nvmf_discovery_add_referral", 00:04:35.937 "nvmf_subsystem_remove_listener", 00:04:35.937 "nvmf_subsystem_add_listener", 00:04:35.937 "nvmf_delete_subsystem", 00:04:35.937 "nvmf_create_subsystem", 00:04:35.937 "nvmf_get_subsystems", 00:04:35.937 "env_dpdk_get_mem_stats", 00:04:35.937 "nbd_get_disks", 00:04:35.937 "nbd_stop_disk", 00:04:35.937 "nbd_start_disk", 00:04:35.937 "ublk_recover_disk", 00:04:35.937 "ublk_get_disks", 00:04:35.937 "ublk_stop_disk", 00:04:35.937 "ublk_start_disk", 00:04:35.937 "ublk_destroy_target", 00:04:35.937 "ublk_create_target", 00:04:35.937 "virtio_blk_create_transport", 00:04:35.937 "virtio_blk_get_transports", 00:04:35.937 "vhost_controller_set_coalescing", 00:04:35.937 "vhost_get_controllers", 00:04:35.937 "vhost_delete_controller", 00:04:35.937 "vhost_create_blk_controller", 00:04:35.937 "vhost_scsi_controller_remove_target", 00:04:35.937 "vhost_scsi_controller_add_target", 00:04:35.937 "vhost_start_scsi_controller", 00:04:35.937 "vhost_create_scsi_controller", 00:04:35.937 "thread_set_cpumask", 00:04:35.937 "framework_get_governor", 00:04:35.937 "framework_get_scheduler", 00:04:35.937 "framework_set_scheduler", 00:04:35.937 "framework_get_reactors", 00:04:35.937 "thread_get_io_channels", 00:04:35.937 "thread_get_pollers", 00:04:35.937 "thread_get_stats", 00:04:35.937 "framework_monitor_context_switch", 00:04:35.937 "spdk_kill_instance", 00:04:35.937 "log_enable_timestamps", 00:04:35.937 "log_get_flags", 00:04:35.937 "log_clear_flag", 00:04:35.937 "log_set_flag", 00:04:35.937 "log_get_level", 00:04:35.937 "log_set_level", 00:04:35.937 "log_get_print_level", 00:04:35.937 "log_set_print_level", 00:04:35.937 "framework_enable_cpumask_locks", 00:04:35.937 "framework_disable_cpumask_locks", 00:04:35.937 "framework_wait_init", 00:04:35.937 "framework_start_init", 00:04:35.937 "scsi_get_devices", 00:04:35.937 "bdev_get_histogram", 00:04:35.937 "bdev_enable_histogram", 00:04:35.937 "bdev_set_qos_limit", 00:04:35.937 "bdev_set_qd_sampling_period", 00:04:35.937 "bdev_get_bdevs", 00:04:35.937 "bdev_reset_iostat", 00:04:35.937 "bdev_get_iostat", 00:04:35.937 "bdev_examine", 00:04:35.937 "bdev_wait_for_examine", 00:04:35.937 "bdev_set_options", 00:04:35.937 "notify_get_notifications", 00:04:35.937 "notify_get_types", 00:04:35.937 "accel_get_stats", 00:04:35.937 "accel_set_options", 00:04:35.937 "accel_set_driver", 00:04:35.937 "accel_crypto_key_destroy", 00:04:35.937 "accel_crypto_keys_get", 00:04:35.937 "accel_crypto_key_create", 00:04:35.937 "accel_assign_opc", 00:04:35.937 "accel_get_module_info", 00:04:35.937 "accel_get_opc_assignments", 00:04:35.937 "vmd_rescan", 00:04:35.937 "vmd_remove_device", 00:04:35.937 "vmd_enable", 00:04:35.937 "sock_get_default_impl", 00:04:35.937 "sock_set_default_impl", 00:04:35.937 "sock_impl_set_options", 00:04:35.937 "sock_impl_get_options", 00:04:35.937 "iobuf_get_stats", 00:04:35.937 "iobuf_set_options", 
00:04:35.937 "keyring_get_keys", 00:04:35.937 "framework_get_pci_devices", 00:04:35.937 "framework_get_config", 00:04:35.937 "framework_get_subsystems", 00:04:35.937 "vfu_tgt_set_base_path", 00:04:35.937 "trace_get_info", 00:04:35.937 "trace_get_tpoint_group_mask", 00:04:35.937 "trace_disable_tpoint_group", 00:04:35.937 "trace_enable_tpoint_group", 00:04:35.937 "trace_clear_tpoint_mask", 00:04:35.937 "trace_set_tpoint_mask", 00:04:35.937 "spdk_get_version", 00:04:35.937 "rpc_get_methods" 00:04:35.937 ] 00:04:35.937 16:53:35 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:35.937 16:53:35 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:35.937 16:53:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:35.938 16:53:35 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:35.938 16:53:35 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1003334 00:04:35.938 16:53:35 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 1003334 ']' 00:04:35.938 16:53:35 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 1003334 00:04:35.938 16:53:35 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:04:35.938 16:53:35 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:35.938 16:53:35 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1003334 00:04:35.938 16:53:35 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:35.938 16:53:35 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:35.938 16:53:35 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1003334' 00:04:35.938 killing process with pid 1003334 00:04:35.938 16:53:35 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 1003334 00:04:35.938 16:53:35 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 1003334 00:04:36.196 00:04:36.196 real 0m1.275s 00:04:36.196 user 0m2.239s 00:04:36.196 sys 0m0.445s 00:04:36.196 16:53:35 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.196 16:53:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:36.196 ************************************ 00:04:36.196 END TEST spdkcli_tcp 00:04:36.196 ************************************ 00:04:36.454 16:53:35 -- common/autotest_common.sh@1142 -- # return 0 00:04:36.454 16:53:35 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:36.454 16:53:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:36.454 16:53:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.454 16:53:35 -- common/autotest_common.sh@10 -- # set +x 00:04:36.454 ************************************ 00:04:36.454 START TEST dpdk_mem_utility 00:04:36.454 ************************************ 00:04:36.454 16:53:35 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:36.454 * Looking for test storage... 
00:04:36.454 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:36.454 16:53:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:36.454 16:53:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1003540 00:04:36.454 16:53:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:36.454 16:53:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1003540 00:04:36.454 16:53:35 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 1003540 ']' 00:04:36.454 16:53:35 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.454 16:53:35 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:36.454 16:53:35 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.454 16:53:35 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:36.454 16:53:35 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:36.454 [2024-07-12 16:53:36.039604] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:04:36.454 [2024-07-12 16:53:36.039686] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1003540 ] 00:04:36.454 EAL: No free 2048 kB hugepages reported on node 1 00:04:36.454 [2024-07-12 16:53:36.096365] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.712 [2024-07-12 16:53:36.203024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.970 16:53:36 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:36.970 16:53:36 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:04:36.970 16:53:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:36.970 16:53:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:36.970 16:53:36 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:36.970 16:53:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:36.970 { 00:04:36.971 "filename": "/tmp/spdk_mem_dump.txt" 00:04:36.971 } 00:04:36.971 16:53:36 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:36.971 16:53:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:36.971 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:36.971 1 heaps totaling size 814.000000 MiB 00:04:36.971 size: 814.000000 MiB heap id: 0 00:04:36.971 end heaps---------- 00:04:36.971 8 mempools totaling size 598.116089 MiB 00:04:36.971 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:36.971 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:36.971 size: 84.521057 MiB name: bdev_io_1003540 00:04:36.971 size: 51.011292 MiB name: evtpool_1003540 00:04:36.971 
size: 50.003479 MiB name: msgpool_1003540 00:04:36.971 size: 21.763794 MiB name: PDU_Pool 00:04:36.971 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:36.971 size: 0.026123 MiB name: Session_Pool 00:04:36.971 end mempools------- 00:04:36.971 6 memzones totaling size 4.142822 MiB 00:04:36.971 size: 1.000366 MiB name: RG_ring_0_1003540 00:04:36.971 size: 1.000366 MiB name: RG_ring_1_1003540 00:04:36.971 size: 1.000366 MiB name: RG_ring_4_1003540 00:04:36.971 size: 1.000366 MiB name: RG_ring_5_1003540 00:04:36.971 size: 0.125366 MiB name: RG_ring_2_1003540 00:04:36.971 size: 0.015991 MiB name: RG_ring_3_1003540 00:04:36.971 end memzones------- 00:04:36.971 16:53:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:36.971 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:36.971 list of free elements. size: 12.519348 MiB 00:04:36.971 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:36.971 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:36.971 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:36.971 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:36.971 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:36.971 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:36.971 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:36.971 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:36.971 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:36.971 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:36.971 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:36.971 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:36.971 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:36.971 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:36.971 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:36.971 list of standard malloc elements. 
size: 199.218079 MiB 00:04:36.971 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:36.971 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:36.971 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:36.971 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:36.971 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:36.971 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:36.971 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:36.971 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:36.971 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:36.971 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:36.971 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:36.971 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:36.971 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:36.971 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:36.971 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:36.971 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:36.971 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:36.971 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:36.971 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:36.971 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:36.971 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:36.971 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:36.971 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:36.971 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:36.971 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:36.971 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:36.971 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:36.971 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:36.971 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:36.971 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:36.971 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:36.971 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:36.971 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:36.971 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:36.971 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:36.971 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:36.971 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:36.971 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:36.971 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:36.971 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:36.971 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:36.971 list of memzone associated elements. 
size: 602.262573 MiB 00:04:36.971 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:36.971 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:36.971 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:36.971 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:36.971 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:36.971 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1003540_0 00:04:36.971 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:36.971 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1003540_0 00:04:36.971 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:36.971 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1003540_0 00:04:36.971 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:36.971 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:36.971 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:36.971 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:36.971 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:36.971 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1003540 00:04:36.971 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:36.971 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1003540 00:04:36.971 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:36.971 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1003540 00:04:36.971 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:36.971 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:36.971 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:36.971 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:36.971 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:36.971 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:36.971 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:36.971 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:36.971 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:36.971 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1003540 00:04:36.971 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:36.971 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1003540 00:04:36.971 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:36.971 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1003540 00:04:36.971 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:36.971 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1003540 00:04:36.971 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:36.971 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1003540 00:04:36.971 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:36.971 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:36.971 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:36.971 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:36.971 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:36.971 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:36.971 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:36.971 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1003540 00:04:36.971 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:36.971 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:36.971 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:36.971 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:36.971 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:36.971 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1003540 00:04:36.971 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:36.971 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:36.971 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:36.971 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1003540 00:04:36.971 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:36.971 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1003540 00:04:36.971 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:36.971 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:36.971 16:53:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:36.971 16:53:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1003540 00:04:36.971 16:53:36 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 1003540 ']' 00:04:36.971 16:53:36 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 1003540 00:04:36.971 16:53:36 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:04:36.971 16:53:36 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:36.971 16:53:36 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1003540 00:04:36.971 16:53:36 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:36.971 16:53:36 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:36.971 16:53:36 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1003540' 00:04:36.971 killing process with pid 1003540 00:04:36.972 16:53:36 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 1003540 00:04:36.972 16:53:36 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 1003540 00:04:37.537 00:04:37.537 real 0m1.087s 00:04:37.537 user 0m1.056s 00:04:37.537 sys 0m0.401s 00:04:37.537 16:53:37 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.537 16:53:37 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:37.537 ************************************ 00:04:37.537 END TEST dpdk_mem_utility 00:04:37.537 ************************************ 00:04:37.537 16:53:37 -- common/autotest_common.sh@1142 -- # return 0 00:04:37.537 16:53:37 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:37.537 16:53:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.537 16:53:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.537 16:53:37 -- common/autotest_common.sh@10 -- # set +x 00:04:37.537 ************************************ 00:04:37.537 START TEST event 00:04:37.537 ************************************ 00:04:37.537 16:53:37 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:37.537 * Looking for test storage... 
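The dpdk_mem_utility pass that just finished asks the running target to dump its DPDK memory state over RPC and then post-processes the dump with the helper script: a plain run prints the heap, mempool and memzone totals, and the -m 0 form used in the trace expands heap 0 into the per-element listings shown above. The same flow against any live spdk_tgt, with repository paths shortened:

    scripts/rpc.py env_dpdk_get_mem_stats    # writes the dump and reports its path (/tmp/spdk_mem_dump.txt here)
    scripts/dpdk_mem_info.py                 # summary: heaps, mempools, memzones
    scripts/dpdk_mem_info.py -m 0            # per-element breakdown for heap id 0, as in the trace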
00:04:37.537 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:37.537 16:53:37 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:37.537 16:53:37 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:37.538 16:53:37 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:37.538 16:53:37 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:04:37.538 16:53:37 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.538 16:53:37 event -- common/autotest_common.sh@10 -- # set +x 00:04:37.538 ************************************ 00:04:37.538 START TEST event_perf 00:04:37.538 ************************************ 00:04:37.538 16:53:37 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:37.538 Running I/O for 1 seconds...[2024-07-12 16:53:37.156415] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:04:37.538 [2024-07-12 16:53:37.156483] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1003728 ] 00:04:37.538 EAL: No free 2048 kB hugepages reported on node 1 00:04:37.538 [2024-07-12 16:53:37.214098] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:37.796 [2024-07-12 16:53:37.322702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:37.796 [2024-07-12 16:53:37.322793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:37.796 [2024-07-12 16:53:37.322796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.796 [2024-07-12 16:53:37.322768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:39.169 Running I/O for 1 seconds... 00:04:39.169 lcore 0: 234553 00:04:39.169 lcore 1: 234553 00:04:39.169 lcore 2: 234553 00:04:39.169 lcore 3: 234552 00:04:39.169 done. 00:04:39.169 00:04:39.169 real 0m1.291s 00:04:39.169 user 0m4.217s 00:04:39.169 sys 0m0.069s 00:04:39.169 16:53:38 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:39.169 16:53:38 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:39.169 ************************************ 00:04:39.169 END TEST event_perf 00:04:39.169 ************************************ 00:04:39.169 16:53:38 event -- common/autotest_common.sh@1142 -- # return 0 00:04:39.169 16:53:38 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:39.169 16:53:38 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:39.169 16:53:38 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.169 16:53:38 event -- common/autotest_common.sh@10 -- # set +x 00:04:39.169 ************************************ 00:04:39.169 START TEST event_reactor 00:04:39.169 ************************************ 00:04:39.169 16:53:38 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:39.169 [2024-07-12 16:53:38.499253] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
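event_perf above is a self-contained benchmark binary rather than an RPC-driven test: -m gives the reactor core mask and -t the run time in seconds, and on exit it prints how many events each lcore processed (the four matching lcore counts above). The traced invocation, with the build-tree path abbreviated:

    test/event/event_perf/event_perf -m 0xF -t 1   # four reactors, one-second run, per-lcore event counts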
00:04:39.169 [2024-07-12 16:53:38.499318] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1003890 ] 00:04:39.169 EAL: No free 2048 kB hugepages reported on node 1 00:04:39.169 [2024-07-12 16:53:38.558887] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.169 [2024-07-12 16:53:38.662599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.103 test_start 00:04:40.103 oneshot 00:04:40.103 tick 100 00:04:40.103 tick 100 00:04:40.103 tick 250 00:04:40.103 tick 100 00:04:40.103 tick 100 00:04:40.103 tick 250 00:04:40.103 tick 100 00:04:40.103 tick 500 00:04:40.103 tick 100 00:04:40.103 tick 100 00:04:40.103 tick 250 00:04:40.103 tick 100 00:04:40.103 tick 100 00:04:40.103 test_end 00:04:40.103 00:04:40.103 real 0m1.288s 00:04:40.103 user 0m1.210s 00:04:40.103 sys 0m0.074s 00:04:40.103 16:53:39 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:40.103 16:53:39 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:40.103 ************************************ 00:04:40.103 END TEST event_reactor 00:04:40.103 ************************************ 00:04:40.103 16:53:39 event -- common/autotest_common.sh@1142 -- # return 0 00:04:40.363 16:53:39 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:40.363 16:53:39 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:40.363 16:53:39 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.363 16:53:39 event -- common/autotest_common.sh@10 -- # set +x 00:04:40.363 ************************************ 00:04:40.363 START TEST event_reactor_perf 00:04:40.363 ************************************ 00:04:40.363 16:53:39 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:40.363 [2024-07-12 16:53:39.832133] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
00:04:40.363 [2024-07-12 16:53:39.832191] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1004046 ] 00:04:40.363 EAL: No free 2048 kB hugepages reported on node 1 00:04:40.363 [2024-07-12 16:53:39.891272] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.363 [2024-07-12 16:53:39.992586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.802 test_start 00:04:41.802 test_end 00:04:41.802 Performance: 447147 events per second 00:04:41.802 00:04:41.802 real 0m1.284s 00:04:41.802 user 0m1.206s 00:04:41.802 sys 0m0.073s 00:04:41.802 16:53:41 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:41.802 16:53:41 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:41.802 ************************************ 00:04:41.802 END TEST event_reactor_perf 00:04:41.802 ************************************ 00:04:41.802 16:53:41 event -- common/autotest_common.sh@1142 -- # return 0 00:04:41.802 16:53:41 event -- event/event.sh@49 -- # uname -s 00:04:41.802 16:53:41 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:41.802 16:53:41 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:41.802 16:53:41 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:41.802 16:53:41 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.802 16:53:41 event -- common/autotest_common.sh@10 -- # set +x 00:04:41.802 ************************************ 00:04:41.802 START TEST event_scheduler 00:04:41.802 ************************************ 00:04:41.802 16:53:41 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:41.802 * Looking for test storage... 00:04:41.802 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:41.802 16:53:41 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:41.802 16:53:41 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1004275 00:04:41.802 16:53:41 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:41.802 16:53:41 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:41.802 16:53:41 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1004275 00:04:41.802 16:53:41 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 1004275 ']' 00:04:41.802 16:53:41 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.802 16:53:41 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:41.802 16:53:41 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
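The two reactor tests that just completed use the same pattern with smaller binaries: reactor -t 1 drives a single reactor through the oneshot and tick events echoed above, while reactor_perf -t 1 measures raw event throughput and reports the events-per-second figure. Abbreviated invocations:

    test/event/reactor/reactor -t 1             # oneshot + tick pattern, one-second run
    test/event/reactor_perf/reactor_perf -t 1   # prints 'Performance: N events per second'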
00:04:41.802 16:53:41 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:41.802 16:53:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:41.802 [2024-07-12 16:53:41.259842] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:04:41.802 [2024-07-12 16:53:41.259923] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1004275 ] 00:04:41.802 EAL: No free 2048 kB hugepages reported on node 1 00:04:41.802 [2024-07-12 16:53:41.319393] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:41.802 [2024-07-12 16:53:41.429048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.802 [2024-07-12 16:53:41.429107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:41.802 [2024-07-12 16:53:41.429173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:41.802 [2024-07-12 16:53:41.429176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:41.802 16:53:41 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:41.802 16:53:41 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:04:41.802 16:53:41 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:41.802 16:53:41 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:41.802 16:53:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:41.802 [2024-07-12 16:53:41.473954] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:41.803 [2024-07-12 16:53:41.473984] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:04:41.803 [2024-07-12 16:53:41.474001] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:41.803 [2024-07-12 16:53:41.474012] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:41.803 [2024-07-12 16:53:41.474037] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:41.803 16:53:41 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:41.803 16:53:41 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:41.803 16:53:41 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:41.803 16:53:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:42.061 [2024-07-12 16:53:41.572410] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
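The scheduler app above is started with --wait-for-rpc, so the script can switch it to the dynamic scheduler before the framework finishes initializing; the dpdk governor messages just record that the 0xF core mask splits an SMT sibling set, after which the dynamic scheduler proceeds with the load, core and busy limits it printed. The same RPC sequence against any target started with --wait-for-rpc (rpc.py path shortened):

    scripts/rpc.py framework_set_scheduler dynamic   # must run before framework_start_init
    scripts/rpc.py framework_start_init
    scripts/rpc.py framework_get_scheduler           # optional: confirm the active scheduler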
00:04:42.061 16:53:41 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:42.061 16:53:41 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:42.061 16:53:41 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:42.061 16:53:41 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.061 16:53:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:42.061 ************************************ 00:04:42.061 START TEST scheduler_create_thread 00:04:42.061 ************************************ 00:04:42.061 16:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:04:42.061 16:53:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:42.061 16:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:42.061 16:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.061 2 00:04:42.061 16:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:42.061 16:53:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:42.061 16:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:42.061 16:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.061 3 00:04:42.062 16:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:42.062 16:53:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:42.062 16:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:42.062 16:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.062 4 00:04:42.062 16:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:42.062 16:53:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:42.062 16:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:42.062 16:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.062 5 00:04:42.062 16:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:42.062 16:53:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:42.062 16:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:42.062 16:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.062 6 00:04:42.062 16:53:41 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:42.062 16:53:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:42.062 16:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:42.062 16:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.062 7 00:04:42.062 16:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:42.062 16:53:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:42.062 16:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:42.062 16:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.062 8 00:04:42.062 16:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:42.062 16:53:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:42.062 16:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:42.062 16:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.062 9 00:04:42.062 16:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:42.062 16:53:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:42.062 16:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:42.062 16:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.062 10 00:04:42.062 16:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:42.062 16:53:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:42.062 16:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:42.062 16:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.062 16:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:42.062 16:53:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:42.062 16:53:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:42.062 16:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:42.062 16:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.062 16:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:42.062 16:53:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:42.062 16:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:42.062 16:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.062 16:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:42.062 16:53:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:42.062 16:53:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:42.062 16:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:42.062 16:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.628 16:53:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:42.628 00:04:42.628 real 0m0.589s 00:04:42.628 user 0m0.006s 00:04:42.628 sys 0m0.007s 00:04:42.628 16:53:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.628 16:53:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.628 ************************************ 00:04:42.628 END TEST scheduler_create_thread 00:04:42.628 ************************************ 00:04:42.628 16:53:42 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:04:42.628 16:53:42 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:42.628 16:53:42 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1004275 00:04:42.628 16:53:42 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 1004275 ']' 00:04:42.628 16:53:42 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 1004275 00:04:42.628 16:53:42 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:04:42.628 16:53:42 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:42.628 16:53:42 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1004275 00:04:42.628 16:53:42 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:04:42.628 16:53:42 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:04:42.628 16:53:42 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1004275' 00:04:42.628 killing process with pid 1004275 00:04:42.628 16:53:42 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 1004275 00:04:42.628 16:53:42 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 1004275 00:04:43.193 [2024-07-12 16:53:42.664514] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
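scheduler_create_thread, which just ended, drives the test app through rpc.py's plugin mechanism: --plugin scheduler_plugin loads the test-only RPCs that create pinned or unpinned threads with a cpumask (-m) and an active percentage (-a), change a thread's activity, and delete it. A condensed sketch of the traced calls, assuming the plugin module is importable the way the harness arranges it; the thread id is whatever the create call returns:

    rpc() { scripts/rpc.py --plugin scheduler_plugin "$@"; }      # mirrors the harness's rpc_cmd wrapper
    rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100    # busy thread pinned to core 0
    rpc scheduler_thread_create -n idle_pinned   -m 0x1 -a 0      # idle thread pinned to core 0
    tid=$(rpc scheduler_thread_create -n half_active -a 0)        # unpinned thread; returns its id
    rpc scheduler_thread_set_active "$tid" 50                     # raise it to 50% active
    rpc scheduler_thread_delete "$tid"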
00:04:43.451 00:04:43.451 real 0m1.748s 00:04:43.451 user 0m2.154s 00:04:43.451 sys 0m0.329s 00:04:43.451 16:53:42 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.451 16:53:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:43.451 ************************************ 00:04:43.451 END TEST event_scheduler 00:04:43.451 ************************************ 00:04:43.451 16:53:42 event -- common/autotest_common.sh@1142 -- # return 0 00:04:43.451 16:53:42 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:43.451 16:53:42 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:43.451 16:53:42 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:43.451 16:53:42 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.451 16:53:42 event -- common/autotest_common.sh@10 -- # set +x 00:04:43.451 ************************************ 00:04:43.451 START TEST app_repeat 00:04:43.451 ************************************ 00:04:43.451 16:53:42 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:04:43.451 16:53:42 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:43.451 16:53:42 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:43.451 16:53:42 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:43.451 16:53:42 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:43.451 16:53:42 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:43.451 16:53:42 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:43.451 16:53:42 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:43.451 16:53:42 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1004542 00:04:43.451 16:53:42 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:43.451 16:53:42 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:43.451 16:53:42 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1004542' 00:04:43.451 Process app_repeat pid: 1004542 00:04:43.451 16:53:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:43.451 16:53:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:43.451 spdk_app_start Round 0 00:04:43.451 16:53:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1004542 /var/tmp/spdk-nbd.sock 00:04:43.451 16:53:42 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1004542 ']' 00:04:43.451 16:53:42 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:43.451 16:53:42 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:43.451 16:53:42 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:43.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:43.451 16:53:42 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:43.451 16:53:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:43.451 [2024-07-12 16:53:42.991931] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
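app_repeat above is launched with its own RPC socket so the round-based setup can be driven with rpc.py -s /var/tmp/spdk-nbd.sock: -r names that socket, -m the core mask, and -t matches repeat_times=4 in the script. Once the listener is up, each round creates the Malloc0/Malloc1 bdevs and verifies them through the /dev/nbd0 and /dev/nbd1 devices set up above. A shortened sketch of the launch and the first RPC call of a round:

    test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
    repeat_pid=$!
    # every rpc.py call for this app must name the same socket:
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # 64 MB malloc bdev, 4096-byte blocks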
00:04:43.452 [2024-07-12 16:53:42.991993] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1004542 ] 00:04:43.452 EAL: No free 2048 kB hugepages reported on node 1 00:04:43.452 [2024-07-12 16:53:43.050121] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:43.709 [2024-07-12 16:53:43.152264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:43.710 [2024-07-12 16:53:43.152269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.710 16:53:43 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:43.710 16:53:43 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:43.710 16:53:43 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:43.968 Malloc0 00:04:43.968 16:53:43 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:44.226 Malloc1 00:04:44.226 16:53:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:44.226 16:53:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.226 16:53:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:44.226 16:53:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:44.226 16:53:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.226 16:53:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:44.226 16:53:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:44.226 16:53:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.226 16:53:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:44.226 16:53:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:44.226 16:53:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.226 16:53:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:44.226 16:53:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:44.226 16:53:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:44.226 16:53:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:44.226 16:53:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:44.484 /dev/nbd0 00:04:44.484 16:53:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:44.484 16:53:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:44.484 16:53:44 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:44.484 16:53:44 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:44.484 16:53:44 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:44.484 16:53:44 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:44.484 16:53:44 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:44.484 16:53:44 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:44.484 16:53:44 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:44.484 16:53:44 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:44.484 16:53:44 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:44.484 1+0 records in 00:04:44.484 1+0 records out 00:04:44.484 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000214755 s, 19.1 MB/s 00:04:44.484 16:53:44 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:44.484 16:53:44 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:44.484 16:53:44 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:44.484 16:53:44 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:44.484 16:53:44 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:44.484 16:53:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:44.484 16:53:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:44.484 16:53:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:44.742 /dev/nbd1 00:04:44.742 16:53:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:44.742 16:53:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:44.742 16:53:44 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:44.742 16:53:44 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:44.742 16:53:44 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:44.742 16:53:44 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:44.742 16:53:44 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:44.742 16:53:44 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:44.742 16:53:44 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:44.742 16:53:44 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:44.742 16:53:44 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:44.742 1+0 records in 00:04:44.742 1+0 records out 00:04:44.742 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000185904 s, 22.0 MB/s 00:04:44.742 16:53:44 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:44.742 16:53:44 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:44.742 16:53:44 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:44.742 16:53:44 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:44.742 16:53:44 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:44.742 16:53:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:44.742 16:53:44 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:44.742 16:53:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:44.742 16:53:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.742 16:53:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:45.000 16:53:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:45.001 { 00:04:45.001 "nbd_device": "/dev/nbd0", 00:04:45.001 "bdev_name": "Malloc0" 00:04:45.001 }, 00:04:45.001 { 00:04:45.001 "nbd_device": "/dev/nbd1", 00:04:45.001 "bdev_name": "Malloc1" 00:04:45.001 } 00:04:45.001 ]' 00:04:45.001 16:53:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:45.001 { 00:04:45.001 "nbd_device": "/dev/nbd0", 00:04:45.001 "bdev_name": "Malloc0" 00:04:45.001 }, 00:04:45.001 { 00:04:45.001 "nbd_device": "/dev/nbd1", 00:04:45.001 "bdev_name": "Malloc1" 00:04:45.001 } 00:04:45.001 ]' 00:04:45.001 16:53:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:45.001 16:53:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:45.001 /dev/nbd1' 00:04:45.001 16:53:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:45.001 /dev/nbd1' 00:04:45.001 16:53:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:45.001 16:53:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:45.001 16:53:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:45.001 16:53:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:45.001 16:53:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:45.001 16:53:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:45.001 16:53:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.001 16:53:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:45.001 16:53:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:45.001 16:53:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:45.001 16:53:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:45.001 16:53:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:45.001 256+0 records in 00:04:45.001 256+0 records out 00:04:45.001 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0037061 s, 283 MB/s 00:04:45.001 16:53:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:45.001 16:53:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:45.001 256+0 records in 00:04:45.001 256+0 records out 00:04:45.001 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0208504 s, 50.3 MB/s 00:04:45.001 16:53:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:45.001 16:53:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:45.001 256+0 records in 00:04:45.001 256+0 records out 00:04:45.001 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0222673 s, 47.1 MB/s 00:04:45.001 16:53:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:45.001 16:53:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.001 16:53:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:45.001 16:53:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:45.001 16:53:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:45.001 16:53:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:45.001 16:53:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:45.001 16:53:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:45.001 16:53:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:45.001 16:53:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:45.001 16:53:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:45.259 16:53:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:45.259 16:53:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:45.259 16:53:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.259 16:53:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.259 16:53:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:45.259 16:53:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:45.259 16:53:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:45.259 16:53:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:45.517 16:53:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:45.517 16:53:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:45.517 16:53:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:45.517 16:53:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:45.517 16:53:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:45.517 16:53:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:45.517 16:53:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:45.517 16:53:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:45.517 16:53:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:45.517 16:53:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:45.775 16:53:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:45.775 16:53:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:45.775 16:53:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:45.775 16:53:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:45.775 16:53:45 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:45.775 16:53:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:45.775 16:53:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:45.775 16:53:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:45.775 16:53:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:45.775 16:53:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.775 16:53:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:46.033 16:53:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:46.033 16:53:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:46.033 16:53:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:46.033 16:53:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:46.033 16:53:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:46.033 16:53:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:46.033 16:53:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:46.033 16:53:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:46.033 16:53:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:46.033 16:53:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:46.033 16:53:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:46.033 16:53:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:46.033 16:53:45 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:46.291 16:53:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:46.549 [2024-07-12 16:53:46.079671] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:46.549 [2024-07-12 16:53:46.178012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.549 [2024-07-12 16:53:46.178012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:46.549 [2024-07-12 16:53:46.235856] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:46.549 [2024-07-12 16:53:46.235933] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:49.828 16:53:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:49.828 16:53:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:49.828 spdk_app_start Round 1 00:04:49.828 16:53:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1004542 /var/tmp/spdk-nbd.sock 00:04:49.828 16:53:48 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1004542 ']' 00:04:49.828 16:53:48 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:49.828 16:53:48 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:49.829 16:53:48 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:49.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
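The Round 1 output that follows repeats the same malloc-bdev/NBD round-trip already traced for Round 0. A condensed sketch of one such cycle, using only the RPCs and shell commands visible in the trace (absolute workspace paths shortened; bdev arguments and dd/cmp sizes as logged):

    rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"   # rpc.py path shortened; socket as in the log

    # Create two malloc bdevs (arguments 64 4096 as traced) and expose them as NBD devices.
    $rpc bdev_malloc_create 64 4096        # -> Malloc0
    $rpc bdev_malloc_create 64 4096        # -> Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0
    $rpc nbd_start_disk Malloc1 /dev/nbd1

    # Write 1 MiB of random data to each device, then verify it reads back identically.
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if=nbdrandtest of="$dev" bs=4096 count=256 oflag=direct
        cmp -b -n 1M nbdrandtest "$dev"
    done
    rm nbdrandtest

    # Detach both NBD devices again.
    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1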
00:04:49.829 16:53:48 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:49.829 16:53:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:49.829 16:53:49 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:49.829 16:53:49 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:49.829 16:53:49 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:49.829 Malloc0 00:04:49.829 16:53:49 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:50.087 Malloc1 00:04:50.087 16:53:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:50.087 16:53:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.087 16:53:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:50.087 16:53:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:50.087 16:53:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.087 16:53:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:50.087 16:53:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:50.087 16:53:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.087 16:53:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:50.087 16:53:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:50.087 16:53:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.087 16:53:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:50.087 16:53:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:50.087 16:53:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:50.087 16:53:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:50.087 16:53:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:50.345 /dev/nbd0 00:04:50.345 16:53:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:50.345 16:53:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:50.345 16:53:49 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:50.345 16:53:49 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:50.345 16:53:49 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:50.345 16:53:49 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:50.345 16:53:49 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:50.345 16:53:49 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:50.345 16:53:49 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:50.345 16:53:49 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:50.345 16:53:49 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:50.345 1+0 records in 00:04:50.345 1+0 records out 00:04:50.345 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000185206 s, 22.1 MB/s 00:04:50.345 16:53:49 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:50.345 16:53:49 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:50.345 16:53:49 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:50.345 16:53:49 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:50.345 16:53:49 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:50.345 16:53:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:50.345 16:53:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:50.345 16:53:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:50.603 /dev/nbd1 00:04:50.603 16:53:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:50.603 16:53:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:50.603 16:53:50 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:50.603 16:53:50 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:50.603 16:53:50 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:50.603 16:53:50 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:50.603 16:53:50 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:50.603 16:53:50 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:50.603 16:53:50 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:50.603 16:53:50 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:50.603 16:53:50 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:50.603 1+0 records in 00:04:50.603 1+0 records out 00:04:50.603 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230306 s, 17.8 MB/s 00:04:50.603 16:53:50 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:50.603 16:53:50 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:50.603 16:53:50 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:50.603 16:53:50 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:50.603 16:53:50 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:50.603 16:53:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:50.603 16:53:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:50.603 16:53:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:50.603 16:53:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.603 16:53:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:50.861 16:53:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:50.861 { 00:04:50.861 "nbd_device": "/dev/nbd0", 00:04:50.861 "bdev_name": "Malloc0" 00:04:50.861 }, 00:04:50.861 { 00:04:50.861 "nbd_device": "/dev/nbd1", 00:04:50.861 "bdev_name": "Malloc1" 00:04:50.861 } 00:04:50.861 ]' 00:04:50.861 16:53:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:50.861 { 00:04:50.861 "nbd_device": "/dev/nbd0", 00:04:50.861 "bdev_name": "Malloc0" 00:04:50.861 }, 00:04:50.861 { 00:04:50.861 "nbd_device": "/dev/nbd1", 00:04:50.861 "bdev_name": "Malloc1" 00:04:50.861 } 00:04:50.861 ]' 00:04:50.861 16:53:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:50.861 16:53:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:50.861 /dev/nbd1' 00:04:50.861 16:53:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:50.861 /dev/nbd1' 00:04:50.861 16:53:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:50.861 16:53:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:50.861 16:53:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:50.861 16:53:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:50.861 16:53:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:50.861 16:53:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:50.861 16:53:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.861 16:53:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:50.861 16:53:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:50.861 16:53:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:50.861 16:53:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:50.861 16:53:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:50.861 256+0 records in 00:04:50.861 256+0 records out 00:04:50.861 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00503677 s, 208 MB/s 00:04:50.861 16:53:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:50.861 16:53:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:50.861 256+0 records in 00:04:50.861 256+0 records out 00:04:50.861 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204393 s, 51.3 MB/s 00:04:50.861 16:53:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:50.861 16:53:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:50.861 256+0 records in 00:04:50.861 256+0 records out 00:04:50.861 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0256471 s, 40.9 MB/s 00:04:50.861 16:53:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:50.861 16:53:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.861 16:53:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:50.861 16:53:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:50.861 16:53:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:50.861 16:53:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:50.861 16:53:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:50.861 16:53:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:50.861 16:53:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:50.861 16:53:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:50.861 16:53:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:50.861 16:53:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:50.861 16:53:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:50.861 16:53:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.861 16:53:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.861 16:53:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:50.861 16:53:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:50.861 16:53:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:50.861 16:53:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:51.119 16:53:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:51.119 16:53:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:51.119 16:53:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:51.119 16:53:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:51.119 16:53:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:51.119 16:53:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:51.119 16:53:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:51.119 16:53:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:51.119 16:53:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:51.119 16:53:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:51.377 16:53:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:51.377 16:53:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:51.377 16:53:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:51.377 16:53:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:51.377 16:53:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:51.377 16:53:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:51.377 16:53:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:51.377 16:53:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:51.377 16:53:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:51.377 16:53:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.377 16:53:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:51.634 16:53:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:51.634 16:53:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:51.634 16:53:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:51.892 16:53:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:51.892 16:53:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:51.892 16:53:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:51.892 16:53:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:51.892 16:53:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:51.892 16:53:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:51.892 16:53:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:51.892 16:53:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:51.892 16:53:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:51.892 16:53:51 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:52.150 16:53:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:52.407 [2024-07-12 16:53:51.856060] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:52.407 [2024-07-12 16:53:51.959091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:52.407 [2024-07-12 16:53:51.959094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.407 [2024-07-12 16:53:52.017596] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:52.407 [2024-07-12 16:53:52.017671] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:54.928 16:53:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:54.928 16:53:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:54.928 spdk_app_start Round 2 00:04:54.928 16:53:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1004542 /var/tmp/spdk-nbd.sock 00:04:54.928 16:53:54 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1004542 ']' 00:04:54.928 16:53:54 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:54.928 16:53:54 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:54.928 16:53:54 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:54.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
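The Round 2 trace that follows is driven by the same loop in event.sh that produced Rounds 0 and 1. A sketch of that loop as it can be reconstructed from the traced event.sh line numbers (paths shortened, helper bodies elided; per the notices in the log, the app catches each SIGTERM, stops the current iteration and re-initializes for the next round):

    source test/common/autotest_common.sh      # provides waitforlisten/killprocess, as traced
    rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    # Launch app_repeat with the flags shown in the trace, then drive three rounds.
    test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
    repeat_pid=$!
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
        # ... bdev_malloc_create x2, NBD attach, write/verify, NBD detach (see the Round traces) ...
        $rpc spdk_kill_instance SIGTERM        # app stops this iteration and re-initializes
        sleep 3
    done
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
    killprocess "$repeat_pid"                  # final teardown, as at the end of the app_repeat trace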
00:04:54.928 16:53:54 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:54.928 16:53:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:55.184 16:53:54 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:55.184 16:53:54 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:55.184 16:53:54 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:55.441 Malloc0 00:04:55.441 16:53:55 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:55.699 Malloc1 00:04:55.699 16:53:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:55.699 16:53:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.699 16:53:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:55.699 16:53:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:55.699 16:53:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.699 16:53:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:55.699 16:53:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:55.699 16:53:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.699 16:53:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:55.699 16:53:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:55.699 16:53:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.699 16:53:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:55.699 16:53:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:55.699 16:53:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:55.699 16:53:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:55.699 16:53:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:55.956 /dev/nbd0 00:04:55.956 16:53:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:55.956 16:53:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:55.956 16:53:55 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:55.956 16:53:55 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:55.956 16:53:55 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:55.956 16:53:55 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:55.956 16:53:55 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:55.956 16:53:55 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:55.956 16:53:55 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:55.956 16:53:55 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:55.957 16:53:55 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:55.957 1+0 records in 00:04:55.957 1+0 records out 00:04:55.957 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000185067 s, 22.1 MB/s 00:04:55.957 16:53:55 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:55.957 16:53:55 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:55.957 16:53:55 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:55.957 16:53:55 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:55.957 16:53:55 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:55.957 16:53:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:55.957 16:53:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:55.957 16:53:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:56.215 /dev/nbd1 00:04:56.473 16:53:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:56.473 16:53:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:56.473 16:53:55 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:56.473 16:53:55 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:56.473 16:53:55 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:56.473 16:53:55 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:56.473 16:53:55 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:56.473 16:53:55 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:56.473 16:53:55 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:56.473 16:53:55 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:56.473 16:53:55 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:56.473 1+0 records in 00:04:56.473 1+0 records out 00:04:56.473 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000174424 s, 23.5 MB/s 00:04:56.473 16:53:55 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:56.473 16:53:55 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:56.473 16:53:55 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:56.473 16:53:55 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:56.473 16:53:55 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:56.473 16:53:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:56.473 16:53:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:56.473 16:53:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:56.473 16:53:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.473 16:53:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:56.731 16:53:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:56.731 { 00:04:56.731 "nbd_device": "/dev/nbd0", 00:04:56.731 "bdev_name": "Malloc0" 00:04:56.731 }, 00:04:56.731 { 00:04:56.731 "nbd_device": "/dev/nbd1", 00:04:56.731 "bdev_name": "Malloc1" 00:04:56.731 } 00:04:56.731 ]' 00:04:56.731 16:53:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:56.731 { 00:04:56.731 "nbd_device": "/dev/nbd0", 00:04:56.731 "bdev_name": "Malloc0" 00:04:56.731 }, 00:04:56.731 { 00:04:56.731 "nbd_device": "/dev/nbd1", 00:04:56.731 "bdev_name": "Malloc1" 00:04:56.731 } 00:04:56.731 ]' 00:04:56.731 16:53:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:56.731 16:53:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:56.731 /dev/nbd1' 00:04:56.731 16:53:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:56.731 /dev/nbd1' 00:04:56.731 16:53:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:56.731 16:53:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:56.731 16:53:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:56.731 16:53:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:56.731 16:53:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:56.731 16:53:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:56.732 16:53:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.732 16:53:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:56.732 16:53:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:56.732 16:53:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:56.732 16:53:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:56.732 16:53:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:56.732 256+0 records in 00:04:56.732 256+0 records out 00:04:56.732 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00498773 s, 210 MB/s 00:04:56.732 16:53:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:56.732 16:53:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:56.732 256+0 records in 00:04:56.732 256+0 records out 00:04:56.732 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0228006 s, 46.0 MB/s 00:04:56.732 16:53:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:56.732 16:53:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:56.732 256+0 records in 00:04:56.732 256+0 records out 00:04:56.732 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023441 s, 44.7 MB/s 00:04:56.732 16:53:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:56.732 16:53:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.732 16:53:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:56.732 16:53:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:56.732 16:53:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:56.732 16:53:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:56.732 16:53:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:56.732 16:53:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:56.732 16:53:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:56.732 16:53:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:56.732 16:53:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:56.732 16:53:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:56.732 16:53:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:56.732 16:53:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.732 16:53:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.732 16:53:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:56.732 16:53:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:56.732 16:53:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:56.732 16:53:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:56.989 16:53:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:56.989 16:53:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:56.989 16:53:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:56.989 16:53:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:56.989 16:53:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:56.989 16:53:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:56.989 16:53:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:56.989 16:53:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:56.989 16:53:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:56.989 16:53:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:57.247 16:53:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:57.247 16:53:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:57.247 16:53:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:57.247 16:53:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:57.247 16:53:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:57.247 16:53:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:57.247 16:53:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:57.247 16:53:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:57.247 16:53:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:57.247 16:53:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.247 16:53:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:57.504 16:53:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:57.504 16:53:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:57.504 16:53:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:57.504 16:53:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:57.504 16:53:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:57.504 16:53:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:57.504 16:53:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:57.504 16:53:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:57.504 16:53:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:57.504 16:53:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:57.504 16:53:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:57.504 16:53:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:57.504 16:53:57 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:57.762 16:53:57 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:58.019 [2024-07-12 16:53:57.666306] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:58.278 [2024-07-12 16:53:57.769720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.278 [2024-07-12 16:53:57.769720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.278 [2024-07-12 16:53:57.828209] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:58.278 [2024-07-12 16:53:57.828283] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:00.805 16:54:00 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1004542 /var/tmp/spdk-nbd.sock 00:05:00.805 16:54:00 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1004542 ']' 00:05:00.805 16:54:00 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:00.805 16:54:00 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:00.805 16:54:00 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:00.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
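The count checks traced just above come from the nbd_get_count helper in nbd_common.sh: after both nbd_stop_disk calls the RPC disk list must be empty. A sketch of that check, reconstructed from the traced lines (the || true guard matches the extra 'true' step in the trace, since grep -c exits non-zero when nothing matches):

    rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    nbd_disks_json=$($rpc nbd_get_disks)                                  # '[]' once both disks are stopped
    nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')  # empty for an empty list
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)            # 0 when no NBD devices remain
    if [ "$count" -ne 0 ]; then
        exit 1                                                            # any leftover device fails the test
    fi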
00:05:00.805 16:54:00 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:00.805 16:54:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:01.063 16:54:00 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:01.063 16:54:00 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:01.063 16:54:00 event.app_repeat -- event/event.sh@39 -- # killprocess 1004542 00:05:01.063 16:54:00 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 1004542 ']' 00:05:01.063 16:54:00 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 1004542 00:05:01.063 16:54:00 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:01.063 16:54:00 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:01.063 16:54:00 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1004542 00:05:01.063 16:54:00 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:01.063 16:54:00 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:01.063 16:54:00 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1004542' 00:05:01.063 killing process with pid 1004542 00:05:01.063 16:54:00 event.app_repeat -- common/autotest_common.sh@967 -- # kill 1004542 00:05:01.063 16:54:00 event.app_repeat -- common/autotest_common.sh@972 -- # wait 1004542 00:05:01.322 spdk_app_start is called in Round 0. 00:05:01.322 Shutdown signal received, stop current app iteration 00:05:01.322 Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 reinitialization... 00:05:01.322 spdk_app_start is called in Round 1. 00:05:01.322 Shutdown signal received, stop current app iteration 00:05:01.322 Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 reinitialization... 00:05:01.322 spdk_app_start is called in Round 2. 00:05:01.322 Shutdown signal received, stop current app iteration 00:05:01.322 Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 reinitialization... 00:05:01.322 spdk_app_start is called in Round 3. 
00:05:01.322 Shutdown signal received, stop current app iteration 00:05:01.322 16:54:00 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:01.322 16:54:00 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:01.322 00:05:01.322 real 0m17.959s 00:05:01.322 user 0m39.022s 00:05:01.322 sys 0m3.189s 00:05:01.322 16:54:00 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.322 16:54:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:01.322 ************************************ 00:05:01.322 END TEST app_repeat 00:05:01.322 ************************************ 00:05:01.322 16:54:00 event -- common/autotest_common.sh@1142 -- # return 0 00:05:01.322 16:54:00 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:01.322 16:54:00 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:01.322 16:54:00 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:01.322 16:54:00 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.322 16:54:00 event -- common/autotest_common.sh@10 -- # set +x 00:05:01.322 ************************************ 00:05:01.322 START TEST cpu_locks 00:05:01.322 ************************************ 00:05:01.322 16:54:00 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:01.580 * Looking for test storage... 00:05:01.581 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:01.581 16:54:01 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:01.581 16:54:01 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:01.581 16:54:01 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:01.581 16:54:01 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:01.581 16:54:01 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:01.581 16:54:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.581 16:54:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:01.581 ************************************ 00:05:01.581 START TEST default_locks 00:05:01.581 ************************************ 00:05:01.581 16:54:01 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:01.581 16:54:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1006955 00:05:01.581 16:54:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:01.581 16:54:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1006955 00:05:01.581 16:54:01 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1006955 ']' 00:05:01.581 16:54:01 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.581 16:54:01 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:01.581 16:54:01 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
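The cpu_locks suite starting here begins with default_locks, which checks that a running spdk_tgt holds its CPU-core file lock. A sketch of that check, reconstructed from the traced cpu_locks.sh and autotest_common.sh lines (paths shortened; the lock is identified only via the spdk_cpu_lock substring grep'd for in the trace):

    source test/common/autotest_common.sh      # provides waitforlisten/killprocess, as traced

    # Launch a single-core SPDK target and wait for its default RPC socket.
    build/bin/spdk_tgt -m 0x1 &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"

    # locks_exist: lslocks lists the locks held by the pid; while the target is
    # alive one of them should reference spdk_cpu_lock.
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock

    # Kill the target; waiting on the dead pid afterwards is expected to fail
    # ("No such process"), which is the negative case traced below.
    killprocess "$spdk_tgt_pid"
    wait "$spdk_tgt_pid" || true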
00:05:01.581 16:54:01 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:01.581 16:54:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:01.581 [2024-07-12 16:54:01.101488] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:05:01.581 [2024-07-12 16:54:01.101574] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1006955 ] 00:05:01.581 EAL: No free 2048 kB hugepages reported on node 1 00:05:01.581 [2024-07-12 16:54:01.162225] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.581 [2024-07-12 16:54:01.269195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.839 16:54:01 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:01.839 16:54:01 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:01.839 16:54:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1006955 00:05:01.839 16:54:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1006955 00:05:01.839 16:54:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:02.095 lslocks: write error 00:05:02.095 16:54:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1006955 00:05:02.095 16:54:01 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 1006955 ']' 00:05:02.095 16:54:01 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 1006955 00:05:02.095 16:54:01 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:02.095 16:54:01 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:02.095 16:54:01 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1006955 00:05:02.351 16:54:01 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:02.351 16:54:01 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:02.351 16:54:01 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1006955' 00:05:02.351 killing process with pid 1006955 00:05:02.351 16:54:01 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 1006955 00:05:02.351 16:54:01 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 1006955 00:05:02.610 16:54:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1006955 00:05:02.610 16:54:02 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:02.610 16:54:02 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1006955 00:05:02.610 16:54:02 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:02.610 16:54:02 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:02.610 16:54:02 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:02.610 16:54:02 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:02.610 16:54:02 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 1006955 00:05:02.610 16:54:02 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1006955 ']' 00:05:02.610 16:54:02 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.610 16:54:02 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:02.610 16:54:02 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.610 16:54:02 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:02.610 16:54:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:02.610 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1006955) - No such process 00:05:02.610 ERROR: process (pid: 1006955) is no longer running 00:05:02.610 16:54:02 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:02.610 16:54:02 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:02.610 16:54:02 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:02.610 16:54:02 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:02.610 16:54:02 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:02.610 16:54:02 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:02.610 16:54:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:02.610 16:54:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:02.610 16:54:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:02.610 16:54:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:02.610 00:05:02.610 real 0m1.189s 00:05:02.610 user 0m1.109s 00:05:02.610 sys 0m0.513s 00:05:02.610 16:54:02 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:02.610 16:54:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:02.610 ************************************ 00:05:02.610 END TEST default_locks 00:05:02.610 ************************************ 00:05:02.610 16:54:02 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:02.610 16:54:02 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:02.610 16:54:02 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:02.610 16:54:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.610 16:54:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:02.610 ************************************ 00:05:02.610 START TEST default_locks_via_rpc 00:05:02.610 ************************************ 00:05:02.610 16:54:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:02.610 16:54:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1007169 00:05:02.610 16:54:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:02.610 16:54:02 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1007169 00:05:02.610 16:54:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1007169 ']' 00:05:02.610 16:54:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.610 16:54:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:02.610 16:54:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.610 16:54:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:02.610 16:54:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.868 [2024-07-12 16:54:02.340650] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:05:02.868 [2024-07-12 16:54:02.340748] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1007169 ] 00:05:02.868 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.868 [2024-07-12 16:54:02.398305] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.869 [2024-07-12 16:54:02.506795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.126 16:54:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:03.126 16:54:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:03.126 16:54:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:03.126 16:54:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.126 16:54:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.126 16:54:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.126 16:54:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:03.126 16:54:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:03.126 16:54:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:03.126 16:54:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:03.126 16:54:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:03.126 16:54:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.126 16:54:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.126 16:54:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.126 16:54:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1007169 00:05:03.126 16:54:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1007169 00:05:03.126 16:54:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:03.382 
16:54:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1007169 00:05:03.382 16:54:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 1007169 ']' 00:05:03.382 16:54:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 1007169 00:05:03.382 16:54:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:03.382 16:54:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:03.382 16:54:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1007169 00:05:03.639 16:54:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:03.639 16:54:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:03.639 16:54:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1007169' 00:05:03.639 killing process with pid 1007169 00:05:03.639 16:54:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 1007169 00:05:03.639 16:54:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 1007169 00:05:03.896 00:05:03.896 real 0m1.247s 00:05:03.896 user 0m1.191s 00:05:03.896 sys 0m0.491s 00:05:03.896 16:54:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.896 16:54:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.896 ************************************ 00:05:03.896 END TEST default_locks_via_rpc 00:05:03.896 ************************************ 00:05:03.896 16:54:03 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:03.896 16:54:03 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:03.896 16:54:03 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.896 16:54:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.896 16:54:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:03.896 ************************************ 00:05:03.896 START TEST non_locking_app_on_locked_coremask 00:05:03.896 ************************************ 00:05:03.896 16:54:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:03.896 16:54:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1007354 00:05:03.896 16:54:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:03.896 16:54:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1007354 /var/tmp/spdk.sock 00:05:04.154 16:54:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1007354 ']' 00:05:04.154 16:54:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.154 16:54:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:04.154 16:54:03 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:04.154 16:54:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:04.154 16:54:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:04.154 [2024-07-12 16:54:03.642344] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:05:04.154 [2024-07-12 16:54:03.642429] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1007354 ] 00:05:04.154 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.154 [2024-07-12 16:54:03.703387] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.154 [2024-07-12 16:54:03.807313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.411 16:54:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:04.411 16:54:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:04.411 16:54:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1007464 00:05:04.412 16:54:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:04.412 16:54:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1007464 /var/tmp/spdk2.sock 00:05:04.412 16:54:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1007464 ']' 00:05:04.412 16:54:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:04.412 16:54:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:04.412 16:54:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:04.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:04.412 16:54:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:04.412 16:54:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:04.412 [2024-07-12 16:54:04.091636] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:05:04.412 [2024-07-12 16:54:04.091727] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1007464 ] 00:05:04.669 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.669 [2024-07-12 16:54:04.175105] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
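Here the first target already holds the core-0 lock, yet this second instance still comes up, because it is launched with cpumask locking disabled and its RPC socket moved to spdk2.sock. A sketch of the two launch lines this test exercises, assuming the build/bin/spdk_tgt binary from this workspace:

  ./build/bin/spdk_tgt -m 0x1 &                                                  # first target: claims the core-0 lock file
  ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # second target: same core, takes no lock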
00:05:04.669 [2024-07-12 16:54:04.175131] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.926 [2024-07-12 16:54:04.391685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.501 16:54:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:05.501 16:54:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:05.501 16:54:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1007354 00:05:05.501 16:54:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1007354 00:05:05.501 16:54:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:06.118 lslocks: write error 00:05:06.118 16:54:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1007354 00:05:06.118 16:54:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1007354 ']' 00:05:06.118 16:54:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1007354 00:05:06.118 16:54:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:06.118 16:54:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:06.118 16:54:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1007354 00:05:06.118 16:54:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:06.118 16:54:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:06.118 16:54:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1007354' 00:05:06.118 killing process with pid 1007354 00:05:06.118 16:54:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1007354 00:05:06.118 16:54:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1007354 00:05:07.052 16:54:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1007464 00:05:07.052 16:54:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1007464 ']' 00:05:07.052 16:54:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1007464 00:05:07.052 16:54:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:07.052 16:54:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:07.052 16:54:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1007464 00:05:07.052 16:54:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:07.052 16:54:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:07.052 16:54:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1007464' 00:05:07.052 
killing process with pid 1007464 00:05:07.052 16:54:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1007464 00:05:07.052 16:54:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1007464 00:05:07.309 00:05:07.309 real 0m3.391s 00:05:07.309 user 0m3.548s 00:05:07.309 sys 0m1.043s 00:05:07.309 16:54:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.309 16:54:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:07.309 ************************************ 00:05:07.309 END TEST non_locking_app_on_locked_coremask 00:05:07.309 ************************************ 00:05:07.309 16:54:07 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:07.309 16:54:07 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:07.309 16:54:07 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.309 16:54:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.309 16:54:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:07.567 ************************************ 00:05:07.567 START TEST locking_app_on_unlocked_coremask 00:05:07.567 ************************************ 00:05:07.567 16:54:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:07.567 16:54:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1007823 00:05:07.567 16:54:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:07.567 16:54:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1007823 /var/tmp/spdk.sock 00:05:07.567 16:54:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1007823 ']' 00:05:07.567 16:54:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.567 16:54:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:07.567 16:54:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.567 16:54:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:07.567 16:54:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:07.567 [2024-07-12 16:54:07.082544] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
00:05:07.567 [2024-07-12 16:54:07.082634] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1007823 ] 00:05:07.567 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.567 [2024-07-12 16:54:07.141701] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:07.567 [2024-07-12 16:54:07.141761] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.567 [2024-07-12 16:54:07.245882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.826 16:54:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:07.826 16:54:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:07.826 16:54:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1007994 00:05:07.826 16:54:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:07.826 16:54:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1007994 /var/tmp/spdk2.sock 00:05:07.826 16:54:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1007994 ']' 00:05:07.826 16:54:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:07.826 16:54:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:07.826 16:54:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:07.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:07.826 16:54:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:07.826 16:54:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:08.084 [2024-07-12 16:54:07.541553] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
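locking_app_on_unlocked_coremask inverts the previous case: the first target runs with --disable-cpumask-locks, leaving core 0 unlocked, so a second, normally locking target on the same mask can claim the lock itself. Sketch of the launch order under the same assumptions as above:

  ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &    # leaves core 0 unlocked
  ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &     # second target takes the core-0 lock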
00:05:08.084 [2024-07-12 16:54:07.541634] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1007994 ] 00:05:08.084 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.084 [2024-07-12 16:54:07.633098] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.342 [2024-07-12 16:54:07.847334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.907 16:54:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:08.907 16:54:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:08.907 16:54:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1007994 00:05:08.907 16:54:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1007994 00:05:08.907 16:54:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:09.473 lslocks: write error 00:05:09.473 16:54:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1007823 00:05:09.473 16:54:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1007823 ']' 00:05:09.473 16:54:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1007823 00:05:09.473 16:54:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:09.473 16:54:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:09.473 16:54:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1007823 00:05:09.473 16:54:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:09.473 16:54:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:09.473 16:54:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1007823' 00:05:09.473 killing process with pid 1007823 00:05:09.473 16:54:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1007823 00:05:09.473 16:54:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1007823 00:05:10.406 16:54:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1007994 00:05:10.406 16:54:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1007994 ']' 00:05:10.406 16:54:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1007994 00:05:10.406 16:54:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:10.406 16:54:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:10.406 16:54:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1007994 00:05:10.406 16:54:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:05:10.406 16:54:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:10.406 16:54:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1007994' 00:05:10.406 killing process with pid 1007994 00:05:10.406 16:54:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1007994 00:05:10.406 16:54:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1007994 00:05:10.663 00:05:10.663 real 0m3.186s 00:05:10.663 user 0m3.380s 00:05:10.663 sys 0m1.012s 00:05:10.663 16:54:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.663 16:54:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:10.663 ************************************ 00:05:10.663 END TEST locking_app_on_unlocked_coremask 00:05:10.663 ************************************ 00:05:10.663 16:54:10 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:10.663 16:54:10 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:10.663 16:54:10 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.663 16:54:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.663 16:54:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:10.663 ************************************ 00:05:10.663 START TEST locking_app_on_locked_coremask 00:05:10.663 ************************************ 00:05:10.663 16:54:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:10.663 16:54:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1008714 00:05:10.663 16:54:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:10.663 16:54:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1008714 /var/tmp/spdk.sock 00:05:10.663 16:54:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1008714 ']' 00:05:10.663 16:54:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.663 16:54:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:10.663 16:54:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.663 16:54:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:10.663 16:54:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:10.663 [2024-07-12 16:54:10.319132] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
00:05:10.663 [2024-07-12 16:54:10.319230] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1008714 ] 00:05:10.663 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.921 [2024-07-12 16:54:10.379842] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.921 [2024-07-12 16:54:10.488990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.179 16:54:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:11.179 16:54:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:11.179 16:54:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1008838 00:05:11.179 16:54:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:11.179 16:54:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1008838 /var/tmp/spdk2.sock 00:05:11.179 16:54:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:11.179 16:54:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1008838 /var/tmp/spdk2.sock 00:05:11.179 16:54:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:11.179 16:54:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:11.179 16:54:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:11.179 16:54:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:11.179 16:54:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1008838 /var/tmp/spdk2.sock 00:05:11.179 16:54:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1008838 ']' 00:05:11.179 16:54:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:11.179 16:54:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:11.179 16:54:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:11.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:11.179 16:54:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:11.179 16:54:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:11.179 [2024-07-12 16:54:10.780801] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
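In locking_app_on_locked_coremask both instances want the lock: the first target (pid 1008714) has claimed core 0, and the second is started on the same mask without --disable-cpumask-locks, so its spdk_app_start is expected to abort, as the "Cannot create lock on core 0" failure just below records; the NOT wrapper around waitforlisten turns that expected failure into a pass. Roughly:

  # expected to fail while pid 1008714 holds the core-0 lock
  ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
  # -> Cannot create lock on core 0, probably process 1008714 has claimed it.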
00:05:11.179 [2024-07-12 16:54:10.780893] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1008838 ] 00:05:11.179 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.179 [2024-07-12 16:54:10.862156] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1008714 has claimed it. 00:05:11.179 [2024-07-12 16:54:10.862210] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:12.110 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1008838) - No such process 00:05:12.110 ERROR: process (pid: 1008838) is no longer running 00:05:12.110 16:54:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:12.110 16:54:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:12.110 16:54:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:12.110 16:54:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:12.110 16:54:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:12.110 16:54:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:12.110 16:54:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1008714 00:05:12.110 16:54:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1008714 00:05:12.110 16:54:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:12.368 lslocks: write error 00:05:12.368 16:54:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1008714 00:05:12.368 16:54:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1008714 ']' 00:05:12.368 16:54:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1008714 00:05:12.368 16:54:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:12.368 16:54:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:12.368 16:54:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1008714 00:05:12.368 16:54:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:12.368 16:54:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:12.368 16:54:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1008714' 00:05:12.368 killing process with pid 1008714 00:05:12.368 16:54:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1008714 00:05:12.368 16:54:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1008714 00:05:12.627 00:05:12.627 real 0m2.034s 00:05:12.627 user 0m2.204s 00:05:12.627 sys 0m0.649s 00:05:12.627 16:54:12 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.627 16:54:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:12.627 ************************************ 00:05:12.627 END TEST locking_app_on_locked_coremask 00:05:12.627 ************************************ 00:05:12.886 16:54:12 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:12.886 16:54:12 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:12.886 16:54:12 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:12.886 16:54:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.886 16:54:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:12.886 ************************************ 00:05:12.886 START TEST locking_overlapped_coremask 00:05:12.886 ************************************ 00:05:12.886 16:54:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:12.886 16:54:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1009012 00:05:12.886 16:54:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:12.886 16:54:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1009012 /var/tmp/spdk.sock 00:05:12.886 16:54:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1009012 ']' 00:05:12.886 16:54:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.886 16:54:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:12.886 16:54:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.886 16:54:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:12.886 16:54:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:12.886 [2024-07-12 16:54:12.410435] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
00:05:12.886 [2024-07-12 16:54:12.410534] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1009012 ] 00:05:12.886 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.886 [2024-07-12 16:54:12.467775] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:13.144 [2024-07-12 16:54:12.581371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.144 [2024-07-12 16:54:12.581424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:13.144 [2024-07-12 16:54:12.581427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.144 16:54:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:13.144 16:54:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:13.144 16:54:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1009138 00:05:13.144 16:54:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1009138 /var/tmp/spdk2.sock 00:05:13.144 16:54:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:13.144 16:54:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1009138 /var/tmp/spdk2.sock 00:05:13.144 16:54:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:13.144 16:54:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:13.144 16:54:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:13.144 16:54:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:13.144 16:54:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:13.144 16:54:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1009138 /var/tmp/spdk2.sock 00:05:13.144 16:54:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1009138 ']' 00:05:13.144 16:54:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:13.144 16:54:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:13.144 16:54:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:13.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:13.144 16:54:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:13.144 16:54:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:13.400 [2024-07-12 16:54:12.883953] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
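The two masks in this test overlap on exactly one core: 0x7 is binary 00111 (cores 0-2) and 0x1c is binary 11100 (cores 2-4), so core 2 is the only contested core, which is why the failure below names core 2. A one-liner to confirm the collision with plain bash arithmetic:

  printf 'contested cores: 0x%x\n' $(( 0x7 & 0x1c ))    # prints 0x4, i.e. only core 2 is in both masks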
00:05:13.400 [2024-07-12 16:54:12.884042] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1009138 ] 00:05:13.400 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.400 [2024-07-12 16:54:12.972769] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1009012 has claimed it. 00:05:13.400 [2024-07-12 16:54:12.972831] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:13.964 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1009138) - No such process 00:05:13.964 ERROR: process (pid: 1009138) is no longer running 00:05:13.964 16:54:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:13.964 16:54:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:13.964 16:54:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:13.964 16:54:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:13.964 16:54:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:13.964 16:54:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:13.964 16:54:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:13.964 16:54:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:13.964 16:54:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:13.964 16:54:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:13.964 16:54:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1009012 00:05:13.964 16:54:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 1009012 ']' 00:05:13.964 16:54:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 1009012 00:05:13.964 16:54:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:13.964 16:54:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:13.964 16:54:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1009012 00:05:13.964 16:54:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:13.964 16:54:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:13.964 16:54:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1009012' 00:05:13.964 killing process with pid 1009012 00:05:13.964 16:54:13 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 1009012 00:05:13.964 16:54:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 1009012 00:05:14.531 00:05:14.531 real 0m1.680s 00:05:14.531 user 0m4.439s 00:05:14.531 sys 0m0.464s 00:05:14.531 16:54:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.531 16:54:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.531 ************************************ 00:05:14.531 END TEST locking_overlapped_coremask 00:05:14.531 ************************************ 00:05:14.531 16:54:14 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:14.531 16:54:14 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:14.531 16:54:14 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:14.531 16:54:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.531 16:54:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:14.531 ************************************ 00:05:14.531 START TEST locking_overlapped_coremask_via_rpc 00:05:14.531 ************************************ 00:05:14.531 16:54:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:14.531 16:54:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1009300 00:05:14.531 16:54:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:14.531 16:54:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1009300 /var/tmp/spdk.sock 00:05:14.531 16:54:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1009300 ']' 00:05:14.531 16:54:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.531 16:54:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:14.531 16:54:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.531 16:54:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:14.531 16:54:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.531 [2024-07-12 16:54:14.139380] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:05:14.531 [2024-07-12 16:54:14.139470] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1009300 ] 00:05:14.531 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.531 [2024-07-12 16:54:14.197874] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
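The check_remaining_locks step a few lines above compares the surviving lock files against the set expected for mask 0x7, confirming that the failed 0x1c launch left the first target's locks intact. An equivalent manual check, assuming the default lock-file location:

  ls /var/tmp/spdk_cpu_lock_*    # expected: _000 _001 _002 only, one per core of mask 0x7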
00:05:14.531 [2024-07-12 16:54:14.197912] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:14.789 [2024-07-12 16:54:14.307366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.789 [2024-07-12 16:54:14.307427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:14.789 [2024-07-12 16:54:14.307430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.047 16:54:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:15.047 16:54:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:15.047 16:54:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1009314 00:05:15.047 16:54:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1009314 /var/tmp/spdk2.sock 00:05:15.047 16:54:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:15.047 16:54:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1009314 ']' 00:05:15.047 16:54:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:15.047 16:54:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:15.047 16:54:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:15.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:15.047 16:54:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:15.047 16:54:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.047 [2024-07-12 16:54:14.608436] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:05:15.047 [2024-07-12 16:54:14.608520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1009314 ] 00:05:15.047 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.047 [2024-07-12 16:54:14.695301] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
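The via_rpc variant starts both targets with --disable-cpumask-locks, so neither holds a lock at boot; the locks are then taken at runtime through the framework_enable_cpumask_locks RPC seen below, with framework_disable_cpumask_locks as its counterpart. Sketch of the runtime toggle, assuming the standard scripts/rpc.py client against the default /var/tmp/spdk.sock:

  ./scripts/rpc.py framework_enable_cpumask_locks     # first caller claims /var/tmp/spdk_cpu_lock_* for its cores
  ./scripts/rpc.py framework_disable_cpumask_locks    # releases them so another process may claim the cores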
00:05:15.047 [2024-07-12 16:54:14.695342] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:15.305 [2024-07-12 16:54:14.920163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:15.305 [2024-07-12 16:54:14.923797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:15.305 [2024-07-12 16:54:14.923799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:15.871 16:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:15.871 16:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:15.871 16:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:15.871 16:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.871 16:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.871 16:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.871 16:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:15.871 16:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:15.871 16:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:15.871 16:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:15.871 16:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:15.871 16:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:15.871 16:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:15.871 16:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:15.871 16:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.871 16:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.871 [2024-07-12 16:54:15.549834] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1009300 has claimed it. 
00:05:15.871 request: 00:05:15.871 { 00:05:15.871 "method": "framework_enable_cpumask_locks", 00:05:15.871 "req_id": 1 00:05:15.871 } 00:05:15.871 Got JSON-RPC error response 00:05:15.871 response: 00:05:15.871 { 00:05:15.871 "code": -32603, 00:05:15.871 "message": "Failed to claim CPU core: 2" 00:05:15.871 } 00:05:15.872 16:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:15.872 16:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:15.872 16:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:15.872 16:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:15.872 16:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:15.872 16:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1009300 /var/tmp/spdk.sock 00:05:15.872 16:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1009300 ']' 00:05:15.872 16:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.872 16:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:15.872 16:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.872 16:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:15.872 16:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.129 16:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:16.129 16:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:16.129 16:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1009314 /var/tmp/spdk2.sock 00:05:16.129 16:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1009314 ']' 00:05:16.129 16:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:16.129 16:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:16.129 16:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:16.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
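The -32603 response above is the expected result for this case: the first target (pid 1009300, started earlier on cores 0-2) already holds the lock file for core 2, so the second target (pid 1009314, core mask 0x1c, launched with --disable-cpumask-locks) cannot claim it when framework_enable_cpumask_locks is invoked over /var/tmp/spdk2.sock. The trace that follows then re-checks that only the first target's lock files remain; a minimal sketch of that check, using the file names shown in this log, looks like:

  locks=(/var/tmp/spdk_cpu_lock_*)                      # lock files currently present
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})    # cores 0-2, held by pid 1009300
  [[ ${locks[*]} == "${locks_expected[*]}" ]] && echo 'only the expected core locks remain'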
00:05:16.129 16:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:16.129 16:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.387 16:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:16.387 16:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:16.387 16:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:16.387 16:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:16.387 16:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:16.387 16:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:16.387 00:05:16.387 real 0m1.972s 00:05:16.387 user 0m1.026s 00:05:16.387 sys 0m0.159s 00:05:16.387 16:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.387 16:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.387 ************************************ 00:05:16.387 END TEST locking_overlapped_coremask_via_rpc 00:05:16.387 ************************************ 00:05:16.387 16:54:16 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:16.387 16:54:16 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:16.387 16:54:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1009300 ]] 00:05:16.387 16:54:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1009300 00:05:16.387 16:54:16 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1009300 ']' 00:05:16.387 16:54:16 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1009300 00:05:16.644 16:54:16 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:16.644 16:54:16 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:16.644 16:54:16 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1009300 00:05:16.644 16:54:16 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:16.644 16:54:16 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:16.644 16:54:16 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1009300' 00:05:16.644 killing process with pid 1009300 00:05:16.644 16:54:16 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1009300 00:05:16.644 16:54:16 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1009300 00:05:16.903 16:54:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1009314 ]] 00:05:16.903 16:54:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1009314 00:05:16.903 16:54:16 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1009314 ']' 00:05:16.903 16:54:16 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1009314 00:05:16.903 16:54:16 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:05:16.903 16:54:16 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:16.903 16:54:16 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1009314 00:05:16.903 16:54:16 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:16.903 16:54:16 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:16.903 16:54:16 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1009314' 00:05:16.903 killing process with pid 1009314 00:05:16.903 16:54:16 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1009314 00:05:16.903 16:54:16 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1009314 00:05:17.469 16:54:17 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:17.469 16:54:17 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:17.469 16:54:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1009300 ]] 00:05:17.469 16:54:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1009300 00:05:17.469 16:54:17 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1009300 ']' 00:05:17.469 16:54:17 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1009300 00:05:17.469 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1009300) - No such process 00:05:17.469 16:54:17 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1009300 is not found' 00:05:17.469 Process with pid 1009300 is not found 00:05:17.469 16:54:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1009314 ]] 00:05:17.469 16:54:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1009314 00:05:17.469 16:54:17 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1009314 ']' 00:05:17.469 16:54:17 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1009314 00:05:17.469 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1009314) - No such process 00:05:17.469 16:54:17 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1009314 is not found' 00:05:17.469 Process with pid 1009314 is not found 00:05:17.469 16:54:17 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:17.469 00:05:17.469 real 0m16.058s 00:05:17.469 user 0m27.793s 00:05:17.469 sys 0m5.228s 00:05:17.469 16:54:17 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.469 16:54:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:17.469 ************************************ 00:05:17.469 END TEST cpu_locks 00:05:17.469 ************************************ 00:05:17.469 16:54:17 event -- common/autotest_common.sh@1142 -- # return 0 00:05:17.469 00:05:17.469 real 0m39.985s 00:05:17.469 user 1m15.738s 00:05:17.469 sys 0m9.206s 00:05:17.469 16:54:17 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.469 16:54:17 event -- common/autotest_common.sh@10 -- # set +x 00:05:17.469 ************************************ 00:05:17.469 END TEST event 00:05:17.469 ************************************ 00:05:17.469 16:54:17 -- common/autotest_common.sh@1142 -- # return 0 00:05:17.469 16:54:17 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:17.469 16:54:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.469 16:54:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.469 
16:54:17 -- common/autotest_common.sh@10 -- # set +x 00:05:17.469 ************************************ 00:05:17.469 START TEST thread 00:05:17.469 ************************************ 00:05:17.469 16:54:17 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:17.469 * Looking for test storage... 00:05:17.727 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:17.727 16:54:17 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:17.727 16:54:17 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:17.727 16:54:17 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.727 16:54:17 thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.727 ************************************ 00:05:17.727 START TEST thread_poller_perf 00:05:17.727 ************************************ 00:05:17.727 16:54:17 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:17.727 [2024-07-12 16:54:17.205323] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:05:17.727 [2024-07-12 16:54:17.205399] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1009715 ] 00:05:17.727 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.727 [2024-07-12 16:54:17.267067] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.727 [2024-07-12 16:54:17.377135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.727 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:19.099 ====================================== 00:05:19.099 busy:2712502989 (cyc) 00:05:19.099 total_run_count: 366000 00:05:19.099 tsc_hz: 2700000000 (cyc) 00:05:19.099 ====================================== 00:05:19.099 poller_cost: 7411 (cyc), 2744 (nsec) 00:05:19.099 00:05:19.099 real 0m1.304s 00:05:19.099 user 0m1.212s 00:05:19.099 sys 0m0.087s 00:05:19.099 16:54:18 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.099 16:54:18 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:19.099 ************************************ 00:05:19.099 END TEST thread_poller_perf 00:05:19.099 ************************************ 00:05:19.099 16:54:18 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:19.099 16:54:18 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:19.099 16:54:18 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:19.099 16:54:18 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.099 16:54:18 thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.099 ************************************ 00:05:19.099 START TEST thread_poller_perf 00:05:19.099 ************************************ 00:05:19.099 16:54:18 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:19.099 [2024-07-12 16:54:18.563868] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:05:19.099 [2024-07-12 16:54:18.563936] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1009953 ] 00:05:19.099 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.099 [2024-07-12 16:54:18.622106] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.099 [2024-07-12 16:54:18.729894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.099 Running 1000 pollers for 1 seconds with 0 microseconds period. 
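For the 1-microsecond-period run summarized above, poller_cost appears to be the busy cycle count divided by total_run_count, converted to nanoseconds via tsc_hz; redoing the arithmetic from the printed figures (bc assumed to be available):

  echo '2712502989 / 366000' | bc                 # ~7411 cycles per poll
  echo '7411 * 1000000000 / 2700000000' | bc      # ~2744 ns at a 2.7 GHz TSC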
00:05:20.492 ====================================== 00:05:20.492 busy:2702159994 (cyc) 00:05:20.492 total_run_count: 4873000 00:05:20.492 tsc_hz: 2700000000 (cyc) 00:05:20.492 ====================================== 00:05:20.492 poller_cost: 554 (cyc), 205 (nsec) 00:05:20.492 00:05:20.492 real 0m1.294s 00:05:20.492 user 0m1.215s 00:05:20.492 sys 0m0.073s 00:05:20.492 16:54:19 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.492 16:54:19 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:20.492 ************************************ 00:05:20.492 END TEST thread_poller_perf 00:05:20.492 ************************************ 00:05:20.492 16:54:19 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:20.492 16:54:19 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:20.492 00:05:20.492 real 0m2.753s 00:05:20.492 user 0m2.500s 00:05:20.492 sys 0m0.254s 00:05:20.492 16:54:19 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.492 16:54:19 thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.492 ************************************ 00:05:20.492 END TEST thread 00:05:20.492 ************************************ 00:05:20.492 16:54:19 -- common/autotest_common.sh@1142 -- # return 0 00:05:20.492 16:54:19 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:20.492 16:54:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:20.492 16:54:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.492 16:54:19 -- common/autotest_common.sh@10 -- # set +x 00:05:20.492 ************************************ 00:05:20.492 START TEST accel 00:05:20.492 ************************************ 00:05:20.492 16:54:19 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:20.492 * Looking for test storage... 00:05:20.492 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:20.492 16:54:19 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:20.492 16:54:19 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:20.492 16:54:19 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:20.492 16:54:19 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1010154 00:05:20.492 16:54:19 accel -- accel/accel.sh@63 -- # waitforlisten 1010154 00:05:20.492 16:54:19 accel -- common/autotest_common.sh@829 -- # '[' -z 1010154 ']' 00:05:20.492 16:54:19 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.492 16:54:19 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:20.492 16:54:19 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:20.492 16:54:19 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
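The 0-microsecond (busy-loop) run above works out the same way: 2702159994 / 4873000 is roughly 554 cycles, about 205 ns per poll at 2.7 GHz. The per-poll cost is roughly 13x lower than in the timed run, plausibly because an untimed poller skips the period bookkeeping a timed poller does on every invocation.

  echo '2702159994 / 4873000' | bc    # ~554 cycles, ~205 ns per poll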
00:05:20.492 16:54:19 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:20.492 16:54:19 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:20.492 16:54:19 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:20.492 16:54:19 accel -- common/autotest_common.sh@10 -- # set +x 00:05:20.492 16:54:19 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:20.492 16:54:19 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:20.492 16:54:19 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:20.492 16:54:19 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:20.492 16:54:19 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:20.492 16:54:19 accel -- accel/accel.sh@41 -- # jq -r . 00:05:20.492 [2024-07-12 16:54:20.019343] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:05:20.492 [2024-07-12 16:54:20.019450] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1010154 ] 00:05:20.492 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.492 [2024-07-12 16:54:20.076788] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.492 [2024-07-12 16:54:20.182661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.749 16:54:20 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:20.749 16:54:20 accel -- common/autotest_common.sh@862 -- # return 0 00:05:20.749 16:54:20 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:20.749 16:54:20 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:20.749 16:54:20 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:20.749 16:54:20 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:20.749 16:54:20 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:20.749 16:54:20 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:20.749 16:54:20 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:20.749 16:54:20 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.749 16:54:20 accel -- common/autotest_common.sh@10 -- # set +x 00:05:20.749 16:54:20 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:21.006 16:54:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:21.006 16:54:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:21.006 16:54:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:21.006 16:54:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:21.006 16:54:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:21.006 16:54:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:21.006 16:54:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:21.007 16:54:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:21.007 16:54:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:21.007 16:54:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:21.007 16:54:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:21.007 16:54:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:21.007 16:54:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:21.007 16:54:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:21.007 16:54:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:21.007 16:54:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:21.007 16:54:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:21.007 16:54:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:21.007 16:54:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:21.007 16:54:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:21.007 16:54:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:21.007 16:54:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:21.007 16:54:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:21.007 16:54:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:21.007 16:54:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:21.007 16:54:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:21.007 16:54:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:21.007 16:54:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:21.007 16:54:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:21.007 16:54:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:21.007 16:54:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:21.007 16:54:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:21.007 16:54:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:21.007 16:54:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:21.007 16:54:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:21.007 16:54:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:21.007 16:54:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:21.007 16:54:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:21.007 16:54:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:21.007 16:54:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:21.007 16:54:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:21.007 16:54:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:21.007 16:54:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:21.007 
16:54:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:21.007 16:54:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:21.007 16:54:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:21.007 16:54:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:21.007 16:54:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:21.007 16:54:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:21.007 16:54:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:21.007 16:54:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:21.007 16:54:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:21.007 16:54:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:21.007 16:54:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:21.007 16:54:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:21.007 16:54:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:21.007 16:54:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:21.007 16:54:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:21.007 16:54:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:21.007 16:54:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:21.007 16:54:20 accel -- accel/accel.sh@75 -- # killprocess 1010154 00:05:21.007 16:54:20 accel -- common/autotest_common.sh@948 -- # '[' -z 1010154 ']' 00:05:21.007 16:54:20 accel -- common/autotest_common.sh@952 -- # kill -0 1010154 00:05:21.007 16:54:20 accel -- common/autotest_common.sh@953 -- # uname 00:05:21.007 16:54:20 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:21.007 16:54:20 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1010154 00:05:21.007 16:54:20 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:21.007 16:54:20 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:21.007 16:54:20 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1010154' 00:05:21.007 killing process with pid 1010154 00:05:21.007 16:54:20 accel -- common/autotest_common.sh@967 -- # kill 1010154 00:05:21.007 16:54:20 accel -- common/autotest_common.sh@972 -- # wait 1010154 00:05:21.266 16:54:20 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:21.266 16:54:20 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:21.266 16:54:20 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:21.266 16:54:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.266 16:54:20 accel -- common/autotest_common.sh@10 -- # set +x 00:05:21.266 16:54:20 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:05:21.266 16:54:20 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:21.266 16:54:20 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:21.266 16:54:20 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:21.266 16:54:20 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:21.266 16:54:20 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:21.266 16:54:20 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:21.266 16:54:20 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:21.266 16:54:20 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:21.266 16:54:20 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:05:21.525 16:54:20 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.525 16:54:20 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:21.525 16:54:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:21.525 16:54:20 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:21.525 16:54:20 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:21.525 16:54:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.525 16:54:20 accel -- common/autotest_common.sh@10 -- # set +x 00:05:21.525 ************************************ 00:05:21.525 START TEST accel_missing_filename 00:05:21.525 ************************************ 00:05:21.525 16:54:21 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:05:21.525 16:54:21 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:21.525 16:54:21 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:21.525 16:54:21 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:21.525 16:54:21 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:21.525 16:54:21 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:21.525 16:54:21 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:21.525 16:54:21 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:21.525 16:54:21 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:21.525 16:54:21 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:21.525 16:54:21 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:21.525 16:54:21 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:21.525 16:54:21 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:21.525 16:54:21 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:21.525 16:54:21 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:21.525 16:54:21 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:21.525 16:54:21 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:21.525 [2024-07-12 16:54:21.038557] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:05:21.525 [2024-07-12 16:54:21.038615] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1010324 ] 00:05:21.525 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.525 [2024-07-12 16:54:21.094473] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.525 [2024-07-12 16:54:21.197175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.783 [2024-07-12 16:54:21.255051] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:21.783 [2024-07-12 16:54:21.329481] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:05:21.783 A filename is required. 
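The accel_missing_filename case above only exercises the error path: accel_perf is started with -w compress and no input file, and the expected 'A filename is required.' abort is what the NOT wrapper turns into a pass. A sketch of the corresponding well-formed invocation, using the input file the later compress test in this log points at (not re-run here):

  # compress/decompress workloads take their uncompressed input via -l
  ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib

The -y flag is deliberately left out of the sketch; the accel_compress_verify case that follows shows the compress workload rejects the verify option.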
00:05:21.783 16:54:21 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:21.783 16:54:21 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:21.783 16:54:21 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:21.783 16:54:21 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:21.783 16:54:21 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:21.783 16:54:21 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:21.783 00:05:21.783 real 0m0.419s 00:05:21.783 user 0m0.325s 00:05:21.783 sys 0m0.125s 00:05:21.783 16:54:21 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.783 16:54:21 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:21.783 ************************************ 00:05:21.783 END TEST accel_missing_filename 00:05:21.783 ************************************ 00:05:21.783 16:54:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:21.783 16:54:21 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:21.783 16:54:21 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:21.783 16:54:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.783 16:54:21 accel -- common/autotest_common.sh@10 -- # set +x 00:05:22.041 ************************************ 00:05:22.041 START TEST accel_compress_verify 00:05:22.041 ************************************ 00:05:22.041 16:54:21 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:22.041 16:54:21 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:22.041 16:54:21 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:22.041 16:54:21 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:22.041 16:54:21 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:22.041 16:54:21 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:22.041 16:54:21 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:22.041 16:54:21 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:22.041 16:54:21 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:22.041 16:54:21 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:22.041 16:54:21 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:22.041 16:54:21 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:22.041 16:54:21 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:22.041 16:54:21 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:22.041 16:54:21 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:22.041 16:54:21 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:22.041 16:54:21 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:22.041 [2024-07-12 16:54:21.513759] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:05:22.041 [2024-07-12 16:54:21.513832] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1010351 ] 00:05:22.041 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.041 [2024-07-12 16:54:21.570148] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.041 [2024-07-12 16:54:21.674173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.298 [2024-07-12 16:54:21.735176] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:22.298 [2024-07-12 16:54:21.812301] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:05:22.298 00:05:22.298 Compression does not support the verify option, aborting. 00:05:22.298 16:54:21 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:22.298 16:54:21 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:22.298 16:54:21 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:22.298 16:54:21 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:22.298 16:54:21 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:22.298 16:54:21 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:22.298 00:05:22.298 real 0m0.431s 00:05:22.298 user 0m0.332s 00:05:22.298 sys 0m0.134s 00:05:22.298 16:54:21 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.298 16:54:21 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:22.298 ************************************ 00:05:22.298 END TEST accel_compress_verify 00:05:22.298 ************************************ 00:05:22.298 16:54:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:22.298 16:54:21 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:22.298 16:54:21 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:22.298 16:54:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.298 16:54:21 accel -- common/autotest_common.sh@10 -- # set +x 00:05:22.298 ************************************ 00:05:22.298 START TEST accel_wrong_workload 00:05:22.298 ************************************ 00:05:22.298 16:54:21 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:05:22.299 16:54:21 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:22.299 16:54:21 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:22.299 16:54:21 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:22.299 16:54:21 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:22.299 16:54:21 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:22.299 16:54:21 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:22.299 16:54:21 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:05:22.299 16:54:21 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:22.299 16:54:21 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:22.299 16:54:21 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:22.299 16:54:21 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:22.299 16:54:21 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:22.299 16:54:21 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:22.299 16:54:21 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:22.299 16:54:21 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:22.299 16:54:21 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:22.299 Unsupported workload type: foobar 00:05:22.299 [2024-07-12 16:54:21.989449] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:22.556 accel_perf options: 00:05:22.556 [-h help message] 00:05:22.556 [-q queue depth per core] 00:05:22.556 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:22.556 [-T number of threads per core 00:05:22.556 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:22.556 [-t time in seconds] 00:05:22.556 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:22.556 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:22.556 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:22.556 [-l for compress/decompress workloads, name of uncompressed input file 00:05:22.556 [-S for crc32c workload, use this seed value (default 0) 00:05:22.556 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:22.556 [-f for fill workload, use this BYTE value (default 255) 00:05:22.556 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:22.556 [-y verify result if this switch is on] 00:05:22.556 [-a tasks to allocate per core (default: same value as -q)] 00:05:22.556 Can be used to spread operations across a wider range of memory. 
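The option summary printed above (in response to the rejected '-w foobar') maps directly onto the invocations used by the remaining accel cases in this log. For example, the crc32c case further down boils down to roughly the following, with queue depth, transfer size and thread count left at their defaults (the harness additionally passes -c /dev/fd/62, which appears to carry the generated accel JSON config):

  ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y    # 1 second, crc32c workload, seed 32, verify results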
00:05:22.556 16:54:21 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:22.556 16:54:21 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:22.556 16:54:21 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:22.556 16:54:21 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:22.556 00:05:22.556 real 0m0.022s 00:05:22.556 user 0m0.015s 00:05:22.556 sys 0m0.006s 00:05:22.556 16:54:21 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.556 16:54:21 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:22.556 ************************************ 00:05:22.556 END TEST accel_wrong_workload 00:05:22.556 ************************************ 00:05:22.556 Error: writing output failed: Broken pipe 00:05:22.556 16:54:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:22.556 16:54:22 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:22.556 16:54:22 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:22.556 16:54:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.556 16:54:22 accel -- common/autotest_common.sh@10 -- # set +x 00:05:22.556 ************************************ 00:05:22.556 START TEST accel_negative_buffers 00:05:22.556 ************************************ 00:05:22.556 16:54:22 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:22.556 16:54:22 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:22.556 16:54:22 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:22.556 16:54:22 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:22.556 16:54:22 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:22.556 16:54:22 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:22.556 16:54:22 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:22.556 16:54:22 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:22.556 16:54:22 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:22.556 16:54:22 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:22.556 16:54:22 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:22.556 16:54:22 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:22.556 16:54:22 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:22.557 16:54:22 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:22.557 16:54:22 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:22.557 16:54:22 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:22.557 16:54:22 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:22.557 -x option must be non-negative. 
00:05:22.557 [2024-07-12 16:54:22.058529] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:22.557 accel_perf options: 00:05:22.557 [-h help message] 00:05:22.557 [-q queue depth per core] 00:05:22.557 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:22.557 [-T number of threads per core 00:05:22.557 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:22.557 [-t time in seconds] 00:05:22.557 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:22.557 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:22.557 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:22.557 [-l for compress/decompress workloads, name of uncompressed input file 00:05:22.557 [-S for crc32c workload, use this seed value (default 0) 00:05:22.557 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:22.557 [-f for fill workload, use this BYTE value (default 255) 00:05:22.557 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:22.557 [-y verify result if this switch is on] 00:05:22.557 [-a tasks to allocate per core (default: same value as -q)] 00:05:22.557 Can be used to spread operations across a wider range of memory. 00:05:22.557 16:54:22 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:22.557 16:54:22 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:22.557 16:54:22 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:22.557 16:54:22 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:22.557 00:05:22.557 real 0m0.023s 00:05:22.557 user 0m0.013s 00:05:22.557 sys 0m0.009s 00:05:22.557 16:54:22 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.557 16:54:22 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:22.557 ************************************ 00:05:22.557 END TEST accel_negative_buffers 00:05:22.557 ************************************ 00:05:22.557 Error: writing output failed: Broken pipe 00:05:22.557 16:54:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:22.557 16:54:22 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:22.557 16:54:22 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:22.557 16:54:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.557 16:54:22 accel -- common/autotest_common.sh@10 -- # set +x 00:05:22.557 ************************************ 00:05:22.557 START TEST accel_crc32c 00:05:22.557 ************************************ 00:05:22.557 16:54:22 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:22.557 16:54:22 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:22.557 16:54:22 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:22.557 16:54:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:22.557 16:54:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:22.557 16:54:22 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:22.557 16:54:22 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:22.557 16:54:22 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:22.557 16:54:22 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:22.557 16:54:22 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:22.557 16:54:22 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:22.557 16:54:22 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:22.557 16:54:22 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:22.557 16:54:22 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:22.557 16:54:22 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:22.557 [2024-07-12 16:54:22.128717] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:05:22.557 [2024-07-12 16:54:22.128819] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1010531 ] 00:05:22.557 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.557 [2024-07-12 16:54:22.186715] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.814 [2024-07-12 16:54:22.288894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:22.814 16:54:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.182 16:54:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:24.182 16:54:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:05:24.182 16:54:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.182 16:54:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.183 16:54:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:24.183 16:54:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.183 16:54:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.183 16:54:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.183 16:54:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:24.183 16:54:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.183 16:54:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.183 16:54:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.183 16:54:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:24.183 16:54:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.183 16:54:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.183 16:54:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.183 16:54:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:24.183 16:54:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.183 16:54:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.183 16:54:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.183 16:54:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:24.183 16:54:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.183 16:54:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.183 16:54:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.183 16:54:23 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:24.183 16:54:23 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:24.183 16:54:23 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:24.183 00:05:24.183 real 0m1.422s 00:05:24.183 user 0m1.293s 00:05:24.183 sys 0m0.131s 00:05:24.183 16:54:23 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.183 16:54:23 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:24.183 ************************************ 00:05:24.183 END TEST accel_crc32c 00:05:24.183 ************************************ 00:05:24.183 16:54:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:24.183 16:54:23 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:24.183 16:54:23 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:24.183 16:54:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.183 16:54:23 accel -- common/autotest_common.sh@10 -- # set +x 00:05:24.183 ************************************ 00:05:24.183 START TEST accel_crc32c_C2 00:05:24.183 ************************************ 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:24.183 16:54:23 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:24.183 [2024-07-12 16:54:23.600123] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:05:24.183 [2024-07-12 16:54:23.600186] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1010689 ] 00:05:24.183 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.183 [2024-07-12 16:54:23.657593] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.183 [2024-07-12 16:54:23.763499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:24.183 16:54:23 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:05:24.183 16:54:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:25.554 16:54:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:25.554 16:54:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.554 16:54:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:25.554 16:54:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:25.554 16:54:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:25.554 16:54:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.554 16:54:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:25.554 16:54:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:25.554 16:54:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:25.554 16:54:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.554 16:54:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:25.554 16:54:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:25.554 16:54:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:25.554 16:54:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.554 16:54:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:25.554 16:54:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:25.554 16:54:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:25.554 16:54:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.554 16:54:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:25.555 16:54:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:25.555 16:54:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:25.555 16:54:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.555 16:54:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:25.555 16:54:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:25.555 16:54:25 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:25.555 16:54:25 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:25.555 16:54:25 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:25.555 00:05:25.555 real 0m1.435s 00:05:25.555 user 0m1.302s 00:05:25.555 sys 0m0.136s 00:05:25.555 16:54:25 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.555 16:54:25 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:25.555 ************************************ 00:05:25.555 END TEST accel_crc32c_C2 00:05:25.555 ************************************ 00:05:25.555 16:54:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:25.555 16:54:25 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:25.555 16:54:25 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:25.555 16:54:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.555 16:54:25 accel -- common/autotest_common.sh@10 -- # set +x 00:05:25.555 ************************************ 00:05:25.555 START TEST accel_copy 00:05:25.555 ************************************ 00:05:25.555 16:54:25 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:05:25.555 16:54:25 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:25.555 16:54:25 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
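Every accel test in this stretch of the log comes down to the same call into the accel_perf example binary; only the -w workload and its workload-specific options change between sections. Below is a minimal sketch of those invocations, with the binary path and flags copied from the command lines visible in the trace; the flag comments are plausible readings, not taken from accel_perf's own help output.

    # Binary path and flags exactly as they appear in the trace; flag meanings are assumptions.
    ACCEL_PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf

    # crc32c for 1 second (-t 1) with verification (-y) and two source vectors (-C 2, assumed)
    "$ACCEL_PERF" -c /dev/fd/62 -t 1 -w crc32c -y -C 2

    # plain copy, same duration and verification settings
    "$ACCEL_PERF" -c /dev/fd/62 -t 1 -w copy -y

    # fill with pattern/queue/allocation options (-f 128 -q 64 -a 64, meanings assumed)
    "$ACCEL_PERF" -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y

The -c /dev/fd/62 argument points at a JSON accel configuration that build_accel_config assembles in memory (the accel_json_cfg array run through jq -r . in the trace), which is why the config is passed as a file descriptor rather than as a file on disk.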
00:05:25.555 16:54:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:25.555 16:54:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:25.555 16:54:25 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:25.555 16:54:25 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:25.555 16:54:25 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:25.555 16:54:25 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:25.555 16:54:25 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:25.555 16:54:25 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:25.555 16:54:25 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:25.555 16:54:25 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:25.555 16:54:25 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:25.555 16:54:25 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:25.555 [2024-07-12 16:54:25.087223] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:05:25.555 [2024-07-12 16:54:25.087286] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1010851 ] 00:05:25.555 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.555 [2024-07-12 16:54:25.147528] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.813 [2024-07-12 16:54:25.254867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.813 16:54:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:25.813 16:54:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:25.813 16:54:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:25.813 16:54:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:25.813 16:54:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:25.813 16:54:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:25.813 16:54:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:25.813 16:54:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:25.813 16:54:25 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:25.813 16:54:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:25.813 16:54:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:25.813 16:54:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:25.813 16:54:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:25.813 16:54:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:25.813 16:54:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:25.813 16:54:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:25.813 16:54:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:25.813 16:54:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:25.813 16:54:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:25.813 16:54:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:25.813 16:54:25 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:25.813 16:54:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:25.813 16:54:25 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:25.813 16:54:25 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:05:25.813 16:54:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:25.813 16:54:25 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:25.813 16:54:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:25.813 16:54:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:25.813 16:54:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:25.813 16:54:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:25.813 16:54:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:25.813 16:54:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:25.813 16:54:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:25.813 16:54:25 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:25.813 16:54:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:25.813 16:54:25 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:25.813 16:54:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:25.813 16:54:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:25.813 16:54:25 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:25.813 16:54:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:25.813 16:54:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:25.813 16:54:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:25.813 16:54:25 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:25.813 16:54:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:25.813 16:54:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:25.813 16:54:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:25.813 16:54:25 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:25.813 16:54:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:25.813 16:54:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:25.813 16:54:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:25.813 16:54:25 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:25.813 16:54:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:25.813 16:54:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:25.814 16:54:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:25.814 16:54:25 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:25.814 16:54:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:25.814 16:54:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:25.814 16:54:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:25.814 16:54:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:25.814 16:54:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:25.814 16:54:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:25.814 16:54:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:25.814 16:54:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:25.814 16:54:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:25.814 16:54:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:25.814 16:54:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:27.187 16:54:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:27.187 16:54:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:27.187 16:54:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:27.187 16:54:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:27.187 
16:54:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:27.187 16:54:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:27.187 16:54:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:27.187 16:54:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:27.187 16:54:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:27.187 16:54:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:27.187 16:54:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:27.187 16:54:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:27.187 16:54:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:27.187 16:54:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:27.187 16:54:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:27.187 16:54:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:27.187 16:54:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:27.187 16:54:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:27.187 16:54:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:27.187 16:54:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:27.187 16:54:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:27.187 16:54:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:27.187 16:54:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:27.187 16:54:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:27.187 16:54:26 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:27.187 16:54:26 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:27.187 16:54:26 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:27.187 00:05:27.187 real 0m1.436s 00:05:27.187 user 0m1.299s 00:05:27.187 sys 0m0.138s 00:05:27.187 16:54:26 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.187 16:54:26 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:27.187 ************************************ 00:05:27.187 END TEST accel_copy 00:05:27.187 ************************************ 00:05:27.187 16:54:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:27.187 16:54:26 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:27.187 16:54:26 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:27.187 16:54:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.187 16:54:26 accel -- common/autotest_common.sh@10 -- # set +x 00:05:27.187 ************************************ 00:05:27.187 START TEST accel_fill 00:05:27.187 ************************************ 00:05:27.187 16:54:26 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:27.187 [2024-07-12 16:54:26.572877] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:05:27.187 [2024-07-12 16:54:26.572942] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1011117 ] 00:05:27.187 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.187 [2024-07-12 16:54:26.629951] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.187 [2024-07-12 16:54:26.736603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
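Most of the volume in each section, the repeated IFS=:, read -r var val, case "$var" in and val=... entries from accel/accel.sh@19-23, is one parsing loop: the harness walks accel_perf's per-run summary line by line, splits each line on ':' and remembers the opcode and module that were reported. A rough reconstruction follows, reusing the hypothetical $ACCEL_PERF and $workload names from the sketch above; the real accel.sh almost certainly differs in detail.

    # Reconstruction of the var/val loop; not the actual accel.sh implementation.
    accel_opc=""
    accel_module=""
    while IFS=: read -r var val; do
        case "$var" in
            *opcode*) accel_opc=${val//[[:space:]]/} ;;    # e.g. crc32c, copy, fill, dualcast
            *module*) accel_module=${val//[[:space:]]/} ;; # "software" in every run shown here
        esac
    done < <("$ACCEL_PERF" -c /dev/fd/62 -t 1 -w "$workload" -y)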
00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:27.187 16:54:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:28.561 16:54:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:28.561 16:54:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:28.561 16:54:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:28.561 16:54:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:28.561 16:54:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:28.561 16:54:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:28.561 16:54:27 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:05:28.561 16:54:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:28.561 16:54:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:28.561 16:54:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:28.561 16:54:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:28.561 16:54:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:28.561 16:54:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:28.561 16:54:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:28.561 16:54:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:28.561 16:54:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:28.561 16:54:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:28.561 16:54:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:28.561 16:54:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:28.561 16:54:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:28.561 16:54:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:28.561 16:54:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:28.561 16:54:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:28.561 16:54:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:28.561 16:54:27 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:28.561 16:54:27 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:28.561 16:54:27 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:28.561 00:05:28.561 real 0m1.436s 00:05:28.561 user 0m1.305s 00:05:28.561 sys 0m0.133s 00:05:28.561 16:54:27 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.561 16:54:27 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:28.561 ************************************ 00:05:28.561 END TEST accel_fill 00:05:28.561 ************************************ 00:05:28.561 16:54:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:28.561 16:54:28 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:28.561 16:54:28 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:28.561 16:54:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.561 16:54:28 accel -- common/autotest_common.sh@10 -- # set +x 00:05:28.561 ************************************ 00:05:28.561 START TEST accel_copy_crc32c 00:05:28.561 ************************************ 00:05:28.561 16:54:28 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:05:28.561 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:28.561 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:28.561 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:28.561 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:28.561 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:28.561 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:28.561 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:28.561 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:28.561 16:54:28 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:28.561 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:28.561 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:28.561 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:28.561 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:28.561 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:28.561 [2024-07-12 16:54:28.056768] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:05:28.561 [2024-07-12 16:54:28.056836] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1011281 ] 00:05:28.561 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.561 [2024-07-12 16:54:28.116208] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.561 [2024-07-12 16:54:28.219332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.820 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:28.820 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:28.820 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:28.820 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:28.820 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:28.820 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:28.820 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:28.820 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:28.820 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:28.820 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:28.820 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:28.820 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:28.820 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:28.820 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:28.820 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:28.820 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:28.820 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:28.820 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:28.820 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:28.820 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:28.820 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:28.820 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:28.820 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:28.820 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:28.820 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:28.820 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:28.820 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:28.820 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:28.820 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:05:28.820 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:28.820 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:28.820 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:28.820 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:28.820 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:28.820 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:28.820 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:28.820 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:28.820 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:28.821 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:28.821 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:28.821 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:28.821 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:28.821 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:28.821 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:28.821 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:28.821 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:28.821 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:28.821 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:28.821 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:28.821 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:28.821 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:28.821 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:28.821 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:28.821 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:28.821 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:28.821 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:28.821 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:28.821 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:28.821 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:28.821 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:28.821 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:28.821 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:28.821 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:28.821 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:28.821 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:28.821 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:28.821 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:28.821 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:28.821 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:28.821 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:28.821 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:28.821 
16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:28.821 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:28.821 16:54:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:29.826 16:54:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:29.826 16:54:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:29.826 16:54:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:29.826 16:54:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:29.826 16:54:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:29.826 16:54:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:29.826 16:54:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:29.826 16:54:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:29.826 16:54:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:29.827 16:54:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:29.827 16:54:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:29.827 16:54:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:29.827 16:54:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:29.827 16:54:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:29.827 16:54:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:29.827 16:54:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:29.827 16:54:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:29.827 16:54:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:29.827 16:54:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:29.827 16:54:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:29.827 16:54:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:29.827 16:54:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:29.827 16:54:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:29.827 16:54:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:29.827 16:54:29 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:29.827 16:54:29 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:29.827 16:54:29 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:29.827 00:05:29.827 real 0m1.429s 00:05:29.827 user 0m1.294s 00:05:29.827 sys 0m0.136s 00:05:29.827 16:54:29 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.827 16:54:29 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:29.827 ************************************ 00:05:29.827 END TEST accel_copy_crc32c 00:05:29.827 ************************************ 00:05:29.827 16:54:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:29.827 16:54:29 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:29.827 16:54:29 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:29.827 16:54:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.827 16:54:29 accel -- common/autotest_common.sh@10 -- # set +x 00:05:29.827 ************************************ 00:05:29.827 START TEST accel_copy_crc32c_C2 00:05:29.827 ************************************ 00:05:29.827 16:54:29 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:29.827 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:29.827 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:29.827 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:29.827 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:29.827 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:29.827 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:29.827 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:29.827 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:29.827 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:29.827 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:29.827 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:29.827 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:29.827 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:29.827 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:30.086 [2024-07-12 16:54:29.531354] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:05:30.086 [2024-07-12 16:54:29.531417] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1011440 ] 00:05:30.086 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.086 [2024-07-12 16:54:29.588070] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.086 [2024-07-12 16:54:29.694528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:30.086 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:30.087 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.087 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:30.087 16:54:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:31.461 16:54:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:31.461 16:54:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.461 16:54:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:31.461 16:54:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:31.461 16:54:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:31.461 16:54:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.461 16:54:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:31.461 16:54:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:31.461 16:54:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:31.461 16:54:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.461 16:54:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:31.461 16:54:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:31.461 16:54:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:31.462 16:54:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.462 16:54:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:31.462 16:54:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:31.462 16:54:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:31.462 16:54:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.462 16:54:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:31.462 16:54:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:31.462 16:54:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:31.462 16:54:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.462 16:54:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:31.462 16:54:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
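After the loop, the pass/fail decision for a test is just the three [[ ... ]] checks that follow in the trace (accel/accel.sh@27): a module name was parsed, an opcode was parsed, and the module matches the expected backend, which is the software path in every run here since no hardware engine was configured (the empty [[ -n '' ]] test at accel/accel.sh@36). Sketched below with $expected_module as an assumed variable name.

    # Post-run checks as they appear in the trace; variable names are assumptions.
    [[ -n $accel_module ]]                     # a module name was parsed ("software")
    [[ -n $accel_opc ]]                        # the requested opcode was echoed back (copy_crc32c here)
    [[ $accel_module == "$expected_module" ]]  # defaults to software when no HW engine is set up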
00:05:31.462 16:54:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:31.462 16:54:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:31.462 16:54:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:31.462 00:05:31.462 real 0m1.436s 00:05:31.462 user 0m1.301s 00:05:31.462 sys 0m0.137s 00:05:31.462 16:54:30 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.462 16:54:30 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:31.462 ************************************ 00:05:31.462 END TEST accel_copy_crc32c_C2 00:05:31.462 ************************************ 00:05:31.462 16:54:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:31.462 16:54:30 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:31.462 16:54:30 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:31.462 16:54:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.462 16:54:30 accel -- common/autotest_common.sh@10 -- # set +x 00:05:31.462 ************************************ 00:05:31.462 START TEST accel_dualcast 00:05:31.462 ************************************ 00:05:31.462 16:54:30 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:05:31.462 16:54:30 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:31.462 16:54:30 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:31.462 16:54:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:31.462 16:54:30 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:31.462 16:54:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:31.462 16:54:30 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:31.462 16:54:30 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:31.462 16:54:30 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:31.462 16:54:30 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:31.462 16:54:30 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:31.462 16:54:30 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:31.462 16:54:30 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:31.462 16:54:30 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:31.462 16:54:30 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:05:31.462 [2024-07-12 16:54:31.015214] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
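Each section is bracketed by the run_test helper from common/autotest_common.sh, which prints the starred START TEST / END TEST banners and the real/user/sys timing summary (about 1.4 s per workload in this run). The following is a deliberately stripped-down, hypothetical version of that wrapper, shown only to explain where the banners and timings in the log come from; the real helper also manages xtrace and exit codes.

    # Hypothetical reduction of run_test; the real version lives in common/autotest_common.sh.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"            # e.g. accel_test -t 1 -w dualcast -y
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }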
00:05:31.462 [2024-07-12 16:54:31.015277] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1011714 ] 00:05:31.462 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.462 [2024-07-12 16:54:31.072138] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.721 [2024-07-12 16:54:31.178446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:31.721 16:54:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:33.096 16:54:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:33.096 16:54:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:33.096 16:54:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:33.096 16:54:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:33.096 16:54:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:33.096 16:54:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:33.096 16:54:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:33.096 16:54:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:33.096 16:54:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:33.096 16:54:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:33.096 16:54:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:33.096 16:54:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:33.096 16:54:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:33.096 16:54:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:33.096 16:54:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:33.096 16:54:32 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:33.096 16:54:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:33.096 16:54:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:33.096 16:54:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:33.096 16:54:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:33.096 16:54:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:33.096 16:54:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:33.096 16:54:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:33.096 16:54:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:33.096 16:54:32 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:33.096 16:54:32 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:33.096 16:54:32 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:33.096 00:05:33.096 real 0m1.437s 00:05:33.096 user 0m1.295s 00:05:33.096 sys 0m0.143s 00:05:33.096 16:54:32 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.096 16:54:32 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:33.096 ************************************ 00:05:33.096 END TEST accel_dualcast 00:05:33.096 ************************************ 00:05:33.096 16:54:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:33.096 16:54:32 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:33.096 16:54:32 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:33.096 16:54:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.096 16:54:32 accel -- common/autotest_common.sh@10 -- # set +x 00:05:33.096 ************************************ 00:05:33.096 START TEST accel_compare 00:05:33.096 ************************************ 00:05:33.096 16:54:32 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:33.096 [2024-07-12 16:54:32.504433] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
00:05:33.096 [2024-07-12 16:54:32.504508] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1011869 ] 00:05:33.096 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.096 [2024-07-12 16:54:32.562371] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.096 [2024-07-12 16:54:32.664181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:33.096 16:54:32 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:33.096 16:54:32 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:33.097 16:54:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:33.097 16:54:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:33.097 16:54:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:33.097 16:54:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:33.097 16:54:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:33.097 16:54:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:33.097 16:54:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:33.097 16:54:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:33.097 16:54:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:33.097 16:54:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:33.097 16:54:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:34.470 16:54:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:34.470 16:54:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:34.470 16:54:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:34.470 16:54:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:34.470 16:54:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:34.470 16:54:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:34.470 16:54:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:34.470 16:54:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:34.470 16:54:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:34.470 16:54:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:34.470 16:54:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:34.470 16:54:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:34.470 16:54:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:34.470 16:54:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:34.470 16:54:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:34.470 16:54:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:34.470 
16:54:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:34.470 16:54:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:34.470 16:54:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:34.470 16:54:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:34.470 16:54:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:34.470 16:54:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:34.470 16:54:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:34.470 16:54:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:34.470 16:54:33 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:34.470 16:54:33 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:34.470 16:54:33 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:34.470 00:05:34.470 real 0m1.431s 00:05:34.470 user 0m1.291s 00:05:34.470 sys 0m0.142s 00:05:34.471 16:54:33 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.471 16:54:33 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:34.471 ************************************ 00:05:34.471 END TEST accel_compare 00:05:34.471 ************************************ 00:05:34.471 16:54:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:34.471 16:54:33 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:34.471 16:54:33 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:34.471 16:54:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.471 16:54:33 accel -- common/autotest_common.sh@10 -- # set +x 00:05:34.471 ************************************ 00:05:34.471 START TEST accel_xor 00:05:34.471 ************************************ 00:05:34.471 16:54:33 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:05:34.471 16:54:33 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:34.471 16:54:33 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:34.471 16:54:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:34.471 16:54:33 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:34.471 16:54:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:34.471 16:54:33 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:34.471 16:54:33 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:34.471 16:54:33 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:34.471 16:54:33 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:34.471 16:54:33 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.471 16:54:33 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.471 16:54:33 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:34.471 16:54:33 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:34.471 16:54:33 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:34.471 [2024-07-12 16:54:33.980993] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
00:05:34.471 [2024-07-12 16:54:33.981064] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1012031 ] 00:05:34.471 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.471 [2024-07-12 16:54:34.038115] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.471 [2024-07-12 16:54:34.142904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:34.729 16:54:34 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:34.729 16:54:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.104 16:54:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:36.104 16:54:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.104 16:54:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.104 16:54:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.104 16:54:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:36.104 16:54:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.104 16:54:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.104 16:54:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.104 16:54:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:36.104 16:54:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.104 16:54:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.104 16:54:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.104 16:54:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:36.104 16:54:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.104 16:54:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.104 16:54:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.104 16:54:35 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:05:36.104 16:54:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.104 16:54:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.104 16:54:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.104 16:54:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:36.104 16:54:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.104 16:54:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.104 16:54:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.104 16:54:35 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:36.104 16:54:35 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:36.104 16:54:35 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:36.104 00:05:36.104 real 0m1.430s 00:05:36.104 user 0m1.295s 00:05:36.104 sys 0m0.135s 00:05:36.104 16:54:35 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.104 16:54:35 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:36.104 ************************************ 00:05:36.104 END TEST accel_xor 00:05:36.104 ************************************ 00:05:36.104 16:54:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:36.104 16:54:35 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:36.104 16:54:35 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:36.104 16:54:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.104 16:54:35 accel -- common/autotest_common.sh@10 -- # set +x 00:05:36.104 ************************************ 00:05:36.104 START TEST accel_xor 00:05:36.104 ************************************ 00:05:36.104 16:54:35 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:05:36.104 16:54:35 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:36.104 16:54:35 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:36.104 16:54:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.104 16:54:35 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:36.104 16:54:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.104 16:54:35 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:36.104 16:54:35 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:36.104 16:54:35 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:36.104 16:54:35 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:36.104 16:54:35 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:36.104 16:54:35 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:36.104 16:54:35 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:36.104 16:54:35 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:36.104 16:54:35 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:36.104 [2024-07-12 16:54:35.460617] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
00:05:36.104 [2024-07-12 16:54:35.460684] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1012291 ] 00:05:36.104 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.104 [2024-07-12 16:54:35.518422] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.104 [2024-07-12 16:54:35.623235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.104 16:54:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:36.104 16:54:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.104 16:54:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.104 16:54:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.104 16:54:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:36.104 16:54:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.104 16:54:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.105 16:54:35 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.105 16:54:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.478 16:54:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.478 16:54:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.478 16:54:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.478 16:54:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.478 16:54:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.478 16:54:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.478 16:54:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.478 16:54:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.478 16:54:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.478 16:54:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.478 16:54:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.478 16:54:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.478 16:54:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.478 16:54:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.478 16:54:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.478 16:54:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.478 16:54:36 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:05:37.478 16:54:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.478 16:54:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.478 16:54:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.478 16:54:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.478 16:54:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.478 16:54:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.478 16:54:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.478 16:54:36 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:37.478 16:54:36 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:37.478 16:54:36 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:37.478 00:05:37.478 real 0m1.436s 00:05:37.478 user 0m1.300s 00:05:37.478 sys 0m0.137s 00:05:37.479 16:54:36 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.479 16:54:36 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:37.479 ************************************ 00:05:37.479 END TEST accel_xor 00:05:37.479 ************************************ 00:05:37.479 16:54:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:37.479 16:54:36 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:37.479 16:54:36 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:37.479 16:54:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.479 16:54:36 accel -- common/autotest_common.sh@10 -- # set +x 00:05:37.479 ************************************ 00:05:37.479 START TEST accel_dif_verify 00:05:37.479 ************************************ 00:05:37.479 16:54:36 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:05:37.479 16:54:36 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:37.479 16:54:36 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:37.479 16:54:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:37.479 16:54:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:37.479 16:54:36 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:37.479 16:54:36 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:37.479 16:54:36 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:37.479 16:54:36 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:37.479 16:54:36 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:37.479 16:54:36 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.479 16:54:36 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.479 16:54:36 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:37.479 16:54:36 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:37.479 16:54:36 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:37.479 [2024-07-12 16:54:36.945366] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
00:05:37.479 [2024-07-12 16:54:36.945429] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1012460 ] 00:05:37.479 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.479 [2024-07-12 16:54:37.004866] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.479 [2024-07-12 16:54:37.110394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:37.479 16:54:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:38.850 16:54:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:05:38.850 16:54:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:38.850 16:54:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:38.850 16:54:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:38.850 16:54:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:38.850 16:54:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:38.850 16:54:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:38.850 16:54:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:38.850 16:54:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:38.850 16:54:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:38.850 16:54:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:38.850 16:54:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:38.850 16:54:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:38.850 16:54:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:38.850 16:54:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:38.850 16:54:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:38.850 16:54:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:38.850 16:54:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:38.850 16:54:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:38.850 16:54:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:38.850 16:54:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:38.850 16:54:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:38.850 16:54:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:38.850 16:54:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:38.850 16:54:38 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:38.850 16:54:38 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:38.850 16:54:38 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:38.850 00:05:38.850 real 0m1.419s 00:05:38.850 user 0m1.289s 00:05:38.850 sys 0m0.133s 00:05:38.850 16:54:38 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.850 16:54:38 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:38.850 ************************************ 00:05:38.850 END TEST accel_dif_verify 00:05:38.850 ************************************ 00:05:38.850 16:54:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:38.850 16:54:38 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:38.850 16:54:38 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:38.850 16:54:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.850 16:54:38 accel -- common/autotest_common.sh@10 -- # set +x 00:05:38.850 ************************************ 00:05:38.850 START TEST accel_dif_generate 00:05:38.850 ************************************ 00:05:38.850 16:54:38 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:05:38.850 16:54:38 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:38.850 16:54:38 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:38.850 16:54:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:38.850 
16:54:38 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:38.850 16:54:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:38.850 16:54:38 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:38.850 16:54:38 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:38.850 16:54:38 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:38.850 16:54:38 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:38.850 16:54:38 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:38.850 16:54:38 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:38.850 16:54:38 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:38.850 16:54:38 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:38.850 16:54:38 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:38.850 [2024-07-12 16:54:38.414851] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:05:38.850 [2024-07-12 16:54:38.414913] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1012616 ] 00:05:38.850 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.851 [2024-07-12 16:54:38.473356] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.109 [2024-07-12 16:54:38.577904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:39.109 16:54:38 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:39.109 16:54:38 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:39.109 16:54:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:40.482 16:54:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:40.482 16:54:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:40.482 16:54:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:40.482 16:54:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:40.482 16:54:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:40.482 16:54:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:40.482 16:54:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:40.482 16:54:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:40.482 16:54:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:40.482 16:54:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:40.482 16:54:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:40.482 16:54:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:40.482 16:54:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:40.482 16:54:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:40.482 16:54:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:40.482 16:54:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:40.482 16:54:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:40.482 16:54:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:40.482 16:54:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:40.482 16:54:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:40.482 16:54:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:40.482 16:54:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:40.482 16:54:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:40.482 16:54:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:40.482 16:54:39 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:40.482 16:54:39 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:40.482 16:54:39 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:40.482 00:05:40.482 real 0m1.427s 00:05:40.482 user 0m1.293s 00:05:40.482 sys 0m0.138s 00:05:40.482 16:54:39 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.482 16:54:39 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:05:40.482 ************************************ 00:05:40.482 END TEST accel_dif_generate 00:05:40.482 ************************************ 00:05:40.482 16:54:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:40.482 16:54:39 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:40.482 16:54:39 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:40.482 16:54:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.482 16:54:39 accel -- common/autotest_common.sh@10 -- # set +x 00:05:40.482 ************************************ 00:05:40.482 START TEST accel_dif_generate_copy 00:05:40.482 ************************************ 00:05:40.482 16:54:39 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:05:40.482 16:54:39 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:40.482 16:54:39 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:05:40.482 16:54:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:40.482 16:54:39 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:40.482 16:54:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:40.482 16:54:39 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:40.482 16:54:39 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:40.482 16:54:39 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:40.482 16:54:39 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:40.482 16:54:39 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.482 16:54:39 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.482 16:54:39 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:40.482 16:54:39 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:40.482 16:54:39 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:05:40.482 [2024-07-12 16:54:39.891751] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
00:05:40.482 [2024-07-12 16:54:39.891826] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1012781 ] 00:05:40.482 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.483 [2024-07-12 16:54:39.949977] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.483 [2024-07-12 16:54:40.067071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:40.483 16:54:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:41.855 16:54:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:41.855 16:54:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:41.855 16:54:41 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:05:41.855 16:54:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:41.855 16:54:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:41.855 16:54:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:41.855 16:54:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:41.855 16:54:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:41.855 16:54:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:41.855 16:54:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:41.855 16:54:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:41.855 16:54:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:41.855 16:54:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:41.855 16:54:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:41.855 16:54:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:41.855 16:54:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:41.855 16:54:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:41.855 16:54:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:41.855 16:54:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:41.855 16:54:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:41.855 16:54:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:41.855 16:54:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:41.855 16:54:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:41.855 16:54:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:41.855 16:54:41 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:41.855 16:54:41 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:41.855 16:54:41 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:41.855 00:05:41.855 real 0m1.449s 00:05:41.855 user 0m1.315s 00:05:41.855 sys 0m0.136s 00:05:41.855 16:54:41 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.855 16:54:41 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:05:41.855 ************************************ 00:05:41.855 END TEST accel_dif_generate_copy 00:05:41.855 ************************************ 00:05:41.855 16:54:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:41.855 16:54:41 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:41.855 16:54:41 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:41.855 16:54:41 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:41.855 16:54:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.855 16:54:41 accel -- common/autotest_common.sh@10 -- # set +x 00:05:41.855 ************************************ 00:05:41.855 START TEST accel_comp 00:05:41.855 ************************************ 00:05:41.855 16:54:41 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:41.855 16:54:41 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:05:41.855 16:54:41 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:05:41.855 16:54:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:41.855 16:54:41 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:41.855 16:54:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:41.856 16:54:41 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:41.856 16:54:41 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:05:41.856 16:54:41 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:41.856 16:54:41 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:41.856 16:54:41 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.856 16:54:41 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.856 16:54:41 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:41.856 16:54:41 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:05:41.856 16:54:41 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:05:41.856 [2024-07-12 16:54:41.391211] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:05:41.856 [2024-07-12 16:54:41.391275] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1013046 ] 00:05:41.856 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.856 [2024-07-12 16:54:41.449227] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.114 [2024-07-12 16:54:41.555769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:42.114 16:54:41 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:42.114 16:54:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:43.487 16:54:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:43.487 16:54:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:43.487 16:54:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:43.487 16:54:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:43.487 16:54:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:43.487 16:54:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:43.487 16:54:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:43.487 16:54:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:43.487 16:54:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:43.487 16:54:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:43.487 16:54:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:43.487 16:54:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:43.487 16:54:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:43.487 16:54:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:43.487 16:54:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:43.487 16:54:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:43.487 16:54:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:43.487 16:54:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:43.487 16:54:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:43.487 16:54:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:43.487 16:54:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:43.487 16:54:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:43.487 16:54:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:43.487 16:54:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:43.487 16:54:42 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:43.487 16:54:42 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:43.487 16:54:42 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:43.487 00:05:43.487 real 0m1.443s 00:05:43.487 user 0m1.309s 00:05:43.487 sys 0m0.137s 00:05:43.487 16:54:42 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.487 16:54:42 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:05:43.487 ************************************ 00:05:43.487 END TEST accel_comp 00:05:43.487 ************************************ 00:05:43.487 16:54:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:43.487 16:54:42 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:43.487 16:54:42 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:43.487 16:54:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.487 16:54:42 accel -- 
common/autotest_common.sh@10 -- # set +x 00:05:43.487 ************************************ 00:05:43.487 START TEST accel_decomp 00:05:43.487 ************************************ 00:05:43.487 16:54:42 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:43.487 16:54:42 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:05:43.487 16:54:42 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:05:43.487 16:54:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:43.487 16:54:42 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:43.487 16:54:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:43.487 16:54:42 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:43.487 16:54:42 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:05:43.487 16:54:42 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:43.487 16:54:42 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:43.487 16:54:42 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.487 16:54:42 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.487 16:54:42 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:43.487 16:54:42 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:05:43.487 16:54:42 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:05:43.487 [2024-07-12 16:54:42.883458] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
00:05:43.487 [2024-07-12 16:54:42.883520] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1013203 ] 00:05:43.487 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.487 [2024-07-12 16:54:42.940259] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.487 [2024-07-12 16:54:43.044447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:43.487 16:54:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:43.488 16:54:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:43.488 16:54:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:43.488 16:54:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:43.488 16:54:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:43.488 16:54:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:43.488 16:54:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:43.488 16:54:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:43.488 16:54:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:44.859 16:54:44 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:44.859 16:54:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:44.859 16:54:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:44.859 16:54:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:44.859 16:54:44 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:44.859 16:54:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:44.859 16:54:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:44.859 16:54:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:44.859 16:54:44 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:44.859 16:54:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:44.859 16:54:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:44.859 16:54:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:44.859 16:54:44 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:44.859 16:54:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:44.859 16:54:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:44.859 16:54:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:44.859 16:54:44 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:44.859 16:54:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:44.859 16:54:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:44.859 16:54:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:44.859 16:54:44 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:44.859 16:54:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:44.859 16:54:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:44.859 16:54:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:44.859 16:54:44 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:44.859 16:54:44 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:44.859 16:54:44 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:44.859 00:05:44.859 real 0m1.432s 00:05:44.859 user 0m1.309s 00:05:44.859 sys 0m0.125s 00:05:44.859 16:54:44 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.859 16:54:44 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:05:44.859 ************************************ 00:05:44.859 END TEST accel_decomp 00:05:44.859 ************************************ 00:05:44.859 16:54:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:44.859 16:54:44 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:44.859 16:54:44 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:44.859 16:54:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.859 16:54:44 accel -- common/autotest_common.sh@10 -- # set +x 00:05:44.859 ************************************ 00:05:44.859 START TEST accel_decomp_full 00:05:44.859 ************************************ 00:05:44.859 16:54:44 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:44.859 16:54:44 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:05:44.859 16:54:44 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:05:44.859 16:54:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:44.859 16:54:44 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:44.859 16:54:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:44.859 16:54:44 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:44.859 16:54:44 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:05:44.859 16:54:44 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:44.859 16:54:44 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:44.859 16:54:44 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.859 16:54:44 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.859 16:54:44 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:44.859 16:54:44 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:05:44.859 16:54:44 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:05:44.859 [2024-07-12 16:54:44.364019] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:05:44.859 [2024-07-12 16:54:44.364092] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1013366 ] 00:05:44.859 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.859 [2024-07-12 16:54:44.422453] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.859 [2024-07-12 16:54:44.529168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.117 16:54:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:45.117 16:54:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:45.117 16:54:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:45.117 16:54:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:45.117 16:54:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:45.117 16:54:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:45.117 16:54:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:45.118 16:54:44 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:45.118 16:54:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:46.492 16:54:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:46.492 16:54:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:46.492 16:54:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:46.492 16:54:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:46.492 16:54:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:46.492 16:54:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:46.492 16:54:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:46.492 16:54:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:46.492 16:54:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:46.492 16:54:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:46.492 16:54:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:46.492 16:54:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:46.492 16:54:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:46.492 16:54:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:46.492 16:54:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:46.492 16:54:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:46.492 16:54:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:46.492 16:54:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:46.492 16:54:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:46.492 16:54:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:46.492 16:54:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:46.492 16:54:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:46.492 16:54:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:46.492 16:54:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:46.492 16:54:45 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:46.492 16:54:45 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:46.492 16:54:45 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:46.492 00:05:46.492 real 0m1.447s 00:05:46.492 user 0m1.325s 00:05:46.492 sys 0m0.124s 00:05:46.492 16:54:45 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.492 16:54:45 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:05:46.492 ************************************ 00:05:46.492 END TEST accel_decomp_full 00:05:46.492 ************************************ 00:05:46.492 16:54:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:46.492 16:54:45 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:46.492 16:54:45 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:05:46.492 16:54:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.492 16:54:45 accel -- common/autotest_common.sh@10 -- # set +x 00:05:46.492 ************************************ 00:05:46.492 START TEST accel_decomp_mcore 00:05:46.492 ************************************ 00:05:46.492 16:54:45 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:46.492 16:54:45 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:46.493 16:54:45 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:46.493 16:54:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:46.493 16:54:45 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:46.493 16:54:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:46.493 16:54:45 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:46.493 16:54:45 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:46.493 16:54:45 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:46.493 16:54:45 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:46.493 16:54:45 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:46.493 16:54:45 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:46.493 16:54:45 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:46.493 16:54:45 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:46.493 16:54:45 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:46.493 [2024-07-12 16:54:45.856916] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
00:05:46.493 [2024-07-12 16:54:45.856978] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1013634 ] 00:05:46.493 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.493 [2024-07-12 16:54:45.915497] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:46.493 [2024-07-12 16:54:46.023226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.493 [2024-07-12 16:54:46.023290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:46.493 [2024-07-12 16:54:46.023356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:46.493 [2024-07-12 16:54:46.023359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:46.493 16:54:46 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:05:46.493 16:54:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:47.866 16:54:47 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:47.866 16:54:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:47.866 16:54:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:47.866 16:54:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:47.866 16:54:47 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:47.866 16:54:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:47.866 16:54:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:47.866 16:54:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:47.866 16:54:47 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:47.866 16:54:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:47.866 16:54:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:47.866 16:54:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:47.866 16:54:47 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:47.866 16:54:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:47.866 16:54:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:47.866 16:54:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:47.866 16:54:47 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:47.866 16:54:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:47.866 16:54:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:47.866 16:54:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:47.866 16:54:47 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:47.866 16:54:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:47.866 16:54:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:47.866 16:54:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:47.866 16:54:47 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:47.866 16:54:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:47.866 16:54:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:47.866 16:54:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:47.866 16:54:47 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:47.866 16:54:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:47.866 16:54:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:47.866 16:54:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:47.866 16:54:47 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:47.866 16:54:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:47.866 16:54:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:47.866 16:54:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:47.866 16:54:47 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:47.866 16:54:47 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:47.866 16:54:47 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:47.866 00:05:47.866 real 0m1.454s 00:05:47.866 user 0m4.758s 00:05:47.866 sys 0m0.153s 00:05:47.866 16:54:47 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.866 16:54:47 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:47.866 ************************************ 00:05:47.866 END TEST accel_decomp_mcore 00:05:47.866 ************************************ 00:05:47.866 16:54:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:47.866 16:54:47 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:47.866 16:54:47 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:47.866 16:54:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.866 16:54:47 accel -- common/autotest_common.sh@10 -- # set +x 00:05:47.866 ************************************ 00:05:47.866 START TEST accel_decomp_full_mcore 00:05:47.866 ************************************ 00:05:47.866 16:54:47 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:47.866 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:47.866 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:47.866 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:47.866 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:47.866 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:47.866 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:47.866 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:47.866 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:47.866 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:47.866 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.866 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.866 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:47.866 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:47.866 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:47.866 [2024-07-12 16:54:47.352842] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
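Note on the invocation traced above: the accel_perf command can be launched directly outside the harness. A minimal sketch, assuming the SPDK tree is built at the workspace path shown in this log; the harness-supplied "-c /dev/fd/62" JSON config is omitted here, and the flag comments reflect the values read back in the trace rather than a definitive option reference.

  # accel_decomp_full_mcore as a standalone command (sketch, not the harness itself):
  #   -t 1           run for 1 second            (trace reads back val='1 seconds')
  #   -w decompress  workload under test
  #   -l .../bib     compressed input file
  #   -y             verify the decompressed output
  #   -o 0           take the transfer size from the input (trace shows val='111250 bytes')
  #   -m 0xf         core mask, matching the four reactors started on cores 0-3
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w decompress \
      -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib \
      -y -o 0 -m 0xf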
00:05:47.866 [2024-07-12 16:54:47.352904] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1013794 ] 00:05:47.866 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.866 [2024-07-12 16:54:47.409626] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:47.866 [2024-07-12 16:54:47.514345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.866 [2024-07-12 16:54:47.514452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:47.866 [2024-07-12 16:54:47.514526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:47.866 [2024-07-12 16:54:47.514529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.125 16:54:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.499 16:54:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:49.499 16:54:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.499 16:54:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.499 16:54:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.499 16:54:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:49.499 16:54:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.499 16:54:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.499 16:54:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.499 16:54:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:49.499 16:54:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.499 16:54:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.499 16:54:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.499 16:54:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:49.499 16:54:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.499 16:54:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.499 16:54:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.499 16:54:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:49.499 16:54:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.499 16:54:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.499 16:54:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.499 16:54:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:49.499 16:54:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.499 16:54:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.499 16:54:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.499 16:54:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:49.499 16:54:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.499 16:54:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.499 16:54:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.499 16:54:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:49.499 16:54:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.499 16:54:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.499 16:54:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.499 16:54:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:49.499 16:54:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.499 16:54:48 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:05:49.499 16:54:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.499 16:54:48 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:49.499 16:54:48 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:49.499 16:54:48 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:49.499 00:05:49.499 real 0m1.445s 00:05:49.499 user 0m4.743s 00:05:49.499 sys 0m0.139s 00:05:49.499 16:54:48 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.499 16:54:48 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:49.499 ************************************ 00:05:49.499 END TEST accel_decomp_full_mcore 00:05:49.499 ************************************ 00:05:49.499 16:54:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:49.499 16:54:48 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:49.499 16:54:48 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:49.499 16:54:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.499 16:54:48 accel -- common/autotest_common.sh@10 -- # set +x 00:05:49.499 ************************************ 00:05:49.499 START TEST accel_decomp_mthread 00:05:49.499 ************************************ 00:05:49.499 16:54:48 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:49.499 16:54:48 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:49.499 16:54:48 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:49.499 16:54:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:49.499 16:54:48 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:49.499 16:54:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:49.499 16:54:48 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:49.499 16:54:48 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:49.499 16:54:48 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:49.499 16:54:48 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:49.499 16:54:48 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.499 16:54:48 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.499 16:54:48 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:49.499 16:54:48 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:49.499 16:54:48 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:05:49.499 [2024-07-12 16:54:48.846365] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
00:05:49.499 [2024-07-12 16:54:48.846428] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1013957 ] 00:05:49.499 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.500 [2024-07-12 16:54:48.902996] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.500 [2024-07-12 16:54:49.007529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:49.500 16:54:49 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:49.500 16:54:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:50.874 16:54:50 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:05:50.874 16:54:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:50.874 16:54:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:50.874 16:54:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:50.874 16:54:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:50.874 16:54:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:50.874 16:54:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:50.874 16:54:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:50.874 16:54:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:50.874 16:54:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:50.874 16:54:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:50.874 16:54:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:50.874 16:54:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:50.874 16:54:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:50.874 16:54:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:50.874 16:54:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:50.874 16:54:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:50.874 16:54:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:50.874 16:54:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:50.874 16:54:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:50.874 16:54:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:50.874 16:54:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:50.874 16:54:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:50.874 16:54:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:50.874 16:54:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:50.874 16:54:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:50.874 16:54:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:50.874 16:54:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:50.874 16:54:50 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:50.874 16:54:50 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:50.874 16:54:50 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:50.874 00:05:50.874 real 0m1.434s 00:05:50.874 user 0m1.313s 00:05:50.874 sys 0m0.123s 00:05:50.874 16:54:50 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.874 16:54:50 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:50.874 ************************************ 00:05:50.874 END TEST accel_decomp_mthread 00:05:50.874 ************************************ 00:05:50.874 16:54:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:50.874 16:54:50 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:50.874 16:54:50 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:50.874 16:54:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.874 16:54:50 accel -- 
common/autotest_common.sh@10 -- # set +x 00:05:50.874 ************************************ 00:05:50.874 START TEST accel_decomp_full_mthread 00:05:50.874 ************************************ 00:05:50.874 16:54:50 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:50.874 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:50.874 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:50.874 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:50.874 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:50.874 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:50.874 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:50.874 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:50.874 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:50.874 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:50.874 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.874 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.874 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:50.874 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:05:50.875 [2024-07-12 16:54:50.329311] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
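The mthread variants traced here drive the same binary but swap the wide core mask for a thread count: the EAL parameters line shows the single-core mask "-c 0x1" and the trace reads back val=2 for "-T 2". A minimal sketch under the same assumptions as the earlier one:

  # accel_decomp_full_mthread as a standalone command (sketch):
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w decompress \
      -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib \
      -y -o 0 -T 2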
00:05:50.875 [2024-07-12 16:54:50.329375] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1014232 ] 00:05:50.875 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.875 [2024-07-12 16:54:50.386708] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.875 [2024-07-12 16:54:50.490871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:50.875 16:54:50 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:50.875 16:54:50 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:50.875 16:54:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:52.244 16:54:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:52.244 16:54:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:52.244 16:54:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:52.244 16:54:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:52.244 16:54:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:52.244 16:54:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:52.244 16:54:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:52.244 16:54:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:52.244 16:54:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:52.244 16:54:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:52.244 16:54:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:52.244 16:54:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:52.244 16:54:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:52.244 16:54:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:52.244 16:54:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:52.244 16:54:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:52.244 16:54:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:52.244 16:54:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:52.244 16:54:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:52.244 16:54:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:52.244 16:54:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:52.244 16:54:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:52.244 16:54:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:52.244 16:54:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:52.244 16:54:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:52.244 16:54:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:52.244 16:54:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:52.244 16:54:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:52.244 16:54:51 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:52.244 16:54:51 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:52.244 16:54:51 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:52.244 00:05:52.244 real 0m1.463s 00:05:52.244 user 0m1.327s 00:05:52.244 sys 0m0.138s 00:05:52.244 16:54:51 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.244 16:54:51 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:52.244 ************************************ 00:05:52.244 END 
TEST accel_decomp_full_mthread 00:05:52.244 ************************************ 00:05:52.244 16:54:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:52.244 16:54:51 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:05:52.244 16:54:51 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:52.244 16:54:51 accel -- accel/accel.sh@137 -- # build_accel_config 00:05:52.244 16:54:51 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:52.244 16:54:51 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:52.244 16:54:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.244 16:54:51 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:52.244 16:54:51 accel -- common/autotest_common.sh@10 -- # set +x 00:05:52.244 16:54:51 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.244 16:54:51 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.244 16:54:51 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:52.244 16:54:51 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:52.244 16:54:51 accel -- accel/accel.sh@41 -- # jq -r . 00:05:52.244 ************************************ 00:05:52.244 START TEST accel_dif_functional_tests 00:05:52.244 ************************************ 00:05:52.244 16:54:51 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:52.244 [2024-07-12 16:54:51.861318] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:05:52.244 [2024-07-12 16:54:51.861384] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1014386 ] 00:05:52.244 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.244 [2024-07-12 16:54:51.917289] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:52.501 [2024-07-12 16:54:52.030441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.501 [2024-07-12 16:54:52.030554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.501 [2024-07-12 16:54:52.030561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.501 00:05:52.501 00:05:52.501 CUnit - A unit testing framework for C - Version 2.1-3 00:05:52.501 http://cunit.sourceforge.net/ 00:05:52.501 00:05:52.501 00:05:52.501 Suite: accel_dif 00:05:52.501 Test: verify: DIF generated, GUARD check ...passed 00:05:52.501 Test: verify: DIF generated, APPTAG check ...passed 00:05:52.501 Test: verify: DIF generated, REFTAG check ...passed 00:05:52.501 Test: verify: DIF not generated, GUARD check ...[2024-07-12 16:54:52.119227] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:52.501 passed 00:05:52.501 Test: verify: DIF not generated, APPTAG check ...[2024-07-12 16:54:52.119296] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:52.501 passed 00:05:52.501 Test: verify: DIF not generated, REFTAG check ...[2024-07-12 16:54:52.119326] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:52.501 passed 00:05:52.501 Test: verify: APPTAG correct, APPTAG check ...passed 00:05:52.501 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-12 
16:54:52.119385] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:05:52.501 passed 00:05:52.501 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:05:52.501 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:05:52.501 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:05:52.501 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-12 16:54:52.119517] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:05:52.501 passed 00:05:52.501 Test: verify copy: DIF generated, GUARD check ...passed 00:05:52.501 Test: verify copy: DIF generated, APPTAG check ...passed 00:05:52.501 Test: verify copy: DIF generated, REFTAG check ...passed 00:05:52.501 Test: verify copy: DIF not generated, GUARD check ...[2024-07-12 16:54:52.119702] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:52.501 passed 00:05:52.501 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-12 16:54:52.119766] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:52.501 passed 00:05:52.502 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-12 16:54:52.119805] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:52.502 passed 00:05:52.502 Test: generate copy: DIF generated, GUARD check ...passed 00:05:52.502 Test: generate copy: DIF generated, APTTAG check ...passed 00:05:52.502 Test: generate copy: DIF generated, REFTAG check ...passed 00:05:52.502 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:05:52.502 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:05:52.502 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:05:52.502 Test: generate copy: iovecs-len validate ...[2024-07-12 16:54:52.120067] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:05:52.502 passed 00:05:52.502 Test: generate copy: buffer alignment validate ...passed 00:05:52.502 00:05:52.502 Run Summary: Type Total Ran Passed Failed Inactive 00:05:52.502 suites 1 1 n/a 0 0 00:05:52.502 tests 26 26 26 0 0 00:05:52.502 asserts 115 115 115 0 n/a 00:05:52.502 00:05:52.502 Elapsed time = 0.003 seconds 00:05:52.759 00:05:52.759 real 0m0.530s 00:05:52.759 user 0m0.797s 00:05:52.759 sys 0m0.166s 00:05:52.759 16:54:52 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.759 16:54:52 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:05:52.759 ************************************ 00:05:52.759 END TEST accel_dif_functional_tests 00:05:52.759 ************************************ 00:05:52.759 16:54:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:52.759 00:05:52.759 real 0m32.460s 00:05:52.759 user 0m36.050s 00:05:52.759 sys 0m4.332s 00:05:52.759 16:54:52 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.759 16:54:52 accel -- common/autotest_common.sh@10 -- # set +x 00:05:52.759 ************************************ 00:05:52.759 END TEST accel 00:05:52.759 ************************************ 00:05:52.759 16:54:52 -- common/autotest_common.sh@1142 -- # return 0 00:05:52.759 16:54:52 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:52.759 16:54:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:52.759 16:54:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.759 16:54:52 -- common/autotest_common.sh@10 -- # set +x 00:05:52.759 ************************************ 00:05:52.759 START TEST accel_rpc 00:05:52.759 ************************************ 00:05:52.759 16:54:52 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:53.016 * Looking for test storage... 00:05:53.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:53.016 16:54:52 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:53.016 16:54:52 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1014515 00:05:53.016 16:54:52 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:05:53.016 16:54:52 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1014515 00:05:53.016 16:54:52 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 1014515 ']' 00:05:53.016 16:54:52 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.016 16:54:52 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.016 16:54:52 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.016 16:54:52 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.016 16:54:52 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.016 [2024-07-12 16:54:52.531932] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
00:05:53.016 [2024-07-12 16:54:52.532023] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1014515 ] 00:05:53.016 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.016 [2024-07-12 16:54:52.589813] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.016 [2024-07-12 16:54:52.695232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.272 16:54:52 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.272 16:54:52 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:53.272 16:54:52 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:05:53.272 16:54:52 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:05:53.272 16:54:52 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:05:53.272 16:54:52 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:05:53.272 16:54:52 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:05:53.272 16:54:52 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:53.272 16:54:52 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.272 16:54:52 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.272 ************************************ 00:05:53.272 START TEST accel_assign_opcode 00:05:53.272 ************************************ 00:05:53.272 16:54:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:05:53.272 16:54:52 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:05:53.272 16:54:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.272 16:54:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:53.272 [2024-07-12 16:54:52.763858] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:05:53.272 16:54:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:53.272 16:54:52 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:05:53.272 16:54:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.272 16:54:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:53.272 [2024-07-12 16:54:52.771871] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:05:53.272 16:54:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:53.272 16:54:52 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:05:53.272 16:54:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.272 16:54:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:53.528 16:54:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:53.528 16:54:53 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:05:53.528 16:54:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.528 16:54:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 
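The opcode-assignment sequence just traced maps to three plain RPC calls against the spdk_tgt started above with --wait-for-rpc; a minimal sketch of issuing them by hand, assuming the default /var/tmp/spdk.sock socket used in this run:

  # Assign the copy opcode before the framework initializes, then confirm the assignment:
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py accel_assign_opc -o copy -m software
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py accel_get_opc_assignments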
00:05:53.528 16:54:53 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:05:53.528 16:54:53 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:05:53.528 16:54:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:53.528 software 00:05:53.528 00:05:53.528 real 0m0.282s 00:05:53.528 user 0m0.040s 00:05:53.528 sys 0m0.006s 00:05:53.528 16:54:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.528 16:54:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:53.528 ************************************ 00:05:53.528 END TEST accel_assign_opcode 00:05:53.528 ************************************ 00:05:53.528 16:54:53 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:53.528 16:54:53 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1014515 00:05:53.528 16:54:53 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 1014515 ']' 00:05:53.528 16:54:53 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 1014515 00:05:53.528 16:54:53 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:05:53.528 16:54:53 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:53.528 16:54:53 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1014515 00:05:53.528 16:54:53 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:53.528 16:54:53 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:53.528 16:54:53 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1014515' 00:05:53.528 killing process with pid 1014515 00:05:53.528 16:54:53 accel_rpc -- common/autotest_common.sh@967 -- # kill 1014515 00:05:53.528 16:54:53 accel_rpc -- common/autotest_common.sh@972 -- # wait 1014515 00:05:54.094 00:05:54.094 real 0m1.100s 00:05:54.094 user 0m1.041s 00:05:54.094 sys 0m0.414s 00:05:54.094 16:54:53 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.094 16:54:53 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.094 ************************************ 00:05:54.094 END TEST accel_rpc 00:05:54.094 ************************************ 00:05:54.094 16:54:53 -- common/autotest_common.sh@1142 -- # return 0 00:05:54.094 16:54:53 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:54.094 16:54:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:54.094 16:54:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.094 16:54:53 -- common/autotest_common.sh@10 -- # set +x 00:05:54.094 ************************************ 00:05:54.094 START TEST app_cmdline 00:05:54.094 ************************************ 00:05:54.094 16:54:53 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:54.094 * Looking for test storage... 
00:05:54.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:54.094 16:54:53 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:54.094 16:54:53 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1014733 00:05:54.094 16:54:53 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:54.094 16:54:53 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1014733 00:05:54.094 16:54:53 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 1014733 ']' 00:05:54.094 16:54:53 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.094 16:54:53 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:54.094 16:54:53 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.094 16:54:53 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:54.094 16:54:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:54.094 [2024-07-12 16:54:53.684688] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:05:54.094 [2024-07-12 16:54:53.684803] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1014733 ] 00:05:54.094 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.094 [2024-07-12 16:54:53.742489] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.352 [2024-07-12 16:54:53.848763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.610 16:54:54 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.610 16:54:54 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:05:54.610 16:54:54 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:54.868 { 00:05:54.868 "version": "SPDK v24.09-pre git sha1 d4b4edb37", 00:05:54.868 "fields": { 00:05:54.868 "major": 24, 00:05:54.868 "minor": 9, 00:05:54.868 "patch": 0, 00:05:54.868 "suffix": "-pre", 00:05:54.868 "commit": "d4b4edb37" 00:05:54.868 } 00:05:54.868 } 00:05:54.868 16:54:54 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:54.868 16:54:54 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:54.868 16:54:54 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:54.868 16:54:54 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:54.868 16:54:54 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:54.868 16:54:54 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.868 16:54:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:54.868 16:54:54 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:54.868 16:54:54 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:54.868 16:54:54 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.868 16:54:54 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:54.868 16:54:54 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:54.868 16:54:54 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:54.868 16:54:54 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:05:54.868 16:54:54 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:54.868 16:54:54 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:54.868 16:54:54 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:54.868 16:54:54 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:54.868 16:54:54 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:54.868 16:54:54 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:54.868 16:54:54 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:54.868 16:54:54 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:54.868 16:54:54 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:54.868 16:54:54 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:55.146 request: 00:05:55.146 { 00:05:55.146 "method": "env_dpdk_get_mem_stats", 00:05:55.146 "req_id": 1 00:05:55.146 } 00:05:55.146 Got JSON-RPC error response 00:05:55.146 response: 00:05:55.146 { 00:05:55.146 "code": -32601, 00:05:55.146 "message": "Method not found" 00:05:55.146 } 00:05:55.146 16:54:54 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:05:55.146 16:54:54 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:55.146 16:54:54 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:55.146 16:54:54 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:55.146 16:54:54 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1014733 00:05:55.146 16:54:54 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 1014733 ']' 00:05:55.146 16:54:54 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 1014733 00:05:55.146 16:54:54 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:05:55.146 16:54:54 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:55.146 16:54:54 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1014733 00:05:55.146 16:54:54 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:55.146 16:54:54 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:55.146 16:54:54 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1014733' 00:05:55.146 killing process with pid 1014733 00:05:55.146 16:54:54 app_cmdline -- common/autotest_common.sh@967 -- # kill 1014733 00:05:55.146 16:54:54 app_cmdline -- common/autotest_common.sh@972 -- # wait 1014733 00:05:55.411 00:05:55.411 real 0m1.503s 00:05:55.411 user 0m1.828s 00:05:55.412 sys 0m0.448s 00:05:55.412 16:54:55 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
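The exchange just traced is the point of cmdline.sh: the target was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so any method outside that allow-list is rejected with JSON-RPC error -32601 ("Method not found"), which is exactly the request/response pair logged above. A minimal sketch of reproducing the same check by hand, not part of the captured run, with paths relative to an spdk checkout:

  # Hedged sketch: start spdk_tgt with the same RPC allow-list cmdline.sh uses.
  ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  # (wait for /var/tmp/spdk.sock to appear before issuing RPCs)

  # Allow-listed method succeeds and returns the version object seen in the trace.
  ./scripts/rpc.py spdk_get_version

  # Any other method fails with the -32601 "Method not found" error shown above.
  ./scripts/rpc.py env_dpdk_get_mem_stats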
00:05:55.412 16:54:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:55.412 ************************************ 00:05:55.412 END TEST app_cmdline 00:05:55.412 ************************************ 00:05:55.670 16:54:55 -- common/autotest_common.sh@1142 -- # return 0 00:05:55.670 16:54:55 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:55.670 16:54:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:55.670 16:54:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.670 16:54:55 -- common/autotest_common.sh@10 -- # set +x 00:05:55.670 ************************************ 00:05:55.670 START TEST version 00:05:55.670 ************************************ 00:05:55.670 16:54:55 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:55.670 * Looking for test storage... 00:05:55.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:55.670 16:54:55 version -- app/version.sh@17 -- # get_header_version major 00:05:55.670 16:54:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:55.670 16:54:55 version -- app/version.sh@14 -- # cut -f2 00:05:55.670 16:54:55 version -- app/version.sh@14 -- # tr -d '"' 00:05:55.670 16:54:55 version -- app/version.sh@17 -- # major=24 00:05:55.670 16:54:55 version -- app/version.sh@18 -- # get_header_version minor 00:05:55.670 16:54:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:55.670 16:54:55 version -- app/version.sh@14 -- # cut -f2 00:05:55.670 16:54:55 version -- app/version.sh@14 -- # tr -d '"' 00:05:55.670 16:54:55 version -- app/version.sh@18 -- # minor=9 00:05:55.670 16:54:55 version -- app/version.sh@19 -- # get_header_version patch 00:05:55.670 16:54:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:55.670 16:54:55 version -- app/version.sh@14 -- # cut -f2 00:05:55.670 16:54:55 version -- app/version.sh@14 -- # tr -d '"' 00:05:55.670 16:54:55 version -- app/version.sh@19 -- # patch=0 00:05:55.670 16:54:55 version -- app/version.sh@20 -- # get_header_version suffix 00:05:55.670 16:54:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:55.670 16:54:55 version -- app/version.sh@14 -- # cut -f2 00:05:55.670 16:54:55 version -- app/version.sh@14 -- # tr -d '"' 00:05:55.670 16:54:55 version -- app/version.sh@20 -- # suffix=-pre 00:05:55.670 16:54:55 version -- app/version.sh@22 -- # version=24.9 00:05:55.670 16:54:55 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:55.670 16:54:55 version -- app/version.sh@28 -- # version=24.9rc0 00:05:55.670 16:54:55 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:55.670 16:54:55 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:05:55.670 16:54:55 version -- app/version.sh@30 -- # py_version=24.9rc0 00:05:55.670 16:54:55 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:05:55.670 00:05:55.670 real 0m0.108s 00:05:55.670 user 0m0.059s 00:05:55.670 sys 0m0.072s 00:05:55.670 16:54:55 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.670 16:54:55 version -- common/autotest_common.sh@10 -- # set +x 00:05:55.670 ************************************ 00:05:55.670 END TEST version 00:05:55.670 ************************************ 00:05:55.670 16:54:55 -- common/autotest_common.sh@1142 -- # return 0 00:05:55.670 16:54:55 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:05:55.670 16:54:55 -- spdk/autotest.sh@198 -- # uname -s 00:05:55.670 16:54:55 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:05:55.670 16:54:55 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:05:55.670 16:54:55 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:05:55.670 16:54:55 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:05:55.670 16:54:55 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:55.670 16:54:55 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:55.670 16:54:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:55.670 16:54:55 -- common/autotest_common.sh@10 -- # set +x 00:05:55.670 16:54:55 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:55.670 16:54:55 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:05:55.670 16:54:55 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:05:55.670 16:54:55 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:05:55.670 16:54:55 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:05:55.670 16:54:55 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:05:55.670 16:54:55 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:55.670 16:54:55 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:55.670 16:54:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.670 16:54:55 -- common/autotest_common.sh@10 -- # set +x 00:05:55.670 ************************************ 00:05:55.670 START TEST nvmf_tcp 00:05:55.670 ************************************ 00:05:55.670 16:54:55 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:55.670 * Looking for test storage... 00:05:55.929 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:55.930 16:54:55 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:55.930 16:54:55 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:55.930 16:54:55 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:55.930 16:54:55 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:05:55.930 16:54:55 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:55.930 16:54:55 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:55.930 16:54:55 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:55.930 16:54:55 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:55.930 16:54:55 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:55.930 16:54:55 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:55.930 16:54:55 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:55.930 16:54:55 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:55.930 16:54:55 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:55.930 16:54:55 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:55.930 16:54:55 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:05:55.930 16:54:55 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:05:55.930 16:54:55 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:55.930 16:54:55 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:55.930 16:54:55 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:55.930 16:54:55 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:55.930 16:54:55 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:55.930 16:54:55 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:55.930 16:54:55 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:55.930 16:54:55 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:55.930 16:54:55 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.930 16:54:55 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.930 16:54:55 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.930 16:54:55 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:05:55.930 16:54:55 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.930 16:54:55 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:05:55.930 16:54:55 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:55.930 16:54:55 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:55.930 16:54:55 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:55.930 16:54:55 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:55.930 16:54:55 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:55.930 16:54:55 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:55.930 16:54:55 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:55.930 16:54:55 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:55.930 16:54:55 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:55.930 16:54:55 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:05:55.930 16:54:55 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:05:55.930 16:54:55 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:55.930 16:54:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:55.930 16:54:55 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:05:55.930 16:54:55 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:55.930 16:54:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:55.930 16:54:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.930 16:54:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:55.930 ************************************ 00:05:55.930 START TEST nvmf_example 00:05:55.930 ************************************ 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:55.930 * Looking for test storage... 
00:05:55.930 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:05:55.930 16:54:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:58.467 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:58.467 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:05:58.467 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:58.467 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:58.467 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:58.467 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:58.467 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:58.467 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:05:58.467 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:58.467 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:05:58.467 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:05:58.467 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:05:58.467 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:05:58.467 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:05:58.467 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:05:58.467 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:58.467 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:58.467 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:58.467 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:58.467 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:58.467 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:58.467 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:58.467 16:54:57 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:58.467 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:58.467 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:58.467 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:58.467 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:58.467 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:05:58.467 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:58.467 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:58.467 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:05:58.467 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:58.467 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:58.467 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:05:58.467 Found 0000:84:00.0 (0x8086 - 0x159b) 00:05:58.467 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:58.467 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:58.467 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:58.467 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:58.467 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:58.467 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:58.467 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:05:58.467 Found 0000:84:00.1 (0x8086 - 0x159b) 00:05:58.467 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:58.467 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:58.467 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:58.467 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:58.467 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:58.467 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:58.467 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:58.467 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:05:58.467 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:05:58.468 Found net devices under 
0000:84:00.0: cvl_0_0 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:05:58.468 Found net devices under 0000:84:00.1: cvl_0_1 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:05:58.468 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:58.468 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:05:58.468 00:05:58.468 --- 10.0.0.2 ping statistics --- 00:05:58.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:58.468 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:58.468 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:58.468 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:05:58.468 00:05:58.468 --- 10.0.0.1 ping statistics --- 00:05:58.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:58.468 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1016701 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1016701 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 1016701 ']' 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
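For readability, the nvmf_tcp_init sequence traced above amounts to a two-port loopback topology: the first e810 port (cvl_0_0) is moved into a network namespace and carries the target address 10.0.0.2, while the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. A condensed sketch of the same commands, assuming the cvl_0_* interface names this rig exposes:

  # Condensed from the nvmf_tcp_init trace above; names and addresses mirror this run.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1

  ip netns add cvl_0_0_ns_spdk                  # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # first port becomes the target NIC

  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side

  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Allow NVMe/TCP traffic to the default port, then sanity-ping both directions.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp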
00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.468 16:54:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:58.468 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.468 16:54:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.468 16:54:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:05:58.468 16:54:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:05:58.468 16:54:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:58.468 16:54:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:58.468 16:54:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:05:58.468 16:54:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.468 16:54:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:58.468 16:54:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.468 16:54:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:05:58.468 16:54:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.468 16:54:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:58.468 16:54:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.468 16:54:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:05:58.468 16:54:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:58.468 16:54:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.468 16:54:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:58.468 16:54:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.468 16:54:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:05:58.468 16:54:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:05:58.468 16:54:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.468 16:54:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:58.468 16:54:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.468 16:54:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:58.468 16:54:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.468 16:54:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:58.468 16:54:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.468 16:54:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:05:58.468 16:54:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:05:58.468 EAL: No free 2048 kB hugepages reported on node 1 
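The rpc_cmd calls traced above are the entire target-side setup for this example: one TCP transport, one 64 MiB / 512-byte-block malloc bdev, one subsystem with that bdev as namespace 1, and one listener on the target address. A sketch of the equivalent rpc.py invocations against the running example app (the test issues them through rpc_cmd; paths are relative to the spdk checkout):

  # Hedged sketch of the provisioning sequence traced above, as plain rpc.py calls.
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512          # creates Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # The load generator then runs 4 KiB random read/write at queue depth 64 for 10 s,
  # producing the IOPS/latency summary that follows in the log.
  ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'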
00:06:10.666 Initializing NVMe Controllers 00:06:10.666 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:10.666 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:10.666 Initialization complete. Launching workers. 00:06:10.666 ======================================================== 00:06:10.666 Latency(us) 00:06:10.666 Device Information : IOPS MiB/s Average min max 00:06:10.666 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14770.50 57.70 4335.35 862.30 16143.55 00:06:10.666 ======================================================== 00:06:10.666 Total : 14770.50 57.70 4335.35 862.30 16143.55 00:06:10.666 00:06:10.666 16:55:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:10.666 16:55:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:10.666 16:55:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:10.666 16:55:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:06:10.666 16:55:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:10.666 16:55:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:06:10.666 16:55:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:10.666 16:55:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:10.666 rmmod nvme_tcp 00:06:10.666 rmmod nvme_fabrics 00:06:10.666 rmmod nvme_keyring 00:06:10.666 16:55:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:10.666 16:55:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:06:10.666 16:55:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:06:10.666 16:55:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1016701 ']' 00:06:10.666 16:55:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1016701 00:06:10.666 16:55:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 1016701 ']' 00:06:10.666 16:55:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 1016701 00:06:10.666 16:55:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:06:10.666 16:55:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:10.666 16:55:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1016701 00:06:10.666 16:55:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:06:10.666 16:55:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:06:10.666 16:55:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1016701' 00:06:10.666 killing process with pid 1016701 00:06:10.666 16:55:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 1016701 00:06:10.666 16:55:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 1016701 00:06:10.666 nvmf threads initialize successfully 00:06:10.666 bdev subsystem init successfully 00:06:10.666 created a nvmf target service 00:06:10.666 create targets's poll groups done 00:06:10.666 all subsystems of target started 00:06:10.666 nvmf target is running 00:06:10.666 all subsystems of target stopped 00:06:10.666 destroy targets's poll groups done 00:06:10.666 destroyed the nvmf target service 00:06:10.666 bdev subsystem finish successfully 00:06:10.666 nvmf threads destroy successfully 00:06:10.666 16:55:08 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:10.666 16:55:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:10.666 16:55:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:10.666 16:55:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:10.666 16:55:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:10.666 16:55:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:10.666 16:55:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:10.666 16:55:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:11.229 16:55:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:11.229 16:55:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:11.229 16:55:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:11.229 16:55:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:11.229 00:06:11.229 real 0m15.408s 00:06:11.229 user 0m42.311s 00:06:11.229 sys 0m3.568s 00:06:11.230 16:55:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.230 16:55:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:11.230 ************************************ 00:06:11.230 END TEST nvmf_example 00:06:11.230 ************************************ 00:06:11.230 16:55:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:11.230 16:55:10 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:11.230 16:55:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:11.230 16:55:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.230 16:55:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:11.230 ************************************ 00:06:11.230 START TEST nvmf_filesystem 00:06:11.230 ************************************ 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:11.230 * Looking for test storage... 
00:06:11.230 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:11.230 16:55:10 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:11.230 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:11.491 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:11.491 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:11.491 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:11.491 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:11.491 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:11.491 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:11.491 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:11.491 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:11.491 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:11.491 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:11.491 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:11.491 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:11.491 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:11.491 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:11.491 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:11.491 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:11.491 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:11.491 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:11.491 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:11.491 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:11.491 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:11.491 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:11.491 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:11.491 16:55:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:11.491 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:11.491 16:55:10 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:11.491 16:55:10 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:11.491 16:55:10 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:11.491 16:55:10 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:11.491 16:55:10 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:11.491 16:55:10 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:11.491 16:55:10 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:11.491 16:55:10 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:11.491 16:55:10 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:11.491 16:55:10 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:11.491 16:55:10 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:11.491 16:55:10 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:11.491 16:55:10 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:11.491 16:55:10 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:06:11.491 16:55:10 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:11.491 #define SPDK_CONFIG_H 00:06:11.491 #define SPDK_CONFIG_APPS 1 00:06:11.491 #define SPDK_CONFIG_ARCH native 00:06:11.491 #undef SPDK_CONFIG_ASAN 00:06:11.491 #undef SPDK_CONFIG_AVAHI 00:06:11.491 #undef SPDK_CONFIG_CET 00:06:11.491 #define SPDK_CONFIG_COVERAGE 1 00:06:11.491 #define SPDK_CONFIG_CROSS_PREFIX 00:06:11.492 #undef SPDK_CONFIG_CRYPTO 00:06:11.492 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:11.492 #undef SPDK_CONFIG_CUSTOMOCF 00:06:11.492 #undef SPDK_CONFIG_DAOS 00:06:11.492 #define SPDK_CONFIG_DAOS_DIR 00:06:11.492 #define SPDK_CONFIG_DEBUG 1 00:06:11.492 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:11.492 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:11.492 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:11.492 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:11.492 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:11.492 #undef SPDK_CONFIG_DPDK_UADK 00:06:11.492 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:11.492 #define SPDK_CONFIG_EXAMPLES 1 00:06:11.492 #undef SPDK_CONFIG_FC 00:06:11.492 #define SPDK_CONFIG_FC_PATH 00:06:11.492 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:11.492 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:11.492 #undef SPDK_CONFIG_FUSE 00:06:11.492 #undef SPDK_CONFIG_FUZZER 00:06:11.492 #define SPDK_CONFIG_FUZZER_LIB 00:06:11.492 #undef SPDK_CONFIG_GOLANG 00:06:11.492 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:11.492 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:11.492 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:11.492 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:11.492 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:11.492 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:11.492 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:11.492 #define SPDK_CONFIG_IDXD 1 00:06:11.492 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:11.492 #undef SPDK_CONFIG_IPSEC_MB 00:06:11.492 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:11.492 #define SPDK_CONFIG_ISAL 1 00:06:11.492 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:11.492 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:11.492 #define SPDK_CONFIG_LIBDIR 00:06:11.492 #undef SPDK_CONFIG_LTO 00:06:11.492 #define SPDK_CONFIG_MAX_LCORES 128 00:06:11.492 #define SPDK_CONFIG_NVME_CUSE 1 00:06:11.492 #undef SPDK_CONFIG_OCF 00:06:11.492 #define SPDK_CONFIG_OCF_PATH 00:06:11.492 #define 
SPDK_CONFIG_OPENSSL_PATH 00:06:11.492 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:11.492 #define SPDK_CONFIG_PGO_DIR 00:06:11.492 #undef SPDK_CONFIG_PGO_USE 00:06:11.492 #define SPDK_CONFIG_PREFIX /usr/local 00:06:11.492 #undef SPDK_CONFIG_RAID5F 00:06:11.492 #undef SPDK_CONFIG_RBD 00:06:11.492 #define SPDK_CONFIG_RDMA 1 00:06:11.492 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:11.492 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:11.492 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:11.492 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:11.492 #define SPDK_CONFIG_SHARED 1 00:06:11.492 #undef SPDK_CONFIG_SMA 00:06:11.492 #define SPDK_CONFIG_TESTS 1 00:06:11.492 #undef SPDK_CONFIG_TSAN 00:06:11.492 #define SPDK_CONFIG_UBLK 1 00:06:11.492 #define SPDK_CONFIG_UBSAN 1 00:06:11.492 #undef SPDK_CONFIG_UNIT_TESTS 00:06:11.492 #undef SPDK_CONFIG_URING 00:06:11.492 #define SPDK_CONFIG_URING_PATH 00:06:11.492 #undef SPDK_CONFIG_URING_ZNS 00:06:11.492 #undef SPDK_CONFIG_USDT 00:06:11.492 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:11.492 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:11.492 #define SPDK_CONFIG_VFIO_USER 1 00:06:11.492 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:11.492 #define SPDK_CONFIG_VHOST 1 00:06:11.492 #define SPDK_CONFIG_VIRTIO 1 00:06:11.492 #undef SPDK_CONFIG_VTUNE 00:06:11.492 #define SPDK_CONFIG_VTUNE_DIR 00:06:11.492 #define SPDK_CONFIG_WERROR 1 00:06:11.492 #define SPDK_CONFIG_WPDK_DIR 00:06:11.492 #undef SPDK_CONFIG_XNVME 00:06:11.492 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:11.492 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:06:11.493 16:55:10 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:11.493 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 1018400 ]] 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 1018400 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.ULtsoZ 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.ULtsoZ/tests/target /tmp/spdk.ULtsoZ 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=949354496 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4335075328 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=39441817600 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=45083312128 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5641494528 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=22538280960 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=22541656064 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=9007878144 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=9016664064 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8785920 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=22541107200 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=22541656064 00:06:11.494 16:55:10 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=548864 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=4508323840 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=4508327936 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:06:11.494 * Looking for test storage... 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=39441817600 00:06:11.494 16:55:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:06:11.494 16:55:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:06:11.494 16:55:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:06:11.494 16:55:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:06:11.494 16:55:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:06:11.494 16:55:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=7856087040 00:06:11.494 16:55:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:11.494 16:55:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:11.494 16:55:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:11.494 16:55:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:11.494 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:11.494 16:55:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:06:11.495 16:55:11 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:11.495 16:55:11 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:06:11.495 16:55:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:14.025 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:14.025 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:06:14.025 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:14.025 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:14.025 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:14.025 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:14.025 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:06:14.026 Found 0000:84:00.0 (0x8086 - 0x159b) 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:06:14.026 Found 0000:84:00.1 (0x8086 - 0x159b) 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:14.026 16:55:13 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:06:14.026 Found net devices under 0000:84:00.0: cvl_0_0 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:06:14.026 Found net devices under 0000:84:00.1: cvl_0_1 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:14.026 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:14.026 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms 00:06:14.026 00:06:14.026 --- 10.0.0.2 ping statistics --- 00:06:14.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:14.026 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:14.026 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:14.026 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:06:14.026 00:06:14.026 --- 10.0.0.1 ping statistics --- 00:06:14.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:14.026 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:14.026 ************************************ 00:06:14.026 START TEST nvmf_filesystem_no_in_capsule 00:06:14.026 ************************************ 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1020047 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1020047 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 1020047 ']' 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.026 16:55:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:14.027 16:55:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:14.027 [2024-07-12 16:55:13.467374] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:06:14.027 [2024-07-12 16:55:13.467472] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:14.027 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.027 [2024-07-12 16:55:13.531251] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:14.027 [2024-07-12 16:55:13.634481] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:14.027 [2024-07-12 16:55:13.634545] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:14.027 [2024-07-12 16:55:13.634569] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:14.027 [2024-07-12 16:55:13.634580] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:14.027 [2024-07-12 16:55:13.634589] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
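The trace above shows the harness launching nvmf_tgt inside the cvl_0_0_ns_spdk namespace (nvmfappstart -m 0xF) and then blocking in waitforlisten until the target's RPC socket answers. A minimal stand-alone sketch of that launch-and-wait pattern follows; the binary path, socket path, and the rpc_get_methods probe are illustrative assumptions rather than the verbatim helper from nvmf/common.sh.

#!/usr/bin/env bash
# Hypothetical sketch: start the SPDK NVMe-oF target in the test namespace and
# wait for its RPC socket before issuing any rpc.py calls.
NS=cvl_0_0_ns_spdk
APP=./build/bin/nvmf_tgt            # assumed path inside the SPDK checkout
SOCK=/var/tmp/spdk.sock

ip netns exec "$NS" "$APP" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Poll the RPC socket; rpc_get_methods succeeds once the app is listening.
until ./scripts/rpc.py -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) is listening on $SOCK"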
00:06:14.027 [2024-07-12 16:55:13.634668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.027 [2024-07-12 16:55:13.634731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:14.027 [2024-07-12 16:55:13.634804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.027 [2024-07-12 16:55:13.634801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:14.285 16:55:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.285 16:55:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:06:14.285 16:55:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:14.285 16:55:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:14.285 16:55:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:14.285 16:55:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:14.285 16:55:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:14.285 16:55:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:14.285 16:55:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.285 16:55:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:14.285 [2024-07-12 16:55:13.799747] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:14.285 16:55:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.285 16:55:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:14.285 16:55:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.285 16:55:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:14.285 Malloc1 00:06:14.285 16:55:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.285 16:55:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:14.285 16:55:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.285 16:55:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:14.542 16:55:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.542 16:55:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:14.542 16:55:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.542 16:55:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:06:14.542 16:55:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.542 16:55:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:14.542 16:55:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.542 16:55:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:14.542 [2024-07-12 16:55:13.990935] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:14.542 16:55:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.542 16:55:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:14.542 16:55:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:06:14.542 16:55:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:06:14.542 16:55:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:06:14.542 16:55:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:06:14.542 16:55:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:14.542 16:55:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.542 16:55:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:14.542 16:55:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.542 16:55:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:06:14.542 { 00:06:14.542 "name": "Malloc1", 00:06:14.542 "aliases": [ 00:06:14.542 "f4b88a05-4a1b-42b9-9fac-a0d5037ab763" 00:06:14.542 ], 00:06:14.542 "product_name": "Malloc disk", 00:06:14.542 "block_size": 512, 00:06:14.542 "num_blocks": 1048576, 00:06:14.542 "uuid": "f4b88a05-4a1b-42b9-9fac-a0d5037ab763", 00:06:14.542 "assigned_rate_limits": { 00:06:14.542 "rw_ios_per_sec": 0, 00:06:14.542 "rw_mbytes_per_sec": 0, 00:06:14.542 "r_mbytes_per_sec": 0, 00:06:14.542 "w_mbytes_per_sec": 0 00:06:14.542 }, 00:06:14.542 "claimed": true, 00:06:14.542 "claim_type": "exclusive_write", 00:06:14.542 "zoned": false, 00:06:14.542 "supported_io_types": { 00:06:14.542 "read": true, 00:06:14.542 "write": true, 00:06:14.542 "unmap": true, 00:06:14.542 "flush": true, 00:06:14.542 "reset": true, 00:06:14.542 "nvme_admin": false, 00:06:14.542 "nvme_io": false, 00:06:14.542 "nvme_io_md": false, 00:06:14.542 "write_zeroes": true, 00:06:14.542 "zcopy": true, 00:06:14.542 "get_zone_info": false, 00:06:14.542 "zone_management": false, 00:06:14.542 "zone_append": false, 00:06:14.542 "compare": false, 00:06:14.542 "compare_and_write": false, 00:06:14.542 "abort": true, 00:06:14.542 "seek_hole": false, 00:06:14.542 "seek_data": false, 00:06:14.542 "copy": true, 00:06:14.542 "nvme_iov_md": false 00:06:14.542 }, 00:06:14.542 "memory_domains": [ 00:06:14.542 { 
00:06:14.542 "dma_device_id": "system", 00:06:14.542 "dma_device_type": 1 00:06:14.542 }, 00:06:14.542 { 00:06:14.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:14.542 "dma_device_type": 2 00:06:14.542 } 00:06:14.542 ], 00:06:14.542 "driver_specific": {} 00:06:14.542 } 00:06:14.542 ]' 00:06:14.542 16:55:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:06:14.542 16:55:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:06:14.542 16:55:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:06:14.542 16:55:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:06:14.542 16:55:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:06:14.542 16:55:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:06:14.542 16:55:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:14.542 16:55:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:15.107 16:55:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:15.107 16:55:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:06:15.107 16:55:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:15.107 16:55:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:15.107 16:55:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:17.628 16:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:17.628 16:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:17.628 16:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:17.629 16:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:17.629 16:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:17.629 16:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:06:17.629 16:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:17.629 16:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:17.629 16:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:17.629 16:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:06:17.629 16:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:17.629 16:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:17.629 16:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:17.629 16:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:17.629 16:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:17.629 16:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:17.629 16:55:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:17.629 16:55:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:18.562 16:55:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:19.495 16:55:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:19.495 16:55:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:19.495 16:55:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:19.495 16:55:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.495 16:55:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:19.495 ************************************ 00:06:19.495 START TEST filesystem_ext4 00:06:19.495 ************************************ 00:06:19.495 16:55:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:19.495 16:55:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:19.495 16:55:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:19.495 16:55:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:19.495 16:55:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:19.495 16:55:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:19.495 16:55:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:19.495 16:55:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:06:19.495 16:55:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:06:19.495 16:55:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:06:19.495 16:55:19 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:19.495 mke2fs 1.46.5 (30-Dec-2021) 00:06:19.752 Discarding device blocks: 0/522240 done 00:06:19.752 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:19.752 Filesystem UUID: a804018a-e20e-4884-bd7d-3b95a77fa244 00:06:19.752 Superblock backups stored on blocks: 00:06:19.752 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:19.752 00:06:19.752 Allocating group tables: 0/64 done 00:06:19.752 Writing inode tables: 0/64 done 00:06:20.267 Creating journal (8192 blocks): done 00:06:20.267 Writing superblocks and filesystem accounting information: 0/64 done 00:06:20.267 00:06:20.267 16:55:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:06:20.267 16:55:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:20.524 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:20.524 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:06:20.524 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:20.524 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:06:20.524 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:20.524 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:20.524 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1020047 00:06:20.524 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:20.524 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:20.524 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:20.524 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:20.524 00:06:20.524 real 0m0.998s 00:06:20.524 user 0m0.020s 00:06:20.524 sys 0m0.053s 00:06:20.524 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.524 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:20.524 ************************************ 00:06:20.524 END TEST filesystem_ext4 00:06:20.524 ************************************ 00:06:20.524 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:20.524 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:20.524 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:20.524 16:55:20 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.524 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:20.524 ************************************ 00:06:20.524 START TEST filesystem_btrfs 00:06:20.524 ************************************ 00:06:20.524 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:20.524 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:20.524 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:20.524 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:20.524 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:06:20.524 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:20.525 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:06:20.525 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:06:20.525 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:06:20.525 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:06:20.525 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:20.781 btrfs-progs v6.6.2 00:06:20.781 See https://btrfs.readthedocs.io for more information. 00:06:20.781 00:06:20.781 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:20.781 NOTE: several default settings have changed in version 5.15, please make sure 00:06:20.781 this does not affect your deployments: 00:06:20.781 - DUP for metadata (-m dup) 00:06:20.781 - enabled no-holes (-O no-holes) 00:06:20.781 - enabled free-space-tree (-R free-space-tree) 00:06:20.781 00:06:20.781 Label: (null) 00:06:20.781 UUID: 7942153e-9b15-47db-884d-727bbbd9f868 00:06:20.782 Node size: 16384 00:06:20.782 Sector size: 4096 00:06:20.782 Filesystem size: 510.00MiB 00:06:20.782 Block group profiles: 00:06:20.782 Data: single 8.00MiB 00:06:20.782 Metadata: DUP 32.00MiB 00:06:20.782 System: DUP 8.00MiB 00:06:20.782 SSD detected: yes 00:06:20.782 Zoned device: no 00:06:20.782 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:20.782 Runtime features: free-space-tree 00:06:20.782 Checksum: crc32c 00:06:20.782 Number of devices: 1 00:06:20.782 Devices: 00:06:20.782 ID SIZE PATH 00:06:20.782 1 510.00MiB /dev/nvme0n1p1 00:06:20.782 00:06:20.782 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:06:20.782 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:21.038 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:21.038 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:06:21.038 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:21.038 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:06:21.038 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:21.038 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:21.038 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1020047 00:06:21.038 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:21.038 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:21.038 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:21.038 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:21.038 00:06:21.038 real 0m0.501s 00:06:21.038 user 0m0.008s 00:06:21.038 sys 0m0.121s 00:06:21.038 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.038 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:21.038 ************************************ 00:06:21.038 END TEST filesystem_btrfs 00:06:21.038 ************************************ 00:06:21.038 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:21.038 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:21.038 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:21.038 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.038 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:21.038 ************************************ 00:06:21.038 START TEST filesystem_xfs 00:06:21.038 ************************************ 00:06:21.039 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:06:21.039 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:21.039 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:21.039 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:21.039 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:06:21.039 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:21.039 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:06:21.039 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:06:21.039 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:06:21.039 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:06:21.296 16:55:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:21.296 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:21.296 = sectsz=512 attr=2, projid32bit=1 00:06:21.296 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:21.296 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:21.296 data = bsize=4096 blocks=130560, imaxpct=25 00:06:21.296 = sunit=0 swidth=0 blks 00:06:21.296 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:21.296 log =internal log bsize=4096 blocks=16384, version=2 00:06:21.296 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:21.296 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:22.225 Discarding blocks...Done. 
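Each filesystem_* sub-test that follows repeats the same probe: make the filesystem on the exported namespace's first partition, mount it, write and remove a file, unmount, and confirm the target process survived the I/O. A condensed sketch of that loop is below; the device, mount point, and pid mirror the values visible in this trace, and it is an illustration of the steps rather than the verbatim target/filesystem.sh.

# Illustrative per-filesystem check, assuming the connected namespace shows up
# as /dev/nvme0n1 with a single SPDK_TEST partition.
fstype=xfs
dev=/dev/nvme0n1p1
mnt=/mnt/device
nvmfpid=1020047                      # pid printed by nvmfappstart in this run

mkfs."$fstype" -f "$dev"             # ext4 uses -F instead of -f
mount "$dev" "$mnt"
touch "$mnt/aaa"
sync
rm "$mnt/aaa"
sync
umount "$mnt"
kill -0 "$nvmfpid"                   # target must still be alive after the I/O
lsblk -l -o NAME | grep -q -w nvme0n1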
00:06:22.225 16:55:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:06:22.225 16:55:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:24.123 16:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:24.123 16:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:06:24.123 16:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:24.123 16:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:06:24.123 16:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:06:24.123 16:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:24.123 16:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1020047 00:06:24.123 16:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:24.123 16:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:24.123 16:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:24.123 16:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:24.123 00:06:24.123 real 0m2.623s 00:06:24.123 user 0m0.017s 00:06:24.123 sys 0m0.058s 00:06:24.123 16:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.123 16:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:24.123 ************************************ 00:06:24.123 END TEST filesystem_xfs 00:06:24.123 ************************************ 00:06:24.123 16:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:24.123 16:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:24.123 16:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:24.123 16:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:24.123 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:24.123 16:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:24.123 16:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:06:24.123 16:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:24.123 16:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:24.123 16:55:23 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:24.123 16:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:24.123 16:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:06:24.123 16:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:24.123 16:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.123 16:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:24.123 16:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.123 16:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:24.123 16:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1020047 00:06:24.123 16:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1020047 ']' 00:06:24.123 16:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1020047 00:06:24.123 16:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:06:24.123 16:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:24.123 16:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1020047 00:06:24.123 16:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:24.123 16:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:24.123 16:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1020047' 00:06:24.123 killing process with pid 1020047 00:06:24.123 16:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 1020047 00:06:24.123 16:55:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 1020047 00:06:24.689 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:24.689 00:06:24.689 real 0m10.815s 00:06:24.689 user 0m41.291s 00:06:24.689 sys 0m1.750s 00:06:24.689 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.689 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:24.689 ************************************ 00:06:24.689 END TEST nvmf_filesystem_no_in_capsule 00:06:24.689 ************************************ 00:06:24.689 16:55:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:06:24.689 16:55:24 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:24.689 16:55:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:06:24.689 16:55:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.689 16:55:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:24.689 ************************************ 00:06:24.689 START TEST nvmf_filesystem_in_capsule 00:06:24.689 ************************************ 00:06:24.689 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:06:24.689 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:24.689 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:24.689 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:24.689 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:24.689 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:24.689 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1021474 00:06:24.689 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:24.689 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1021474 00:06:24.689 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 1021474 ']' 00:06:24.689 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.689 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.689 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.689 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.690 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:24.690 [2024-07-12 16:55:24.338403] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:06:24.690 [2024-07-12 16:55:24.338473] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:24.690 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.947 [2024-07-12 16:55:24.403594] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:24.947 [2024-07-12 16:55:24.509333] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:24.947 [2024-07-12 16:55:24.509376] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
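The in-capsule variant starting here repeats the same subsystem bring-up as the run above; the only functional difference is that the TCP transport is created with a 4096-byte in-capsule data size instead of 0. Collected from the rpc_cmd calls in this trace, the sequence amounts to the following, shown as direct rpc.py invocations purely for readability (the harness goes through its rpc_cmd wrapper and the namespaced socket).

# Build the NVMe-oF/TCP subsystem with 4096-byte in-capsule data support.
RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"

$RPC nvmf_create_transport -t tcp -o -u 8192 -c 4096
$RPC bdev_malloc_create 512 512 -b Malloc1          # 512 MiB, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420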
00:06:24.947 [2024-07-12 16:55:24.509400] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:24.947 [2024-07-12 16:55:24.509410] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:24.947 [2024-07-12 16:55:24.509420] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:24.947 [2024-07-12 16:55:24.509511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.947 [2024-07-12 16:55:24.509577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.947 [2024-07-12 16:55:24.509633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:24.947 [2024-07-12 16:55:24.509636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.947 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:24.947 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:06:24.947 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:25.205 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:25.205 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:25.205 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:25.205 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:25.205 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:06:25.205 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.205 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:25.205 [2024-07-12 16:55:24.669604] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:25.205 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.205 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:25.205 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.205 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:25.205 Malloc1 00:06:25.205 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.205 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:25.205 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.205 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:25.205 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.205 16:55:24 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:25.205 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.205 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:25.205 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.205 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:25.205 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.205 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:25.205 [2024-07-12 16:55:24.854210] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:25.205 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.205 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:25.205 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:06:25.205 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:06:25.205 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:06:25.205 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:06:25.205 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:25.205 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.205 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:25.205 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.205 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:06:25.205 { 00:06:25.205 "name": "Malloc1", 00:06:25.205 "aliases": [ 00:06:25.205 "1e1ea13d-2e41-4ac5-828d-0a8bee96ebaa" 00:06:25.205 ], 00:06:25.205 "product_name": "Malloc disk", 00:06:25.205 "block_size": 512, 00:06:25.205 "num_blocks": 1048576, 00:06:25.205 "uuid": "1e1ea13d-2e41-4ac5-828d-0a8bee96ebaa", 00:06:25.205 "assigned_rate_limits": { 00:06:25.205 "rw_ios_per_sec": 0, 00:06:25.205 "rw_mbytes_per_sec": 0, 00:06:25.205 "r_mbytes_per_sec": 0, 00:06:25.205 "w_mbytes_per_sec": 0 00:06:25.205 }, 00:06:25.205 "claimed": true, 00:06:25.205 "claim_type": "exclusive_write", 00:06:25.205 "zoned": false, 00:06:25.205 "supported_io_types": { 00:06:25.205 "read": true, 00:06:25.205 "write": true, 00:06:25.206 "unmap": true, 00:06:25.206 "flush": true, 00:06:25.206 "reset": true, 00:06:25.206 "nvme_admin": false, 00:06:25.206 "nvme_io": false, 00:06:25.206 "nvme_io_md": false, 00:06:25.206 "write_zeroes": true, 00:06:25.206 "zcopy": true, 00:06:25.206 "get_zone_info": false, 00:06:25.206 "zone_management": false, 00:06:25.206 
"zone_append": false, 00:06:25.206 "compare": false, 00:06:25.206 "compare_and_write": false, 00:06:25.206 "abort": true, 00:06:25.206 "seek_hole": false, 00:06:25.206 "seek_data": false, 00:06:25.206 "copy": true, 00:06:25.206 "nvme_iov_md": false 00:06:25.206 }, 00:06:25.206 "memory_domains": [ 00:06:25.206 { 00:06:25.206 "dma_device_id": "system", 00:06:25.206 "dma_device_type": 1 00:06:25.206 }, 00:06:25.206 { 00:06:25.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:25.206 "dma_device_type": 2 00:06:25.206 } 00:06:25.206 ], 00:06:25.206 "driver_specific": {} 00:06:25.206 } 00:06:25.206 ]' 00:06:25.206 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:06:25.463 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:06:25.463 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:06:25.463 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:06:25.463 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:06:25.463 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:06:25.463 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:25.463 16:55:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:26.029 16:55:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:26.029 16:55:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:06:26.029 16:55:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:26.029 16:55:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:26.029 16:55:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:28.570 16:55:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:28.570 16:55:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:28.570 16:55:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:28.570 16:55:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:28.570 16:55:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:28.570 16:55:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:06:28.570 16:55:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:28.570 16:55:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:06:28.570 16:55:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:28.570 16:55:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:28.570 16:55:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:28.570 16:55:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:28.570 16:55:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:28.570 16:55:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:28.570 16:55:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:28.570 16:55:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:28.570 16:55:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:28.570 16:55:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:29.135 16:55:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:30.154 16:55:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:30.154 16:55:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:30.154 16:55:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:30.154 16:55:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.154 16:55:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:30.154 ************************************ 00:06:30.154 START TEST filesystem_in_capsule_ext4 00:06:30.154 ************************************ 00:06:30.154 16:55:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:30.154 16:55:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:30.154 16:55:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:30.154 16:55:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:30.154 16:55:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:30.154 16:55:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:30.154 16:55:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:30.154 16:55:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:06:30.154 16:55:29 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:06:30.154 16:55:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:06:30.154 16:55:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:30.154 mke2fs 1.46.5 (30-Dec-2021) 00:06:30.446 Discarding device blocks: 0/522240 done 00:06:30.446 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:30.446 Filesystem UUID: eab4231e-6e68-4c43-81dd-886adceefd89 00:06:30.446 Superblock backups stored on blocks: 00:06:30.446 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:30.446 00:06:30.446 Allocating group tables: 0/64 done 00:06:30.446 Writing inode tables: 0/64 done 00:06:30.446 Creating journal (8192 blocks): done 00:06:30.446 Writing superblocks and filesystem accounting information: 0/64 done 00:06:30.446 00:06:30.446 16:55:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:06:30.446 16:55:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:30.704 16:55:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:30.704 16:55:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:06:30.704 16:55:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:30.704 16:55:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:06:30.704 16:55:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:30.704 16:55:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:30.704 16:55:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1021474 00:06:30.704 16:55:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:30.704 16:55:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:30.704 16:55:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:30.704 16:55:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:30.704 00:06:30.704 real 0m0.620s 00:06:30.704 user 0m0.018s 00:06:30.704 sys 0m0.047s 00:06:30.704 16:55:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.704 16:55:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:30.704 ************************************ 00:06:30.704 END TEST filesystem_in_capsule_ext4 00:06:30.704 ************************************ 00:06:30.962 
16:55:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:30.962 16:55:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:30.962 16:55:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:30.962 16:55:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.962 16:55:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:30.962 ************************************ 00:06:30.962 START TEST filesystem_in_capsule_btrfs 00:06:30.962 ************************************ 00:06:30.962 16:55:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:30.962 16:55:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:30.962 16:55:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:30.962 16:55:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:30.962 16:55:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:06:30.962 16:55:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:30.962 16:55:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:06:30.962 16:55:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:06:30.962 16:55:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:06:30.962 16:55:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:06:30.962 16:55:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:31.220 btrfs-progs v6.6.2 00:06:31.220 See https://btrfs.readthedocs.io for more information. 00:06:31.220 00:06:31.220 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:31.220 NOTE: several default settings have changed in version 5.15, please make sure 00:06:31.220 this does not affect your deployments: 00:06:31.220 - DUP for metadata (-m dup) 00:06:31.220 - enabled no-holes (-O no-holes) 00:06:31.220 - enabled free-space-tree (-R free-space-tree) 00:06:31.220 00:06:31.220 Label: (null) 00:06:31.220 UUID: 2a490b68-c3c4-403b-95fc-5d67d41daa81 00:06:31.220 Node size: 16384 00:06:31.220 Sector size: 4096 00:06:31.220 Filesystem size: 510.00MiB 00:06:31.220 Block group profiles: 00:06:31.220 Data: single 8.00MiB 00:06:31.220 Metadata: DUP 32.00MiB 00:06:31.220 System: DUP 8.00MiB 00:06:31.220 SSD detected: yes 00:06:31.220 Zoned device: no 00:06:31.220 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:31.220 Runtime features: free-space-tree 00:06:31.220 Checksum: crc32c 00:06:31.220 Number of devices: 1 00:06:31.220 Devices: 00:06:31.220 ID SIZE PATH 00:06:31.220 1 510.00MiB /dev/nvme0n1p1 00:06:31.220 00:06:31.220 16:55:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:06:31.220 16:55:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:31.786 16:55:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:31.786 16:55:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:06:31.786 16:55:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:31.786 16:55:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:06:31.786 16:55:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:31.786 16:55:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:31.786 16:55:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1021474 00:06:31.786 16:55:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:31.786 16:55:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:31.786 16:55:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:31.786 16:55:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:31.786 00:06:31.786 real 0m0.913s 00:06:31.786 user 0m0.029s 00:06:31.786 sys 0m0.114s 00:06:31.786 16:55:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.786 16:55:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:31.786 ************************************ 00:06:31.786 END TEST filesystem_in_capsule_btrfs 00:06:31.786 ************************************ 00:06:31.786 16:55:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:06:31.786 16:55:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:06:31.786 16:55:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:31.786 16:55:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.786 16:55:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:31.786 ************************************ 00:06:31.786 START TEST filesystem_in_capsule_xfs 00:06:31.786 ************************************ 00:06:31.786 16:55:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:06:31.786 16:55:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:31.786 16:55:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:31.786 16:55:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:31.786 16:55:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:06:31.786 16:55:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:31.786 16:55:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:06:31.786 16:55:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:06:31.786 16:55:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:06:31.786 16:55:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:06:31.786 16:55:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:32.045 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:32.045 = sectsz=512 attr=2, projid32bit=1 00:06:32.045 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:32.045 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:32.045 data = bsize=4096 blocks=130560, imaxpct=25 00:06:32.045 = sunit=0 swidth=0 blks 00:06:32.045 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:32.045 log =internal log bsize=4096 blocks=16384, version=2 00:06:32.045 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:32.045 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:32.612 Discarding blocks...Done. 
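[editor's note] The ext4 and btrfs passes above, and the xfs pass that continues below, all follow the same pattern from target/filesystem.sh: format the partition carved out earlier by parted, mount it, create and delete a file over the in-capsule data path, unmount, and confirm the nvmf target process and its block devices survived. A condensed, hedged sketch of that flow follows; the device names, mount point and target PID are the ones from this run, and the retry loop the real script keeps around umount is omitted.

    # Sketch only -- values taken from this run, adjust for your environment.
    nvme_dev=/dev/nvme0n1          # namespace exported by the NVMe-oF target
    part=${nvme_dev}p1             # partition created by parted above
    mnt=/mnt/device
    nvmfpid=1021474                # PID of the nvmf_tgt process under test

    make_and_verify_filesystem() {
        local fstype=$1 force
        # ext4 uses -F to force over an existing signature, btrfs/xfs use -f
        [ "$fstype" = ext4 ] && force=-F || force=-f
        "mkfs.$fstype" "$force" "$part"

        # Write and delete a file, then unmount
        mount "$part" "$mnt"
        touch "$mnt/aaa"
        sync
        rm "$mnt/aaa"
        sync
        umount "$mnt"

        # The target must still be alive and its block devices still visible
        kill -0 "$nvmfpid"
        lsblk -l -o NAME | grep -q -w "$(basename "$nvme_dev")"
        lsblk -l -o NAME | grep -q -w "$(basename "$part")"
    }

    for fs in ext4 btrfs xfs; do
        make_and_verify_filesystem "$fs"
    done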
00:06:32.612 16:55:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:06:32.612 16:55:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:35.145 16:55:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:35.145 16:55:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:06:35.145 16:55:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:35.145 16:55:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:06:35.145 16:55:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:06:35.145 16:55:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:35.145 16:55:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1021474 00:06:35.145 16:55:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:35.145 16:55:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:35.145 16:55:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:35.145 16:55:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:35.145 00:06:35.145 real 0m3.186s 00:06:35.145 user 0m0.014s 00:06:35.145 sys 0m0.065s 00:06:35.146 16:55:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.146 16:55:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:35.146 ************************************ 00:06:35.146 END TEST filesystem_in_capsule_xfs 00:06:35.146 ************************************ 00:06:35.146 16:55:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:35.146 16:55:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:35.146 16:55:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:35.146 16:55:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:35.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:35.146 16:55:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:35.146 16:55:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:06:35.146 16:55:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:35.146 16:55:34 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:35.146 16:55:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:35.146 16:55:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:35.146 16:55:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:06:35.146 16:55:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:35.146 16:55:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.146 16:55:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:35.146 16:55:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.146 16:55:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:35.146 16:55:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1021474 00:06:35.146 16:55:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1021474 ']' 00:06:35.146 16:55:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1021474 00:06:35.146 16:55:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:06:35.146 16:55:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:35.146 16:55:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1021474 00:06:35.146 16:55:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:35.146 16:55:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:35.146 16:55:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1021474' 00:06:35.146 killing process with pid 1021474 00:06:35.146 16:55:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 1021474 00:06:35.146 16:55:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 1021474 00:06:35.715 16:55:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:35.715 00:06:35.715 real 0m10.967s 00:06:35.715 user 0m41.926s 00:06:35.715 sys 0m1.738s 00:06:35.715 16:55:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.715 16:55:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:35.715 ************************************ 00:06:35.715 END TEST nvmf_filesystem_in_capsule 00:06:35.715 ************************************ 00:06:35.715 16:55:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:06:35.715 16:55:35 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:06:35.715 16:55:35 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:06:35.715 16:55:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:06:35.715 16:55:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:35.715 16:55:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:06:35.715 16:55:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:35.715 16:55:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:35.715 rmmod nvme_tcp 00:06:35.715 rmmod nvme_fabrics 00:06:35.715 rmmod nvme_keyring 00:06:35.715 16:55:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:35.715 16:55:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:06:35.715 16:55:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:06:35.715 16:55:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:06:35.715 16:55:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:35.715 16:55:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:35.715 16:55:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:35.715 16:55:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:35.715 16:55:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:35.715 16:55:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:35.715 16:55:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:35.715 16:55:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:38.255 16:55:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:38.255 00:06:38.255 real 0m26.514s 00:06:38.255 user 1m24.199s 00:06:38.255 sys 0m5.259s 00:06:38.255 16:55:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.255 16:55:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:38.255 ************************************ 00:06:38.255 END TEST nvmf_filesystem 00:06:38.255 ************************************ 00:06:38.255 16:55:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:38.255 16:55:37 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:38.255 16:55:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:38.255 16:55:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.255 16:55:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:38.255 ************************************ 00:06:38.255 START TEST nvmf_target_discovery 00:06:38.255 ************************************ 00:06:38.255 16:55:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:38.255 * Looking for test storage... 
00:06:38.255 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:38.255 16:55:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:38.255 16:55:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:06:38.255 16:55:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:38.255 16:55:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:38.255 16:55:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:38.255 16:55:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:38.255 16:55:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:38.255 16:55:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:38.255 16:55:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:38.255 16:55:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:38.255 16:55:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:38.255 16:55:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:38.255 16:55:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:06:38.255 16:55:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:06:38.255 16:55:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:38.255 16:55:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:38.255 16:55:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:38.255 16:55:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:38.255 16:55:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:38.255 16:55:37 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:38.255 16:55:37 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:38.255 16:55:37 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:38.255 16:55:37 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.255 16:55:37 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.255 16:55:37 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.255 16:55:37 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:06:38.255 16:55:37 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.255 16:55:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:06:38.255 16:55:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:38.255 16:55:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:38.255 16:55:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:38.255 16:55:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:38.256 16:55:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:38.256 16:55:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:38.256 16:55:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:38.256 16:55:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:38.256 16:55:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:06:38.256 16:55:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:06:38.256 16:55:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:06:38.256 16:55:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:06:38.256 16:55:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:06:38.256 16:55:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:38.256 16:55:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:38.256 16:55:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:06:38.256 16:55:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:38.256 16:55:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:38.256 16:55:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:38.256 16:55:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:38.256 16:55:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:38.256 16:55:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:38.256 16:55:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:38.256 16:55:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:06:38.256 16:55:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:40.162 16:55:39 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:06:40.162 Found 0000:84:00.0 (0x8086 - 0x159b) 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:06:40.162 Found 0000:84:00.1 (0x8086 - 0x159b) 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:06:40.162 Found net devices under 0000:84:00.0: cvl_0_0 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:06:40.162 Found net devices under 0000:84:00.1: cvl_0_1 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:40.162 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:40.163 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:40.163 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:40.163 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:40.163 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:40.163 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:06:40.163 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:40.163 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:40.163 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:40.163 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:40.163 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:40.163 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:06:40.163 00:06:40.163 --- 10.0.0.2 ping statistics --- 00:06:40.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:40.163 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:06:40.163 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:40.163 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:40.163 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:06:40.163 00:06:40.163 --- 10.0.0.1 ping statistics --- 00:06:40.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:40.163 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:06:40.163 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:40.163 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:06:40.163 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:40.163 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:40.163 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:40.163 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:40.163 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:40.163 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:40.163 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:40.163 16:55:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:06:40.163 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:40.163 16:55:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:40.163 16:55:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:40.163 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1024964 00:06:40.163 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:40.163 16:55:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1024964 00:06:40.163 16:55:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 1024964 ']' 00:06:40.163 16:55:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.163 16:55:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:40.163 16:55:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:06:40.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.163 16:55:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:40.163 16:55:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:40.422 [2024-07-12 16:55:39.871137] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:06:40.422 [2024-07-12 16:55:39.871235] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:40.422 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.422 [2024-07-12 16:55:39.935736] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:40.422 [2024-07-12 16:55:40.044634] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:40.422 [2024-07-12 16:55:40.044694] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:40.422 [2024-07-12 16:55:40.044719] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:40.422 [2024-07-12 16:55:40.044751] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:40.422 [2024-07-12 16:55:40.044762] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:40.422 [2024-07-12 16:55:40.044842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.422 [2024-07-12 16:55:40.044963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.422 [2024-07-12 16:55:40.045012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:40.422 [2024-07-12 16:55:40.045015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:40.681 [2024-07-12 16:55:40.211692] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:40.681 Null1 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:40.681 [2024-07-12 16:55:40.252044] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:40.681 Null2 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:06:40.681 16:55:40 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:40.681 Null3 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:40.681 Null4 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.681 16:55:40 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.681 16:55:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 4420 00:06:40.939 00:06:40.939 Discovery Log Number of Records 6, Generation counter 6 00:06:40.939 =====Discovery Log Entry 0====== 00:06:40.939 trtype: tcp 00:06:40.939 adrfam: ipv4 00:06:40.939 subtype: current discovery subsystem 00:06:40.939 treq: not required 00:06:40.939 portid: 0 00:06:40.939 trsvcid: 4420 00:06:40.939 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:40.939 traddr: 10.0.0.2 00:06:40.939 eflags: explicit discovery connections, duplicate discovery information 00:06:40.939 sectype: none 00:06:40.939 =====Discovery Log Entry 1====== 00:06:40.939 trtype: tcp 00:06:40.939 adrfam: ipv4 00:06:40.939 subtype: nvme subsystem 00:06:40.939 treq: not required 00:06:40.939 portid: 0 00:06:40.939 trsvcid: 4420 00:06:40.939 subnqn: nqn.2016-06.io.spdk:cnode1 00:06:40.939 traddr: 10.0.0.2 00:06:40.939 eflags: none 00:06:40.939 sectype: none 00:06:40.939 =====Discovery Log Entry 2====== 00:06:40.939 trtype: tcp 00:06:40.939 adrfam: ipv4 00:06:40.939 subtype: nvme subsystem 00:06:40.940 treq: not required 00:06:40.940 portid: 0 00:06:40.940 trsvcid: 4420 00:06:40.940 subnqn: nqn.2016-06.io.spdk:cnode2 00:06:40.940 traddr: 10.0.0.2 00:06:40.940 eflags: none 00:06:40.940 sectype: none 00:06:40.940 =====Discovery Log Entry 3====== 00:06:40.940 trtype: tcp 00:06:40.940 adrfam: ipv4 00:06:40.940 subtype: nvme subsystem 00:06:40.940 treq: not required 00:06:40.940 portid: 0 00:06:40.940 trsvcid: 4420 00:06:40.940 subnqn: nqn.2016-06.io.spdk:cnode3 00:06:40.940 traddr: 10.0.0.2 00:06:40.940 eflags: none 00:06:40.940 sectype: none 00:06:40.940 =====Discovery Log Entry 4====== 00:06:40.940 trtype: tcp 00:06:40.940 adrfam: ipv4 00:06:40.940 subtype: nvme subsystem 00:06:40.940 treq: not required 
00:06:40.940 portid: 0 00:06:40.940 trsvcid: 4420 00:06:40.940 subnqn: nqn.2016-06.io.spdk:cnode4 00:06:40.940 traddr: 10.0.0.2 00:06:40.940 eflags: none 00:06:40.940 sectype: none 00:06:40.940 =====Discovery Log Entry 5====== 00:06:40.940 trtype: tcp 00:06:40.940 adrfam: ipv4 00:06:40.940 subtype: discovery subsystem referral 00:06:40.940 treq: not required 00:06:40.940 portid: 0 00:06:40.940 trsvcid: 4430 00:06:40.940 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:40.940 traddr: 10.0.0.2 00:06:40.940 eflags: none 00:06:40.940 sectype: none 00:06:40.940 16:55:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:06:40.940 Perform nvmf subsystem discovery via RPC 00:06:40.940 16:55:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:06:40.940 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.940 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:40.940 [ 00:06:40.940 { 00:06:40.940 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:06:40.940 "subtype": "Discovery", 00:06:40.940 "listen_addresses": [ 00:06:40.940 { 00:06:40.940 "trtype": "TCP", 00:06:40.940 "adrfam": "IPv4", 00:06:40.940 "traddr": "10.0.0.2", 00:06:40.940 "trsvcid": "4420" 00:06:40.940 } 00:06:40.940 ], 00:06:40.940 "allow_any_host": true, 00:06:40.940 "hosts": [] 00:06:40.940 }, 00:06:40.940 { 00:06:40.940 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:06:40.940 "subtype": "NVMe", 00:06:40.940 "listen_addresses": [ 00:06:40.940 { 00:06:40.940 "trtype": "TCP", 00:06:40.940 "adrfam": "IPv4", 00:06:40.940 "traddr": "10.0.0.2", 00:06:40.940 "trsvcid": "4420" 00:06:40.940 } 00:06:40.940 ], 00:06:40.940 "allow_any_host": true, 00:06:40.940 "hosts": [], 00:06:40.940 "serial_number": "SPDK00000000000001", 00:06:40.940 "model_number": "SPDK bdev Controller", 00:06:40.940 "max_namespaces": 32, 00:06:40.940 "min_cntlid": 1, 00:06:40.940 "max_cntlid": 65519, 00:06:40.940 "namespaces": [ 00:06:40.940 { 00:06:40.940 "nsid": 1, 00:06:40.940 "bdev_name": "Null1", 00:06:40.940 "name": "Null1", 00:06:40.940 "nguid": "A4214FA6DFDD4B7A9B61D256DB1BB8C0", 00:06:40.940 "uuid": "a4214fa6-dfdd-4b7a-9b61-d256db1bb8c0" 00:06:40.940 } 00:06:40.940 ] 00:06:40.940 }, 00:06:40.940 { 00:06:40.940 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:06:40.940 "subtype": "NVMe", 00:06:40.940 "listen_addresses": [ 00:06:40.940 { 00:06:40.940 "trtype": "TCP", 00:06:40.940 "adrfam": "IPv4", 00:06:40.940 "traddr": "10.0.0.2", 00:06:40.940 "trsvcid": "4420" 00:06:40.940 } 00:06:40.940 ], 00:06:40.940 "allow_any_host": true, 00:06:40.940 "hosts": [], 00:06:40.940 "serial_number": "SPDK00000000000002", 00:06:40.940 "model_number": "SPDK bdev Controller", 00:06:40.940 "max_namespaces": 32, 00:06:40.940 "min_cntlid": 1, 00:06:40.940 "max_cntlid": 65519, 00:06:40.940 "namespaces": [ 00:06:40.940 { 00:06:40.940 "nsid": 1, 00:06:40.940 "bdev_name": "Null2", 00:06:40.940 "name": "Null2", 00:06:40.940 "nguid": "997BD41D06B049B888543E5E7FF2CDF6", 00:06:40.940 "uuid": "997bd41d-06b0-49b8-8854-3e5e7ff2cdf6" 00:06:40.940 } 00:06:40.940 ] 00:06:40.940 }, 00:06:40.940 { 00:06:40.940 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:06:40.940 "subtype": "NVMe", 00:06:40.940 "listen_addresses": [ 00:06:40.940 { 00:06:40.940 "trtype": "TCP", 00:06:40.940 "adrfam": "IPv4", 00:06:40.940 "traddr": "10.0.0.2", 00:06:40.940 "trsvcid": "4420" 00:06:40.940 } 00:06:40.940 ], 00:06:40.940 "allow_any_host": true, 
00:06:40.940 "hosts": [], 00:06:40.940 "serial_number": "SPDK00000000000003", 00:06:40.940 "model_number": "SPDK bdev Controller", 00:06:40.940 "max_namespaces": 32, 00:06:40.940 "min_cntlid": 1, 00:06:40.940 "max_cntlid": 65519, 00:06:40.940 "namespaces": [ 00:06:40.940 { 00:06:40.940 "nsid": 1, 00:06:40.940 "bdev_name": "Null3", 00:06:40.940 "name": "Null3", 00:06:40.940 "nguid": "A70EB343C1A04220AFBC7076F06A191A", 00:06:40.940 "uuid": "a70eb343-c1a0-4220-afbc-7076f06a191a" 00:06:40.940 } 00:06:40.940 ] 00:06:40.940 }, 00:06:40.940 { 00:06:40.940 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:06:40.940 "subtype": "NVMe", 00:06:40.940 "listen_addresses": [ 00:06:40.940 { 00:06:40.940 "trtype": "TCP", 00:06:40.940 "adrfam": "IPv4", 00:06:40.940 "traddr": "10.0.0.2", 00:06:40.940 "trsvcid": "4420" 00:06:40.940 } 00:06:40.940 ], 00:06:40.940 "allow_any_host": true, 00:06:40.940 "hosts": [], 00:06:40.940 "serial_number": "SPDK00000000000004", 00:06:40.940 "model_number": "SPDK bdev Controller", 00:06:40.940 "max_namespaces": 32, 00:06:40.940 "min_cntlid": 1, 00:06:40.940 "max_cntlid": 65519, 00:06:40.940 "namespaces": [ 00:06:40.940 { 00:06:40.940 "nsid": 1, 00:06:40.940 "bdev_name": "Null4", 00:06:40.940 "name": "Null4", 00:06:40.940 "nguid": "D7F654A97C3744E397EFA4D8BED82C67", 00:06:40.940 "uuid": "d7f654a9-7c37-44e3-97ef-a4d8bed82c67" 00:06:40.940 } 00:06:40.940 ] 00:06:40.940 } 00:06:40.940 ] 00:06:40.940 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.940 16:55:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:06:40.940 16:55:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:40.940 16:55:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:40.940 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.940 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:40.940 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.940 16:55:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:06:40.940 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.940 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:40.940 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.940 16:55:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:40.940 16:55:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:06:40.940 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.940 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:40.940 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.940 16:55:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:06:40.940 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.940 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:40.940 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:06:40.940 16:55:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:40.940 16:55:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:06:40.940 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.940 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:40.940 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.940 16:55:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:06:40.940 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.940 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:40.940 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.940 16:55:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:40.940 16:55:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:06:40.940 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.940 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:40.940 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.940 16:55:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:06:40.940 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.940 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:40.940 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.940 16:55:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:06:40.940 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.940 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:40.940 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.940 16:55:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:06:40.940 16:55:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:06:40.940 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.940 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:41.199 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.199 16:55:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:06:41.199 16:55:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:06:41.199 16:55:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:06:41.199 16:55:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:06:41.199 16:55:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:41.199 16:55:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:06:41.199 16:55:40 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:41.199 16:55:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:06:41.199 16:55:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:41.199 16:55:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:41.199 rmmod nvme_tcp 00:06:41.199 rmmod nvme_fabrics 00:06:41.199 rmmod nvme_keyring 00:06:41.199 16:55:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:41.199 16:55:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:06:41.199 16:55:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:06:41.199 16:55:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1024964 ']' 00:06:41.199 16:55:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1024964 00:06:41.199 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 1024964 ']' 00:06:41.199 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 1024964 00:06:41.199 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:06:41.199 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:41.199 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1024964 00:06:41.199 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:41.199 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:41.199 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1024964' 00:06:41.199 killing process with pid 1024964 00:06:41.199 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 1024964 00:06:41.199 16:55:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 1024964 00:06:41.459 16:55:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:41.459 16:55:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:41.459 16:55:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:41.459 16:55:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:41.459 16:55:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:41.459 16:55:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:41.459 16:55:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:41.459 16:55:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:43.373 16:55:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:43.373 00:06:43.373 real 0m5.630s 00:06:43.373 user 0m4.645s 00:06:43.373 sys 0m1.882s 00:06:43.373 16:55:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.373 16:55:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:43.373 ************************************ 00:06:43.373 END TEST nvmf_target_discovery 00:06:43.373 ************************************ 00:06:43.632 16:55:43 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:06:43.632 16:55:43 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:43.632 16:55:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:43.632 16:55:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.632 16:55:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:43.632 ************************************ 00:06:43.632 START TEST nvmf_referrals 00:06:43.632 ************************************ 00:06:43.632 16:55:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:43.632 * Looking for test storage... 00:06:43.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:43.632 16:55:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:43.632 16:55:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:06:43.632 16:55:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:43.632 16:55:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:43.632 16:55:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:43.632 16:55:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:43.632 16:55:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:43.632 16:55:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:43.632 16:55:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:43.632 16:55:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:43.632 16:55:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:43.632 16:55:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:43.632 16:55:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:06:43.632 16:55:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:06:43.632 16:55:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:43.632 16:55:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:43.632 16:55:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:43.632 16:55:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:43.632 16:55:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:43.632 16:55:43 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:43.632 16:55:43 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:43.632 16:55:43 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:43.633 16:55:43 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.633 16:55:43 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.633 16:55:43 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.633 16:55:43 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:06:43.633 16:55:43 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.633 16:55:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:06:43.633 16:55:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:43.633 16:55:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:43.633 16:55:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:43.633 16:55:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:43.633 16:55:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:43.633 16:55:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:43.633 16:55:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:43.633 16:55:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:43.633 16:55:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:06:43.633 16:55:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:06:43.633 16:55:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
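The referrals test that follows registers referrals with nvmf_discovery_add_referral and then cross-checks nvmf_discovery_get_referrals against the discovery log page returned by nvme-cli. A condensed sketch of that flow, under the same assumptions as above ($SPDK_DIR is a placeholder; addresses and the 4430 referral port are the ones defined here):

    RPC=$SPDK_DIR/scripts/rpc.py
    # register three referrals on the discovery service
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        $RPC nvmf_discovery_add_referral -t tcp -a $ip -s 4430
    done
    # list them back through the RPC interface ...
    $RPC nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
    # ... and through the discovery log page served by the target on 10.0.0.2:8009
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

The two sorted lists should match, which is exactly the comparison referrals.sh performs below via get_referral_ips rpc / get_referral_ips nvme.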
00:06:43.633 16:55:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:06:43.633 16:55:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:06:43.633 16:55:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:06:43.633 16:55:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:06:43.633 16:55:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:43.633 16:55:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:43.633 16:55:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:43.633 16:55:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:43.633 16:55:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:43.633 16:55:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:43.633 16:55:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:43.633 16:55:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:43.633 16:55:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:43.633 16:55:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:43.633 16:55:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:06:43.633 16:55:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:46.167 16:55:45 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:06:46.167 Found 0000:84:00.0 (0x8086 - 0x159b) 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:06:46.167 Found 0000:84:00.1 (0x8086 - 0x159b) 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:46.167 16:55:45 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:06:46.167 Found net devices under 0000:84:00.0: cvl_0_0 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:06:46.167 Found net devices under 0000:84:00.1: cvl_0_1 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:46.167 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:46.168 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:46.168 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:46.168 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:46.168 16:55:45 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:46.168 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:46.168 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:46.168 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:46.168 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:46.168 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:46.168 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:06:46.168 00:06:46.168 --- 10.0.0.2 ping statistics --- 00:06:46.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:46.168 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:06:46.168 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:46.168 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:46.168 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:06:46.168 00:06:46.168 --- 10.0.0.1 ping statistics --- 00:06:46.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:46.168 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:06:46.168 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:46.168 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:06:46.168 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:46.168 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:46.168 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:46.168 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:46.168 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:46.168 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:46.168 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:46.168 16:55:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:06:46.168 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:46.168 16:55:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:46.168 16:55:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:46.168 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1027069 00:06:46.168 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:46.168 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1027069 00:06:46.168 16:55:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 1027069 ']' 00:06:46.168 16:55:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.168 16:55:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:46.168 16:55:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:46.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.168 16:55:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:46.168 16:55:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:46.168 [2024-07-12 16:55:45.585807] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:06:46.168 [2024-07-12 16:55:45.585877] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:46.168 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.168 [2024-07-12 16:55:45.651175] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:46.168 [2024-07-12 16:55:45.764936] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:46.168 [2024-07-12 16:55:45.764998] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:46.168 [2024-07-12 16:55:45.765013] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:46.168 [2024-07-12 16:55:45.765024] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:46.168 [2024-07-12 16:55:45.765034] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:46.168 [2024-07-12 16:55:45.765113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.168 [2024-07-12 16:55:45.765181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.168 [2024-07-12 16:55:45.765214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:46.168 [2024-07-12 16:55:45.765216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.426 16:55:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:46.426 16:55:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:06:46.426 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:46.426 16:55:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:46.426 16:55:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:46.426 16:55:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:46.426 16:55:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:46.426 16:55:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.426 16:55:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:46.426 [2024-07-12 16:55:45.928634] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:46.426 16:55:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.426 16:55:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:06:46.426 16:55:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.426 16:55:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:46.426 [2024-07-12 16:55:45.940876] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:06:46.426 16:55:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.426 16:55:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:06:46.426 16:55:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.426 16:55:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:46.426 16:55:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.426 16:55:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:06:46.426 16:55:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.426 16:55:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:46.426 16:55:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.426 16:55:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:06:46.426 16:55:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.426 16:55:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:46.426 16:55:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.426 16:55:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:46.426 16:55:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:06:46.426 16:55:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.426 16:55:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:46.426 16:55:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.426 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:06:46.426 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:06:46.426 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:46.426 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:46.426 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:46.426 16:55:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.426 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:46.426 16:55:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:46.426 16:55:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.426 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:46.426 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:46.426 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:06:46.426 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:46.426 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:46.426 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 
--hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:46.426 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:46.426 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:46.684 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:46.684 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:46.684 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:06:46.684 16:55:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.684 16:55:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:46.684 16:55:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.684 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:06:46.684 16:55:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.684 16:55:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:46.684 16:55:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.684 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:06:46.684 16:55:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.684 16:55:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:46.684 16:55:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.685 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:46.685 16:55:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.685 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:06:46.685 16:55:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:46.685 16:55:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.685 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:06:46.685 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:06:46.685 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:46.685 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:46.685 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:46.685 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:46.685 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:46.943 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:06:46.943 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:06:46.943 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:06:46.943 16:55:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.943 16:55:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:46.943 16:55:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.943 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:46.943 16:55:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.943 16:55:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:46.943 16:55:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.943 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:06:46.943 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:46.943 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:46.943 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:46.943 16:55:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.943 16:55:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:46.943 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:46.943 16:55:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.943 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:06:46.943 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:46.943 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:06:46.943 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:46.943 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:46.943 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:46.943 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:46.943 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:46.943 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:06:46.943 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:46.943 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:06:46.943 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:06:46.943 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:46.943 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:46.943 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:47.201 16:55:46 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:06:47.201 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:06:47.201 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:47.201 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:06:47.201 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:47.201 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:47.201 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:47.201 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:47.201 16:55:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.201 16:55:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:47.201 16:55:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.201 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:06:47.201 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:47.201 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:47.201 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:47.202 16:55:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.202 16:55:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:47.202 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:47.459 16:55:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.459 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:06:47.459 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:47.459 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:06:47.459 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:47.459 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:47.459 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:47.459 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:47.459 16:55:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:47.459 16:55:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:06:47.459 16:55:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:47.459 16:55:47 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:06:47.459 16:55:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:06:47.459 16:55:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:47.459 16:55:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:47.459 16:55:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:47.459 16:55:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:06:47.459 16:55:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:06:47.459 16:55:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:06:47.459 16:55:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:47.459 16:55:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:47.459 16:55:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:47.717 16:55:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:47.717 16:55:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:06:47.717 16:55:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.717 16:55:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:47.717 16:55:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.717 16:55:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:47.717 16:55:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:06:47.717 16:55:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.717 16:55:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:47.717 16:55:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.717 16:55:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:06:47.717 16:55:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:06:47.717 16:55:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:47.717 16:55:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:47.717 16:55:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:47.717 16:55:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:47.717 16:55:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:47.975 
16:55:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:06:47.975 16:55:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:06:47.975 16:55:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:06:47.975 16:55:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:06:47.975 16:55:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:47.975 16:55:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:06:47.975 16:55:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:47.975 16:55:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:06:47.975 16:55:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:47.975 16:55:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:47.975 rmmod nvme_tcp 00:06:47.975 rmmod nvme_fabrics 00:06:47.975 rmmod nvme_keyring 00:06:47.975 16:55:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:47.975 16:55:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:06:47.975 16:55:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:06:47.975 16:55:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1027069 ']' 00:06:47.975 16:55:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1027069 00:06:47.975 16:55:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 1027069 ']' 00:06:47.975 16:55:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 1027069 00:06:47.975 16:55:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:06:47.975 16:55:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:47.975 16:55:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1027069 00:06:47.975 16:55:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:47.975 16:55:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:47.975 16:55:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1027069' 00:06:47.975 killing process with pid 1027069 00:06:47.975 16:55:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 1027069 00:06:47.975 16:55:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 1027069 00:06:48.235 16:55:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:48.235 16:55:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:48.235 16:55:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:48.235 16:55:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:48.235 16:55:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:48.235 16:55:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:48.235 16:55:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:48.236 16:55:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:50.145 16:55:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:50.145 00:06:50.145 real 0m6.703s 00:06:50.145 user 0m9.389s 00:06:50.145 sys 0m2.255s 00:06:50.145 16:55:49 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.145 16:55:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:50.145 ************************************ 00:06:50.145 END TEST nvmf_referrals 00:06:50.145 ************************************ 00:06:50.145 16:55:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:50.145 16:55:49 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:50.145 16:55:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:50.145 16:55:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.145 16:55:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:50.404 ************************************ 00:06:50.404 START TEST nvmf_connect_disconnect 00:06:50.404 ************************************ 00:06:50.404 16:55:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:50.404 * Looking for test storage... 00:06:50.404 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:50.404 16:55:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:50.404 16:55:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:06:50.404 16:55:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:50.404 16:55:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:50.404 16:55:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:50.404 16:55:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:50.404 16:55:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:50.404 16:55:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:50.404 16:55:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:50.404 16:55:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:50.404 16:55:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:50.404 16:55:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:50.404 16:55:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:06:50.404 16:55:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:06:50.404 16:55:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:50.404 16:55:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:50.404 16:55:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:50.404 16:55:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:50.404 16:55:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:50.404 16:55:49 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:50.404 16:55:49 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:50.404 16:55:49 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:50.404 16:55:49 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.404 16:55:49 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.404 16:55:49 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.404 16:55:49 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:06:50.404 16:55:49 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.404 16:55:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:06:50.404 16:55:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:50.404 16:55:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:50.404 16:55:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:50.404 16:55:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:50.404 16:55:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:50.404 16:55:49 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:50.405 16:55:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:50.405 16:55:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:50.405 16:55:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:50.405 16:55:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:50.405 16:55:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:06:50.405 16:55:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:50.405 16:55:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:50.405 16:55:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:50.405 16:55:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:50.405 16:55:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:50.405 16:55:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:50.405 16:55:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:50.405 16:55:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:50.405 16:55:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:50.405 16:55:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:50.405 16:55:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:06:50.405 16:55:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:06:52.932 Found 0000:84:00.0 (0x8086 - 0x159b) 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:06:52.932 Found 0000:84:00.1 (0x8086 - 0x159b) 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:52.932 16:55:52 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:06:52.932 Found net devices under 0000:84:00.0: cvl_0_0 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:52.932 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:06:52.933 Found net devices under 0000:84:00.1: cvl_0_1 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:52.933 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:52.933 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:06:52.933 00:06:52.933 --- 10.0.0.2 ping statistics --- 00:06:52.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:52.933 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:52.933 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:52.933 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:06:52.933 00:06:52.933 --- 10.0.0.1 ping statistics --- 00:06:52.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:52.933 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1029377 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1029377 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 1029377 ']' 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:52.933 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:52.933 [2024-07-12 16:55:52.360185] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
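Stripped of the xtrace noise, the nvmf_tcp_init bring-up and target launch traced above reduce to the sequence below. This is only a condensed sketch of commands already visible in this run (the cvl_0_* interface names, the 10.0.0.0/24 addresses, and the nvmf_tgt flags are specific to this machine), not a general recipe:

  # move the target-side port into its own network namespace, address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP listener port on the initiator side and check reachability
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # load the host-side driver and start the SPDK target inside the namespace
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF   # run from the SPDK checkout root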
00:06:52.933 [2024-07-12 16:55:52.360269] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:52.933 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.933 [2024-07-12 16:55:52.423377] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:52.933 [2024-07-12 16:55:52.524924] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:52.933 [2024-07-12 16:55:52.524979] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:52.933 [2024-07-12 16:55:52.525007] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:52.933 [2024-07-12 16:55:52.525019] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:52.933 [2024-07-12 16:55:52.525028] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:52.933 [2024-07-12 16:55:52.525110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.933 [2024-07-12 16:55:52.525176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.933 [2024-07-12 16:55:52.525244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.933 [2024-07-12 16:55:52.525240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:53.191 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:53.191 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:06:53.191 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:53.191 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:53.191 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:53.191 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:53.191 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:53.191 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.191 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:53.191 [2024-07-12 16:55:52.680691] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:53.191 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.191 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:06:53.191 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.191 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:53.191 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.191 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:06:53.191 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:53.191 16:55:52 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.191 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:53.191 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.191 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:53.191 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.191 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:53.191 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.191 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:53.191 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.191 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:53.191 [2024-07-12 16:55:52.731425] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:53.191 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.191 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:06:53.191 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:06:53.191 16:55:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:06:56.461 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:58.983 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:01.566 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:04.088 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:07.367 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:07.367 16:56:06 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:07:07.368 16:56:06 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:07:07.368 16:56:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:07.368 16:56:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:07:07.368 16:56:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:07.368 16:56:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:07:07.368 16:56:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:07.368 16:56:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:07.368 rmmod nvme_tcp 00:07:07.368 rmmod nvme_fabrics 00:07:07.368 rmmod nvme_keyring 00:07:07.368 16:56:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:07.368 16:56:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:07:07.368 16:56:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:07:07.368 16:56:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1029377 ']' 00:07:07.368 16:56:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1029377 00:07:07.368 16:56:06 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@948 -- # '[' -z 1029377 ']' 00:07:07.368 16:56:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 1029377 00:07:07.368 16:56:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:07:07.368 16:56:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:07.368 16:56:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1029377 00:07:07.368 16:56:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:07.368 16:56:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:07.368 16:56:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1029377' 00:07:07.368 killing process with pid 1029377 00:07:07.368 16:56:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 1029377 00:07:07.368 16:56:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 1029377 00:07:07.368 16:56:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:07.368 16:56:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:07.368 16:56:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:07.368 16:56:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:07.368 16:56:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:07.368 16:56:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:07.368 16:56:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:07.368 16:56:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:09.273 16:56:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:09.273 00:07:09.273 real 0m18.914s 00:07:09.273 user 0m56.245s 00:07:09.273 sys 0m3.411s 00:07:09.273 16:56:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.273 16:56:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:09.273 ************************************ 00:07:09.273 END TEST nvmf_connect_disconnect 00:07:09.273 ************************************ 00:07:09.273 16:56:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:09.273 16:56:08 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:09.273 16:56:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:09.273 16:56:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.273 16:56:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:09.273 ************************************ 00:07:09.273 START TEST nvmf_multitarget 00:07:09.273 ************************************ 00:07:09.273 16:56:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:09.273 * Looking for test storage... 
00:07:09.273 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:09.273 16:56:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:09.273 16:56:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:07:09.273 16:56:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:09.273 16:56:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:09.273 16:56:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:09.273 16:56:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:09.273 16:56:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:09.273 16:56:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:09.273 16:56:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:09.273 16:56:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:09.273 16:56:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:09.273 16:56:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:09.273 16:56:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:09.273 16:56:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:09.273 16:56:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:09.273 16:56:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:09.273 16:56:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:09.273 16:56:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:09.274 16:56:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:09.274 16:56:08 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:09.274 16:56:08 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:09.274 16:56:08 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:09.274 16:56:08 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.274 16:56:08 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.274 16:56:08 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.274 16:56:08 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:07:09.274 16:56:08 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.274 16:56:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:07:09.274 16:56:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:09.274 16:56:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:09.274 16:56:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:09.274 16:56:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:09.274 16:56:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:09.274 16:56:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:09.274 16:56:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:09.274 16:56:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:09.274 16:56:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:09.274 16:56:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:07:09.274 16:56:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:09.274 16:56:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:09.274 16:56:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:09.274 16:56:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:09.274 16:56:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:09.274 16:56:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
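_remove_spdk_ns is invoked with its xtrace redirected away ('_remove_spdk_ns 14> /dev/null'), so its body never appears in this log. Combined with the 'ip -4 addr flush cvl_0_1' that nvmftestfini issues elsewhere in the trace, the net effect of this cleanup is presumably along the following lines — a sketch under that assumption, not the helper's actual implementation from nvmf/common.sh:

  # assumed: delete the per-test namespace created by nvmf_tcp_init; the
  # physical port (cvl_0_0) falls back to the default namespace
  ip netns delete cvl_0_0_ns_spdk
  # clear the test address from the initiator-side port (this flush does
  # appear verbatim in the nvmftestfini sections of this trace)
  ip -4 addr flush cvl_0_1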
00:07:09.274 16:56:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:09.274 16:56:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:09.274 16:56:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:09.274 16:56:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:09.274 16:56:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:07:09.274 16:56:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:11.807 16:56:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:11.807 16:56:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:07:11.807 16:56:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:11.807 16:56:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:11.807 16:56:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:11.807 16:56:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:11.807 16:56:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:11.807 16:56:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:07:11.807 16:56:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:11.807 16:56:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:07:11.807 16:56:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:07:11.807 16:56:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:07:11.807 16:56:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:07:11.807 16:56:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:07:11.807 16:56:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:07:11.807 16:56:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:11.807 16:56:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:11.807 16:56:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:07:11.807 Found 0000:84:00.0 (0x8086 - 0x159b) 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:07:11.807 Found 0000:84:00.1 (0x8086 - 0x159b) 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:07:11.807 Found net devices under 0000:84:00.0: cvl_0_0 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
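The NIC discovery traced here is pure sysfs: for each supported PCI function the script globs the device's net/ directory to learn which kernel interface is bound to it, then checks that the interface is up. A one-off equivalent for the first port found in this run (not the script's exact reads) would be:

  # list the net device(s) backed by PCI function 0000:84:00.0
  ls /sys/bus/pci/devices/0000:84:00.0/net/
  # -> cvl_0_0   (matching the "Found net devices under 0000:84:00.0: cvl_0_0" lines)
  cat /sys/class/net/cvl_0_0/operstate
  # -> up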
00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:07:11.807 Found net devices under 0000:84:00.1: cvl_0_1 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:11.807 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:11.808 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:11.808 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:11.808 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:11.808 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:11.808 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:11.808 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:11.808 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:11.808 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:11.808 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:11.808 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:11.808 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:11.808 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:11.808 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:11.808 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:11.808 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:11.808 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:07:11.808 00:07:11.808 --- 10.0.0.2 ping statistics --- 00:07:11.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:11.808 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:07:11.808 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:11.808 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:11.808 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:07:11.808 00:07:11.808 --- 10.0.0.1 ping statistics --- 00:07:11.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:11.808 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:07:11.808 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:11.808 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:07:11.808 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:11.808 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:11.808 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:11.808 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:11.808 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:11.808 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:11.808 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:11.808 16:56:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:07:11.808 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:11.808 16:56:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:11.808 16:56:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:11.808 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1033108 00:07:11.808 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:11.808 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1033108 00:07:11.808 16:56:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 1033108 ']' 00:07:11.808 16:56:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.808 16:56:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:11.808 16:56:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.808 16:56:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:11.808 16:56:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:11.808 [2024-07-12 16:56:11.226486] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
00:07:11.808 [2024-07-12 16:56:11.226569] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:11.808 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.808 [2024-07-12 16:56:11.290137] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:11.808 [2024-07-12 16:56:11.403402] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:11.808 [2024-07-12 16:56:11.403462] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:11.808 [2024-07-12 16:56:11.403476] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:11.808 [2024-07-12 16:56:11.403488] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:11.808 [2024-07-12 16:56:11.403498] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:11.808 [2024-07-12 16:56:11.403552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.808 [2024-07-12 16:56:11.403609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:11.808 [2024-07-12 16:56:11.403675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:11.808 [2024-07-12 16:56:11.403678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.067 16:56:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:12.067 16:56:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:07:12.067 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:12.067 16:56:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:12.067 16:56:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:12.067 16:56:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:12.067 16:56:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:12.067 16:56:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:12.067 16:56:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:07:12.067 16:56:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:07:12.067 16:56:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:07:12.325 "nvmf_tgt_1" 00:07:12.325 16:56:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:07:12.325 "nvmf_tgt_2" 00:07:12.325 16:56:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:12.325 16:56:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:07:12.582 16:56:12 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:07:12.582 16:56:12 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:07:12.582 true 00:07:12.582 16:56:12 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:07:12.840 true 00:07:12.840 16:56:12 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:12.840 16:56:12 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:07:12.840 16:56:12 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:07:12.840 16:56:12 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:12.840 16:56:12 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:07:12.840 16:56:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:12.840 16:56:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:07:12.840 16:56:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:12.840 16:56:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:07:12.840 16:56:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:12.840 16:56:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:12.840 rmmod nvme_tcp 00:07:12.840 rmmod nvme_fabrics 00:07:12.840 rmmod nvme_keyring 00:07:12.840 16:56:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:12.840 16:56:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:07:12.840 16:56:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:07:12.840 16:56:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1033108 ']' 00:07:12.840 16:56:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1033108 00:07:12.840 16:56:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 1033108 ']' 00:07:12.840 16:56:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 1033108 00:07:12.841 16:56:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:07:12.841 16:56:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:12.841 16:56:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1033108 00:07:12.841 16:56:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:12.841 16:56:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:12.841 16:56:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1033108' 00:07:12.841 killing process with pid 1033108 00:07:12.841 16:56:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 1033108 00:07:12.841 16:56:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 1033108 00:07:13.099 16:56:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:13.099 16:56:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:13.099 16:56:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:13.099 16:56:12 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:13.099 16:56:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:13.099 16:56:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:13.099 16:56:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:13.099 16:56:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:15.640 16:56:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:15.640 00:07:15.640 real 0m5.966s 00:07:15.640 user 0m6.797s 00:07:15.640 sys 0m2.005s 00:07:15.640 16:56:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.640 16:56:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:15.640 ************************************ 00:07:15.640 END TEST nvmf_multitarget 00:07:15.640 ************************************ 00:07:15.640 16:56:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:15.640 16:56:14 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:15.640 16:56:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:15.640 16:56:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.640 16:56:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:15.640 ************************************ 00:07:15.640 START TEST nvmf_rpc 00:07:15.640 ************************************ 00:07:15.640 16:56:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:15.640 * Looking for test storage... 
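For readers skimming the nvmf_multitarget run that finished just above, the exercised flow reduces to the following sketch (bash; the multitarget_rpc.py helper, the -s 32 argument and the expected jq counts are copied from the trace above, with the workspace path abbreviated — this is a condensation, not the full test script):
#!/usr/bin/env bash
# Condensed sketch of the multitarget flow traced above.
RPC=./spdk/test/nvmf/target/multitarget_rpc.py

$RPC nvmf_get_targets | jq length            # 1: only the default target exists
$RPC nvmf_create_target -n nvmf_tgt_1 -s 32
$RPC nvmf_create_target -n nvmf_tgt_2 -s 32
$RPC nvmf_get_targets | jq length            # 3: default plus the two new targets
$RPC nvmf_delete_target -n nvmf_tgt_1
$RPC nvmf_delete_target -n nvmf_tgt_2
$RPC nvmf_get_targets | jq length            # 1 again before teardown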
00:07:15.640 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:15.640 16:56:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:15.640 16:56:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:07:15.640 16:56:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:15.640 16:56:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:15.640 16:56:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:15.641 16:56:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:15.641 16:56:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:15.641 16:56:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:15.641 16:56:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:15.641 16:56:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:15.641 16:56:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:15.641 16:56:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:15.641 16:56:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:15.641 16:56:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:15.641 16:56:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:15.641 16:56:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:15.641 16:56:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:15.641 16:56:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:15.641 16:56:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:15.641 16:56:14 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:15.641 16:56:14 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:15.641 16:56:14 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:15.641 16:56:14 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.641 16:56:14 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.641 16:56:14 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.641 16:56:14 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:07:15.641 16:56:14 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.641 16:56:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:07:15.641 16:56:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:15.641 16:56:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:15.641 16:56:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:15.641 16:56:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:15.641 16:56:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:15.641 16:56:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:15.641 16:56:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:15.641 16:56:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:15.641 16:56:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:07:15.641 16:56:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:07:15.641 16:56:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:15.641 16:56:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:15.641 16:56:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:15.641 16:56:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:15.641 16:56:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:15.641 16:56:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:15.641 16:56:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:15.641 16:56:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:15.641 16:56:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:15.641 16:56:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:15.641 16:56:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:07:15.641 16:56:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.545 16:56:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:17.545 16:56:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:07:17.545 16:56:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
00:07:17.545 16:56:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:17.545 16:56:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:17.545 16:56:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:17.545 16:56:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:17.545 16:56:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:07:17.545 16:56:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:17.545 16:56:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:07:17.545 16:56:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:07:17.545 16:56:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:07:17.545 16:56:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:07:17.545 16:56:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:07:17.545 16:56:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:07:17.545 16:56:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:17.545 16:56:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:17.545 16:56:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:17.545 16:56:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:17.545 16:56:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:17.545 16:56:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:17.545 16:56:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:17.545 16:56:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:17.545 16:56:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:17.545 16:56:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:17.545 16:56:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:17.545 16:56:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:17.545 16:56:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:17.545 16:56:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:17.545 16:56:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:17.545 16:56:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:17.545 16:56:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:17.545 16:56:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:17.545 16:56:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:07:17.545 Found 0000:84:00.0 (0x8086 - 0x159b) 00:07:17.545 16:56:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:17.545 16:56:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:17.545 16:56:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:17.545 16:56:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:17.545 16:56:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:17.545 16:56:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:17.545 16:56:16 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:07:17.545 Found 0000:84:00.1 (0x8086 - 0x159b) 00:07:17.545 16:56:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:17.545 16:56:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:17.545 16:56:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:17.545 16:56:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:17.545 16:56:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:07:17.545 Found net devices under 0000:84:00.0: cvl_0_0 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:07:17.545 Found net devices under 0000:84:00.1: cvl_0_1 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:17.545 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:17.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:07:17.545 00:07:17.545 --- 10.0.0.2 ping statistics --- 00:07:17.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:17.545 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:17.545 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:17.545 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:07:17.545 00:07:17.545 --- 10.0.0.1 ping statistics --- 00:07:17.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:17.545 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1035274 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1035274 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 1035274 ']' 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.545 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:17.546 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.546 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:17.546 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.546 [2024-07-12 16:56:17.210647] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:07:17.546 [2024-07-12 16:56:17.210751] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:17.804 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.804 [2024-07-12 16:56:17.277808] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:17.804 [2024-07-12 16:56:17.389641] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:17.804 [2024-07-12 16:56:17.389700] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:17.804 [2024-07-12 16:56:17.389714] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:17.804 [2024-07-12 16:56:17.389725] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:17.804 [2024-07-12 16:56:17.389735] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:17.804 [2024-07-12 16:56:17.389806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.804 [2024-07-12 16:56:17.389866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:17.804 [2024-07-12 16:56:17.389933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:17.804 [2024-07-12 16:56:17.389936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.063 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:18.063 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:18.063 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:18.063 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:18.063 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.063 16:56:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:18.063 16:56:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:07:18.063 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.063 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.063 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.063 16:56:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:07:18.063 "tick_rate": 2700000000, 00:07:18.063 "poll_groups": [ 00:07:18.063 { 00:07:18.063 "name": "nvmf_tgt_poll_group_000", 00:07:18.063 "admin_qpairs": 0, 00:07:18.063 "io_qpairs": 0, 00:07:18.063 "current_admin_qpairs": 0, 00:07:18.063 "current_io_qpairs": 0, 00:07:18.063 "pending_bdev_io": 0, 00:07:18.063 "completed_nvme_io": 0, 00:07:18.063 "transports": [] 00:07:18.063 }, 00:07:18.063 { 00:07:18.063 "name": "nvmf_tgt_poll_group_001", 00:07:18.063 "admin_qpairs": 0, 00:07:18.063 "io_qpairs": 0, 00:07:18.063 "current_admin_qpairs": 0, 00:07:18.063 "current_io_qpairs": 0, 00:07:18.063 "pending_bdev_io": 0, 00:07:18.063 "completed_nvme_io": 0, 00:07:18.063 "transports": [] 00:07:18.063 }, 00:07:18.063 { 00:07:18.063 "name": "nvmf_tgt_poll_group_002", 00:07:18.063 "admin_qpairs": 0, 00:07:18.063 "io_qpairs": 0, 00:07:18.063 "current_admin_qpairs": 0, 00:07:18.063 "current_io_qpairs": 0, 00:07:18.063 "pending_bdev_io": 0, 00:07:18.063 "completed_nvme_io": 0, 00:07:18.063 "transports": [] 00:07:18.063 }, 00:07:18.063 { 00:07:18.063 "name": "nvmf_tgt_poll_group_003", 00:07:18.063 "admin_qpairs": 0, 00:07:18.063 "io_qpairs": 0, 00:07:18.063 "current_admin_qpairs": 0, 00:07:18.063 "current_io_qpairs": 0, 00:07:18.063 "pending_bdev_io": 0, 00:07:18.063 "completed_nvme_io": 0, 00:07:18.063 "transports": [] 00:07:18.063 } 00:07:18.063 ] 00:07:18.063 }' 00:07:18.063 16:56:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:07:18.063 16:56:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:07:18.063 16:56:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:07:18.063 16:56:17 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:07:18.063 16:56:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:07:18.063 16:56:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:07:18.063 16:56:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:07:18.063 16:56:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:18.063 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.063 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.063 [2024-07-12 16:56:17.649179] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:18.063 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.063 16:56:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:07:18.063 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.063 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.063 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.063 16:56:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:07:18.063 "tick_rate": 2700000000, 00:07:18.063 "poll_groups": [ 00:07:18.063 { 00:07:18.063 "name": "nvmf_tgt_poll_group_000", 00:07:18.063 "admin_qpairs": 0, 00:07:18.063 "io_qpairs": 0, 00:07:18.063 "current_admin_qpairs": 0, 00:07:18.063 "current_io_qpairs": 0, 00:07:18.063 "pending_bdev_io": 0, 00:07:18.063 "completed_nvme_io": 0, 00:07:18.063 "transports": [ 00:07:18.063 { 00:07:18.063 "trtype": "TCP" 00:07:18.063 } 00:07:18.063 ] 00:07:18.063 }, 00:07:18.063 { 00:07:18.063 "name": "nvmf_tgt_poll_group_001", 00:07:18.063 "admin_qpairs": 0, 00:07:18.063 "io_qpairs": 0, 00:07:18.063 "current_admin_qpairs": 0, 00:07:18.063 "current_io_qpairs": 0, 00:07:18.063 "pending_bdev_io": 0, 00:07:18.063 "completed_nvme_io": 0, 00:07:18.063 "transports": [ 00:07:18.063 { 00:07:18.063 "trtype": "TCP" 00:07:18.063 } 00:07:18.063 ] 00:07:18.063 }, 00:07:18.063 { 00:07:18.063 "name": "nvmf_tgt_poll_group_002", 00:07:18.064 "admin_qpairs": 0, 00:07:18.064 "io_qpairs": 0, 00:07:18.064 "current_admin_qpairs": 0, 00:07:18.064 "current_io_qpairs": 0, 00:07:18.064 "pending_bdev_io": 0, 00:07:18.064 "completed_nvme_io": 0, 00:07:18.064 "transports": [ 00:07:18.064 { 00:07:18.064 "trtype": "TCP" 00:07:18.064 } 00:07:18.064 ] 00:07:18.064 }, 00:07:18.064 { 00:07:18.064 "name": "nvmf_tgt_poll_group_003", 00:07:18.064 "admin_qpairs": 0, 00:07:18.064 "io_qpairs": 0, 00:07:18.064 "current_admin_qpairs": 0, 00:07:18.064 "current_io_qpairs": 0, 00:07:18.064 "pending_bdev_io": 0, 00:07:18.064 "completed_nvme_io": 0, 00:07:18.064 "transports": [ 00:07:18.064 { 00:07:18.064 "trtype": "TCP" 00:07:18.064 } 00:07:18.064 ] 00:07:18.064 } 00:07:18.064 ] 00:07:18.064 }' 00:07:18.064 16:56:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:07:18.064 16:56:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:18.064 16:56:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:18.064 16:56:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:18.064 16:56:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:07:18.064 16:56:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:07:18.064 16:56:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
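Condensing the transport bring-up and poll-group checks traced above (rpc_cmd in the trace forwards to the SPDK rpc.py script; the jq sum below is only an equivalent of the test's jsum/awk helper, and the flags are the ones used in this run):
# Create the TCP transport with the same flags as above, then confirm each of
# the four poll groups reports a TCP transport and zero I/O qpairs so far.
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_get_stats | jq '.poll_groups[].transports[].trtype'   # "TCP" for all four groups
rpc.py nvmf_get_stats | jq '[.poll_groups[].io_qpairs] | add'     # 0 before any host connects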
00:07:18.064 16:56:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:18.064 16:56:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:18.064 16:56:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:07:18.064 16:56:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:07:18.064 16:56:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:07:18.064 16:56:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:07:18.064 16:56:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:07:18.064 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.064 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.322 Malloc1 00:07:18.322 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.323 16:56:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:18.323 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.323 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.323 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.323 16:56:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:18.323 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.323 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.323 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.323 16:56:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:07:18.323 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.323 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.323 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.323 16:56:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:18.323 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.323 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.323 [2024-07-12 16:56:17.798826] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:18.323 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.323 16:56:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:07:18.323 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:18.323 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:07:18.323 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:07:18.323 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.323 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:18.323 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.323 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:18.323 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.323 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:18.323 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:18.323 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:07:18.323 [2024-07-12 16:56:17.821274] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02' 00:07:18.323 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:18.323 could not add new controller: failed to write to nvme-fabrics device 00:07:18.323 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:18.323 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:18.323 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:18.323 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:18.323 16:56:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:18.323 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.323 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.323 16:56:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.323 16:56:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:18.888 16:56:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:07:18.888 16:56:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:18.888 16:56:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:18.888 16:56:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:18.888 16:56:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:21.437 16:56:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:21.437 16:56:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:21.437 16:56:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:21.437 16:56:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:21.437 16:56:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:21.437 16:56:20 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:21.437 16:56:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:21.437 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:21.437 16:56:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:21.437 16:56:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:21.437 16:56:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:21.437 16:56:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:21.437 16:56:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:21.437 16:56:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:21.437 16:56:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:21.437 16:56:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:21.437 16:56:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.437 16:56:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.437 16:56:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.437 16:56:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:21.437 16:56:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:21.437 16:56:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:21.437 16:56:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:07:21.438 16:56:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.438 16:56:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:21.438 16:56:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.438 16:56:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:21.438 16:56:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.438 16:56:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:21.438 16:56:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:21.438 16:56:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:21.438 [2024-07-12 16:56:20.651151] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02' 00:07:21.438 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:21.438 could not add new controller: failed to write to nvme-fabrics device 00:07:21.438 16:56:20 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:07:21.438 16:56:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:21.438 16:56:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:21.438 16:56:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:21.438 16:56:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:07:21.438 16:56:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.438 16:56:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.438 16:56:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.438 16:56:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:21.694 16:56:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:07:21.694 16:56:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:21.694 16:56:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:21.694 16:56:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:21.694 16:56:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:24.219 16:56:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:24.219 16:56:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:24.219 16:56:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:24.219 16:56:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:24.219 16:56:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:24.219 16:56:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:24.219 16:56:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:24.219 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:24.219 16:56:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:24.219 16:56:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:24.219 16:56:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:24.219 16:56:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:24.219 16:56:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:24.219 16:56:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:24.219 16:56:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:24.219 16:56:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:24.219 16:56:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.219 16:56:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.219 16:56:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.220 16:56:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:07:24.220 16:56:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:24.220 16:56:23 
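The host access-control exercise traced above follows this pattern (a condensed sketch: HOSTNQN stands for the nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-... host NQN used in the trace, rpc.py and nvme paths are abbreviated, and the comments record the outcomes the test asserted):
# Subsystem with host allow-listing disabled first, then opened up step by step.
rpc.py bdev_malloc_create 64 512 -b Malloc1
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN"   # rejected: host not on the allow list
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN"   # accepted
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"                      # connect is rejected again
rpc.py nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1                           # any host accepted from here on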
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:24.220 16:56:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.220 16:56:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.220 16:56:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.220 16:56:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:24.220 16:56:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.220 16:56:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.220 [2024-07-12 16:56:23.474845] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:24.220 16:56:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.220 16:56:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:24.220 16:56:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.220 16:56:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.220 16:56:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.220 16:56:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:24.220 16:56:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.220 16:56:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.220 16:56:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.220 16:56:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:24.784 16:56:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:24.784 16:56:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:24.784 16:56:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:24.784 16:56:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:24.784 16:56:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:26.681 16:56:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:26.681 16:56:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:26.681 16:56:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:26.681 16:56:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:26.681 16:56:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:26.681 16:56:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:26.681 16:56:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:26.681 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:26.681 16:56:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:26.681 16:56:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:26.681 16:56:26 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:26.681 16:56:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:26.681 16:56:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:26.681 16:56:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:26.681 16:56:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:26.681 16:56:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:26.681 16:56:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.681 16:56:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.681 16:56:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.681 16:56:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:26.681 16:56:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.681 16:56:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.681 16:56:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.681 16:56:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:26.681 16:56:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:26.681 16:56:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.681 16:56:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.681 16:56:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.681 16:56:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:26.681 16:56:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.681 16:56:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.681 [2024-07-12 16:56:26.291131] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:26.681 16:56:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.681 16:56:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:26.681 16:56:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.681 16:56:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.681 16:56:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.681 16:56:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:26.681 16:56:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.681 16:56:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.681 16:56:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.681 16:56:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:27.614 16:56:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:27.614 16:56:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:07:27.614 16:56:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:27.614 16:56:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:27.614 16:56:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:29.511 16:56:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:29.511 16:56:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:29.511 16:56:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:29.511 16:56:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:29.511 16:56:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:29.511 16:56:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:29.511 16:56:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:29.511 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:29.511 16:56:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:29.511 16:56:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:29.511 16:56:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:29.511 16:56:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:29.511 16:56:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:29.511 16:56:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:29.511 16:56:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:29.511 16:56:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:29.511 16:56:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.511 16:56:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.511 16:56:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.511 16:56:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:29.511 16:56:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.511 16:56:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.511 16:56:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.511 16:56:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:29.511 16:56:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:29.511 16:56:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.511 16:56:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.511 16:56:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.511 16:56:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:29.511 16:56:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.511 16:56:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.511 [2024-07-12 16:56:29.160571] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:07:29.511 16:56:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.511 16:56:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:29.511 16:56:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.511 16:56:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.511 16:56:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.511 16:56:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:29.511 16:56:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.511 16:56:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.511 16:56:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.511 16:56:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:30.478 16:56:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:30.478 16:56:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:30.478 16:56:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:30.478 16:56:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:30.478 16:56:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:32.397 16:56:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:32.397 16:56:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:32.397 16:56:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:32.397 16:56:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:32.397 16:56:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:32.397 16:56:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:32.397 16:56:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:32.397 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:32.397 16:56:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:32.397 16:56:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:32.397 16:56:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:32.397 16:56:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:32.397 16:56:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:32.397 16:56:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:32.397 16:56:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:32.397 16:56:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:32.397 16:56:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.397 16:56:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.397 16:56:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:07:32.397 16:56:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:32.397 16:56:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.397 16:56:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.397 16:56:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.397 16:56:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:32.397 16:56:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:32.397 16:56:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.397 16:56:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.397 16:56:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.397 16:56:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:32.397 16:56:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.397 16:56:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.397 [2024-07-12 16:56:31.890969] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:32.397 16:56:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.397 16:56:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:32.397 16:56:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.397 16:56:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.398 16:56:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.398 16:56:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:32.398 16:56:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.398 16:56:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.398 16:56:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.398 16:56:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:32.962 16:56:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:32.962 16:56:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:32.962 16:56:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:32.962 16:56:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:32.962 16:56:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:34.857 16:56:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:34.857 16:56:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:34.857 16:56:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:34.857 16:56:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:34.857 16:56:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:34.857 
16:56:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:34.857 16:56:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:35.115 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:35.115 16:56:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:35.115 16:56:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:35.115 16:56:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:35.115 16:56:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:35.115 16:56:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:35.115 16:56:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:35.115 16:56:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:35.115 16:56:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:35.115 16:56:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.115 16:56:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.115 16:56:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.115 16:56:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:35.115 16:56:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.115 16:56:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.115 16:56:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.115 16:56:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:35.115 16:56:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:35.115 16:56:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.115 16:56:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.115 16:56:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.115 16:56:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:35.115 16:56:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.115 16:56:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.115 [2024-07-12 16:56:34.669849] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:35.115 16:56:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.115 16:56:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:35.115 16:56:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.115 16:56:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.115 16:56:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.115 16:56:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:35.115 16:56:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.115 16:56:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.115 16:56:34 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.115 16:56:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:35.681 16:56:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:35.681 16:56:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:35.681 16:56:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:35.681 16:56:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:35.681 16:56:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:38.208 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.208 [2024-07-12 16:56:37.448801] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.208 [2024-07-12 16:56:37.496818] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.208 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.209 [2024-07-12 16:56:37.544993] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
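The loop above (target/rpc.sh steps 81-94 and 99-107) keeps repeating one subsystem lifecycle through rpc.py. A condensed sketch of a single pass, assuming a running nvmf_tgt reachable over the default /var/tmp/spdk.sock, scripts/rpc.py on PATH as rpc.py, and an existing Malloc1 bdev:
# one create/connect/teardown cycle, as exercised repeatedly in the log above
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420   # the run above also passes --hostnqn/--hostid for this host
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1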
00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.209 [2024-07-12 16:56:37.593143] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
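The waitforserial / waitforserial_disconnect helpers seen between connect and disconnect above just poll lsblk for the expected serial. A minimal sketch of that polling, assuming the serial string SPDKISFASTANDAWESOME used by these subsystems:
# wait (up to ~16 tries, 2 s apart) for a block device with the given serial to appear
serial=SPDKISFASTANDAWESOME
i=0
while (( i++ <= 15 )); do
    count=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
    (( count >= 1 )) && break
    sleep 2
done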
00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.209 [2024-07-12 16:56:37.641312] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.209 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:07:38.209 "tick_rate": 2700000000, 00:07:38.209 "poll_groups": [ 00:07:38.209 { 00:07:38.209 "name": "nvmf_tgt_poll_group_000", 00:07:38.209 "admin_qpairs": 2, 00:07:38.209 "io_qpairs": 84, 00:07:38.209 "current_admin_qpairs": 0, 00:07:38.209 "current_io_qpairs": 0, 00:07:38.209 "pending_bdev_io": 0, 00:07:38.209 "completed_nvme_io": 183, 00:07:38.209 "transports": [ 00:07:38.209 { 00:07:38.209 "trtype": "TCP" 00:07:38.209 } 00:07:38.209 ] 00:07:38.209 }, 00:07:38.209 { 00:07:38.209 "name": "nvmf_tgt_poll_group_001", 00:07:38.209 "admin_qpairs": 2, 00:07:38.209 "io_qpairs": 84, 00:07:38.209 "current_admin_qpairs": 0, 00:07:38.209 "current_io_qpairs": 0, 00:07:38.209 "pending_bdev_io": 0, 00:07:38.209 "completed_nvme_io": 135, 00:07:38.209 "transports": [ 00:07:38.209 { 00:07:38.209 "trtype": "TCP" 00:07:38.209 } 00:07:38.209 ] 00:07:38.209 }, 00:07:38.209 { 00:07:38.210 
"name": "nvmf_tgt_poll_group_002", 00:07:38.210 "admin_qpairs": 1, 00:07:38.210 "io_qpairs": 84, 00:07:38.210 "current_admin_qpairs": 0, 00:07:38.210 "current_io_qpairs": 0, 00:07:38.210 "pending_bdev_io": 0, 00:07:38.210 "completed_nvme_io": 156, 00:07:38.210 "transports": [ 00:07:38.210 { 00:07:38.210 "trtype": "TCP" 00:07:38.210 } 00:07:38.210 ] 00:07:38.210 }, 00:07:38.210 { 00:07:38.210 "name": "nvmf_tgt_poll_group_003", 00:07:38.210 "admin_qpairs": 2, 00:07:38.210 "io_qpairs": 84, 00:07:38.210 "current_admin_qpairs": 0, 00:07:38.210 "current_io_qpairs": 0, 00:07:38.210 "pending_bdev_io": 0, 00:07:38.210 "completed_nvme_io": 212, 00:07:38.210 "transports": [ 00:07:38.210 { 00:07:38.210 "trtype": "TCP" 00:07:38.210 } 00:07:38.210 ] 00:07:38.210 } 00:07:38.210 ] 00:07:38.210 }' 00:07:38.210 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:07:38.210 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:38.210 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:38.210 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:38.210 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:07:38.210 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:07:38.210 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:07:38.210 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:38.210 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:38.210 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:07:38.210 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:07:38.210 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:07:38.210 16:56:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:07:38.210 16:56:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:38.210 16:56:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:07:38.210 16:56:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:38.210 16:56:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:07:38.210 16:56:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:38.210 16:56:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:38.210 rmmod nvme_tcp 00:07:38.210 rmmod nvme_fabrics 00:07:38.210 rmmod nvme_keyring 00:07:38.210 16:56:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:38.210 16:56:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:07:38.210 16:56:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:07:38.210 16:56:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1035274 ']' 00:07:38.210 16:56:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1035274 00:07:38.210 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 1035274 ']' 00:07:38.210 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 1035274 00:07:38.210 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:07:38.210 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:38.210 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1035274 00:07:38.210 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:07:38.210 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:38.210 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1035274' 00:07:38.210 killing process with pid 1035274 00:07:38.210 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 1035274 00:07:38.210 16:56:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 1035274 00:07:38.469 16:56:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:38.469 16:56:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:38.469 16:56:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:38.469 16:56:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:38.469 16:56:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:38.469 16:56:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:38.469 16:56:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:38.469 16:56:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:41.004 16:56:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:41.004 00:07:41.004 real 0m25.339s 00:07:41.004 user 1m21.925s 00:07:41.004 sys 0m4.317s 00:07:41.004 16:56:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.004 16:56:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:41.004 ************************************ 00:07:41.004 END TEST nvmf_rpc 00:07:41.004 ************************************ 00:07:41.004 16:56:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:41.004 16:56:40 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:41.004 16:56:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:41.004 16:56:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.004 16:56:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:41.004 ************************************ 00:07:41.004 START TEST nvmf_invalid 00:07:41.004 ************************************ 00:07:41.004 16:56:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:41.004 * Looking for test storage... 
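The pass/fail check that closed nvmf_rpc above summed per-poll-group counters from nvmf_get_stats with the jsum helper, i.e. jq piped into awk; the same aggregation in isolation (again assuming rpc.py on PATH) would be:
# total io_qpairs across all poll groups, as jsum computed for the (( 336 > 0 )) check
rpc.py nvmf_get_stats | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}'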
00:07:41.004 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:41.004 16:56:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:41.004 16:56:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:07:41.004 16:56:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:41.004 16:56:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:41.004 16:56:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:41.004 16:56:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:41.004 16:56:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:41.005 16:56:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:41.005 16:56:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:41.005 16:56:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:41.005 16:56:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:41.005 16:56:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:41.005 16:56:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:41.005 16:56:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:41.005 16:56:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:41.005 16:56:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:41.005 16:56:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:41.005 16:56:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:41.005 16:56:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:41.005 16:56:40 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:41.005 16:56:40 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:41.005 16:56:40 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:41.005 16:56:40 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.005 16:56:40 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.005 16:56:40 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.005 16:56:40 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:07:41.005 16:56:40 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.005 16:56:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:07:41.005 16:56:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:41.005 16:56:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:41.005 16:56:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:41.005 16:56:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:41.005 16:56:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:41.005 16:56:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:41.005 16:56:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:41.005 16:56:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:41.005 16:56:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:41.005 16:56:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:41.005 16:56:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:07:41.005 16:56:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:07:41.005 16:56:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:07:41.005 16:56:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:07:41.005 16:56:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:41.005 16:56:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:41.005 16:56:40 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:07:41.005 16:56:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:41.005 16:56:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:41.005 16:56:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:41.005 16:56:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:41.005 16:56:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:41.005 16:56:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:41.005 16:56:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:41.005 16:56:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:07:41.005 16:56:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:42.909 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:42.909 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:07:42.909 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:07:42.910 Found 0000:84:00.0 (0x8086 - 0x159b) 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:07:42.910 Found 0000:84:00.1 (0x8086 - 0x159b) 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:07:42.910 Found net devices under 0000:84:00.0: cvl_0_0 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:07:42.910 Found net devices under 0000:84:00.1: cvl_0_1 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:42.910 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:42.910 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:07:42.910 00:07:42.910 --- 10.0.0.2 ping statistics --- 00:07:42.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.910 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:42.910 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:42.910 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:07:42.910 00:07:42.910 --- 10.0.0.1 ping statistics --- 00:07:42.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.910 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:42.910 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:43.169 16:56:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:07:43.169 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:43.169 16:56:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:43.169 16:56:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:43.169 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1039788 00:07:43.169 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:43.169 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1039788 00:07:43.169 16:56:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 1039788 ']' 00:07:43.169 16:56:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.169 16:56:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:43.169 16:56:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.169 16:56:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:43.169 16:56:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:43.169 [2024-07-12 16:56:42.658620] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
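A condensed sketch of the wiring nvmftestinit performs just above: the target port is moved into a network namespace while the initiator port stays in the default namespace, assuming the two E810 ports already show up as cvl_0_0 and cvl_0_1:
# target side in cvl_0_0_ns_spdk, initiator side in the default namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator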
00:07:43.169 [2024-07-12 16:56:42.658702] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:43.169 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.169 [2024-07-12 16:56:42.728477] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:43.169 [2024-07-12 16:56:42.847212] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:43.169 [2024-07-12 16:56:42.847267] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:43.169 [2024-07-12 16:56:42.847281] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:43.169 [2024-07-12 16:56:42.847293] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:43.169 [2024-07-12 16:56:42.847303] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:43.169 [2024-07-12 16:56:42.847385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:43.169 [2024-07-12 16:56:42.847408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:43.169 [2024-07-12 16:56:42.847478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:43.169 [2024-07-12 16:56:42.847480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.427 16:56:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:43.427 16:56:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:07:43.427 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:43.427 16:56:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:43.427 16:56:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:43.427 16:56:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:43.427 16:56:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:43.427 16:56:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode17831 00:07:43.684 [2024-07-12 16:56:43.267508] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:07:43.684 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:07:43.684 { 00:07:43.684 "nqn": "nqn.2016-06.io.spdk:cnode17831", 00:07:43.684 "tgt_name": "foobar", 00:07:43.684 "method": "nvmf_create_subsystem", 00:07:43.684 "req_id": 1 00:07:43.684 } 00:07:43.684 Got JSON-RPC error response 00:07:43.684 response: 00:07:43.684 { 00:07:43.684 "code": -32603, 00:07:43.684 "message": "Unable to find target foobar" 00:07:43.684 }' 00:07:43.684 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:07:43.684 { 00:07:43.684 "nqn": "nqn.2016-06.io.spdk:cnode17831", 00:07:43.684 "tgt_name": "foobar", 00:07:43.684 "method": "nvmf_create_subsystem", 00:07:43.684 "req_id": 1 00:07:43.684 } 00:07:43.684 Got JSON-RPC error response 00:07:43.684 response: 00:07:43.684 { 00:07:43.684 "code": -32603, 00:07:43.684 "message": "Unable to find target foobar" 
00:07:43.684 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:07:43.684 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:07:43.684 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode28782 00:07:43.942 [2024-07-12 16:56:43.564513] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28782: invalid serial number 'SPDKISFASTANDAWESOME' 00:07:43.942 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:07:43.942 { 00:07:43.942 "nqn": "nqn.2016-06.io.spdk:cnode28782", 00:07:43.942 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:07:43.942 "method": "nvmf_create_subsystem", 00:07:43.942 "req_id": 1 00:07:43.942 } 00:07:43.942 Got JSON-RPC error response 00:07:43.942 response: 00:07:43.942 { 00:07:43.942 "code": -32602, 00:07:43.942 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:07:43.942 }' 00:07:43.942 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:07:43.942 { 00:07:43.942 "nqn": "nqn.2016-06.io.spdk:cnode28782", 00:07:43.942 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:07:43.942 "method": "nvmf_create_subsystem", 00:07:43.942 "req_id": 1 00:07:43.942 } 00:07:43.942 Got JSON-RPC error response 00:07:43.942 response: 00:07:43.942 { 00:07:43.942 "code": -32602, 00:07:43.942 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:07:43.942 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:43.942 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:07:43.942 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode13333 00:07:44.200 [2024-07-12 16:56:43.861497] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13333: invalid model number 'SPDK_Controller' 00:07:44.200 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:07:44.200 { 00:07:44.200 "nqn": "nqn.2016-06.io.spdk:cnode13333", 00:07:44.200 "model_number": "SPDK_Controller\u001f", 00:07:44.200 "method": "nvmf_create_subsystem", 00:07:44.200 "req_id": 1 00:07:44.200 } 00:07:44.200 Got JSON-RPC error response 00:07:44.200 response: 00:07:44.200 { 00:07:44.200 "code": -32602, 00:07:44.200 "message": "Invalid MN SPDK_Controller\u001f" 00:07:44.200 }' 00:07:44.200 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:07:44.200 { 00:07:44.200 "nqn": "nqn.2016-06.io.spdk:cnode13333", 00:07:44.200 "model_number": "SPDK_Controller\u001f", 00:07:44.200 "method": "nvmf_create_subsystem", 00:07:44.200 "req_id": 1 00:07:44.200 } 00:07:44.200 Got JSON-RPC error response 00:07:44.200 response: 00:07:44.200 { 00:07:44.200 "code": -32602, 00:07:44.200 "message": "Invalid MN SPDK_Controller\u001f" 00:07:44.200 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:44.200 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:07:44.200 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:07:44.200 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' 
'83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:44.200 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:44.200 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:44.200 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:44.200 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.200 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:07:44.200 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:07:44.200 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:07:44.200 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.200 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.200 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:07:44.200 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:07:44.200 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:07:44.200 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.200 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.200 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:07:44.457 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:07:44.457 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:07:44.457 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.457 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.457 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:07:44.457 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:07:44.457 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:07:44.457 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.457 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.457 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:07:44.457 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:07:44.457 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:07:44.457 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.457 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.457 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:07:44.457 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:07:44.457 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:07:44.457 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.457 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.457 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:07:44.457 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:07:44.457 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:07:44.457 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:07:44.457 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.457 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:07:44.457 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:07:44.457 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:07:44.457 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.457 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ 2 == \- ]] 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '2#Td1[kg|Q_6KJF8emBF' 00:07:44.458 16:56:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '2#Td1[kg|Q_6KJF8emBF' nqn.2016-06.io.spdk:cnode15575 00:07:44.717 [2024-07-12 16:56:44.182488] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15575: invalid serial number '2#Td1[kg|Q_6KJF8emBF' 00:07:44.717 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:07:44.717 { 00:07:44.717 "nqn": "nqn.2016-06.io.spdk:cnode15575", 00:07:44.717 "serial_number": "2#\u007fTd1[kg|Q_6KJF8emBF", 00:07:44.717 "method": "nvmf_create_subsystem", 00:07:44.717 "req_id": 1 00:07:44.717 } 00:07:44.717 Got JSON-RPC error response 00:07:44.717 response: 
00:07:44.717 { 00:07:44.717 "code": -32602, 00:07:44.717 "message": "Invalid SN 2#\u007fTd1[kg|Q_6KJF8emBF" 00:07:44.717 }' 00:07:44.717 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:07:44.717 { 00:07:44.717 "nqn": "nqn.2016-06.io.spdk:cnode15575", 00:07:44.717 "serial_number": "2#\u007fTd1[kg|Q_6KJF8emBF", 00:07:44.717 "method": "nvmf_create_subsystem", 00:07:44.717 "req_id": 1 00:07:44.717 } 00:07:44.717 Got JSON-RPC error response 00:07:44.717 response: 00:07:44.717 { 00:07:44.717 "code": -32602, 00:07:44.717 "message": "Invalid SN 2#\u007fTd1[kg|Q_6KJF8emBF" 00:07:44.717 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:44.717 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:07:44.717 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:07:44.717 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:44.717 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:44.717 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:44.717 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:44.717 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.717 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:07:44.717 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:07:44.717 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:07:44.717 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.717 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.717 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:07:44.717 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:07:44.717 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:07:44.717 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.717 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.717 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:07:44.717 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:07:44.717 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:07:44.717 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.717 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.717 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:07:44.717 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:07:44.717 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:07:44.717 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.717 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.717 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:07:44.717 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:07:44.717 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:07:44.717 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.717 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.717 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.718 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:07:44.719 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:07:44.719 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:07:44.719 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:07:44.719 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.719 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.719 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:07:44.719 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:07:44.719 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:07:44.719 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.719 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.719 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:07:44.719 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:07:44.719 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:07:44.719 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.719 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.719 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:07:44.719 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:07:44.719 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:07:44.719 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.719 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.719 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:07:44.719 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:07:44.719 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:07:44.719 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.719 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.719 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:07:44.719 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:07:44.719 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:07:44.719 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.719 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.719 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:07:44.719 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:07:44.719 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:07:44.719 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.719 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.719 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ D == \- ]] 00:07:44.719 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Ds$!@z5?9~bD2mE4o!|K@3wg0S:UFYOY uUI##z:N' 00:07:44.719 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'Ds$!@z5?9~bD2mE4o!|K@3wg0S:UFYOY uUI##z:N' nqn.2016-06.io.spdk:cnode19069 00:07:44.977 [2024-07-12 16:56:44.571793] nvmf_rpc.c: 
422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19069: invalid model number 'Ds$!@z5?9~bD2mE4o!|K@3wg0S:UFYOY uUI##z:N' 00:07:44.977 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:07:44.977 { 00:07:44.977 "nqn": "nqn.2016-06.io.spdk:cnode19069", 00:07:44.977 "model_number": "Ds$!@z5?9~bD2mE4o!|K@3wg0S:UFYOY uUI##z:N", 00:07:44.977 "method": "nvmf_create_subsystem", 00:07:44.977 "req_id": 1 00:07:44.977 } 00:07:44.977 Got JSON-RPC error response 00:07:44.977 response: 00:07:44.977 { 00:07:44.977 "code": -32602, 00:07:44.977 "message": "Invalid MN Ds$!@z5?9~bD2mE4o!|K@3wg0S:UFYOY uUI##z:N" 00:07:44.977 }' 00:07:44.977 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:07:44.977 { 00:07:44.977 "nqn": "nqn.2016-06.io.spdk:cnode19069", 00:07:44.977 "model_number": "Ds$!@z5?9~bD2mE4o!|K@3wg0S:UFYOY uUI##z:N", 00:07:44.977 "method": "nvmf_create_subsystem", 00:07:44.977 "req_id": 1 00:07:44.977 } 00:07:44.977 Got JSON-RPC error response 00:07:44.977 response: 00:07:44.977 { 00:07:44.977 "code": -32602, 00:07:44.977 "message": "Invalid MN Ds$!@z5?9~bD2mE4o!|K@3wg0S:UFYOY uUI##z:N" 00:07:44.977 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:44.977 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:07:45.235 [2024-07-12 16:56:44.816679] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:45.235 16:56:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:07:45.493 16:56:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:07:45.493 16:56:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:07:45.493 16:56:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:07:45.493 16:56:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:07:45.493 16:56:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:07:45.750 [2024-07-12 16:56:45.326369] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:07:45.750 16:56:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:07:45.750 { 00:07:45.750 "nqn": "nqn.2016-06.io.spdk:cnode", 00:07:45.750 "listen_address": { 00:07:45.750 "trtype": "tcp", 00:07:45.750 "traddr": "", 00:07:45.750 "trsvcid": "4421" 00:07:45.750 }, 00:07:45.750 "method": "nvmf_subsystem_remove_listener", 00:07:45.750 "req_id": 1 00:07:45.750 } 00:07:45.750 Got JSON-RPC error response 00:07:45.750 response: 00:07:45.750 { 00:07:45.751 "code": -32602, 00:07:45.751 "message": "Invalid parameters" 00:07:45.751 }' 00:07:45.751 16:56:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:07:45.751 { 00:07:45.751 "nqn": "nqn.2016-06.io.spdk:cnode", 00:07:45.751 "listen_address": { 00:07:45.751 "trtype": "tcp", 00:07:45.751 "traddr": "", 00:07:45.751 "trsvcid": "4421" 00:07:45.751 }, 00:07:45.751 "method": "nvmf_subsystem_remove_listener", 00:07:45.751 "req_id": 1 00:07:45.751 } 00:07:45.751 Got JSON-RPC error response 00:07:45.751 response: 00:07:45.751 { 00:07:45.751 "code": -32602, 00:07:45.751 "message": "Invalid parameters" 00:07:45.751 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:07:45.751 16:56:45 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27202 -i 0 00:07:46.008 [2024-07-12 16:56:45.587221] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27202: invalid cntlid range [0-65519] 00:07:46.008 16:56:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:07:46.008 { 00:07:46.008 "nqn": "nqn.2016-06.io.spdk:cnode27202", 00:07:46.008 "min_cntlid": 0, 00:07:46.008 "method": "nvmf_create_subsystem", 00:07:46.008 "req_id": 1 00:07:46.008 } 00:07:46.008 Got JSON-RPC error response 00:07:46.008 response: 00:07:46.008 { 00:07:46.008 "code": -32602, 00:07:46.008 "message": "Invalid cntlid range [0-65519]" 00:07:46.008 }' 00:07:46.008 16:56:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:07:46.008 { 00:07:46.008 "nqn": "nqn.2016-06.io.spdk:cnode27202", 00:07:46.008 "min_cntlid": 0, 00:07:46.008 "method": "nvmf_create_subsystem", 00:07:46.008 "req_id": 1 00:07:46.008 } 00:07:46.008 Got JSON-RPC error response 00:07:46.008 response: 00:07:46.008 { 00:07:46.008 "code": -32602, 00:07:46.008 "message": "Invalid cntlid range [0-65519]" 00:07:46.008 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:46.008 16:56:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15138 -i 65520 00:07:46.266 [2024-07-12 16:56:45.832038] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15138: invalid cntlid range [65520-65519] 00:07:46.266 16:56:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:07:46.266 { 00:07:46.266 "nqn": "nqn.2016-06.io.spdk:cnode15138", 00:07:46.266 "min_cntlid": 65520, 00:07:46.266 "method": "nvmf_create_subsystem", 00:07:46.266 "req_id": 1 00:07:46.266 } 00:07:46.266 Got JSON-RPC error response 00:07:46.266 response: 00:07:46.266 { 00:07:46.266 "code": -32602, 00:07:46.266 "message": "Invalid cntlid range [65520-65519]" 00:07:46.266 }' 00:07:46.266 16:56:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:07:46.266 { 00:07:46.266 "nqn": "nqn.2016-06.io.spdk:cnode15138", 00:07:46.266 "min_cntlid": 65520, 00:07:46.266 "method": "nvmf_create_subsystem", 00:07:46.266 "req_id": 1 00:07:46.266 } 00:07:46.266 Got JSON-RPC error response 00:07:46.266 response: 00:07:46.266 { 00:07:46.266 "code": -32602, 00:07:46.266 "message": "Invalid cntlid range [65520-65519]" 00:07:46.266 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:46.266 16:56:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4418 -I 0 00:07:46.524 [2024-07-12 16:56:46.076929] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4418: invalid cntlid range [1-0] 00:07:46.524 16:56:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:07:46.524 { 00:07:46.524 "nqn": "nqn.2016-06.io.spdk:cnode4418", 00:07:46.524 "max_cntlid": 0, 00:07:46.524 "method": "nvmf_create_subsystem", 00:07:46.524 "req_id": 1 00:07:46.524 } 00:07:46.524 Got JSON-RPC error response 00:07:46.524 response: 00:07:46.524 { 00:07:46.524 "code": -32602, 00:07:46.524 "message": "Invalid cntlid range [1-0]" 00:07:46.524 }' 00:07:46.524 16:56:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:07:46.524 { 
00:07:46.524 "nqn": "nqn.2016-06.io.spdk:cnode4418", 00:07:46.524 "max_cntlid": 0, 00:07:46.524 "method": "nvmf_create_subsystem", 00:07:46.524 "req_id": 1 00:07:46.524 } 00:07:46.524 Got JSON-RPC error response 00:07:46.524 response: 00:07:46.524 { 00:07:46.524 "code": -32602, 00:07:46.524 "message": "Invalid cntlid range [1-0]" 00:07:46.524 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:46.524 16:56:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22115 -I 65520 00:07:46.782 [2024-07-12 16:56:46.321689] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22115: invalid cntlid range [1-65520] 00:07:46.782 16:56:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:07:46.782 { 00:07:46.782 "nqn": "nqn.2016-06.io.spdk:cnode22115", 00:07:46.782 "max_cntlid": 65520, 00:07:46.782 "method": "nvmf_create_subsystem", 00:07:46.782 "req_id": 1 00:07:46.782 } 00:07:46.782 Got JSON-RPC error response 00:07:46.782 response: 00:07:46.782 { 00:07:46.782 "code": -32602, 00:07:46.782 "message": "Invalid cntlid range [1-65520]" 00:07:46.782 }' 00:07:46.782 16:56:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:07:46.782 { 00:07:46.782 "nqn": "nqn.2016-06.io.spdk:cnode22115", 00:07:46.782 "max_cntlid": 65520, 00:07:46.782 "method": "nvmf_create_subsystem", 00:07:46.782 "req_id": 1 00:07:46.782 } 00:07:46.782 Got JSON-RPC error response 00:07:46.782 response: 00:07:46.782 { 00:07:46.782 "code": -32602, 00:07:46.782 "message": "Invalid cntlid range [1-65520]" 00:07:46.782 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:46.782 16:56:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30708 -i 6 -I 5 00:07:47.041 [2024-07-12 16:56:46.590605] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30708: invalid cntlid range [6-5] 00:07:47.041 16:56:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:07:47.041 { 00:07:47.041 "nqn": "nqn.2016-06.io.spdk:cnode30708", 00:07:47.041 "min_cntlid": 6, 00:07:47.041 "max_cntlid": 5, 00:07:47.041 "method": "nvmf_create_subsystem", 00:07:47.041 "req_id": 1 00:07:47.041 } 00:07:47.041 Got JSON-RPC error response 00:07:47.041 response: 00:07:47.041 { 00:07:47.041 "code": -32602, 00:07:47.041 "message": "Invalid cntlid range [6-5]" 00:07:47.041 }' 00:07:47.041 16:56:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:07:47.041 { 00:07:47.041 "nqn": "nqn.2016-06.io.spdk:cnode30708", 00:07:47.041 "min_cntlid": 6, 00:07:47.041 "max_cntlid": 5, 00:07:47.041 "method": "nvmf_create_subsystem", 00:07:47.041 "req_id": 1 00:07:47.041 } 00:07:47.041 Got JSON-RPC error response 00:07:47.041 response: 00:07:47.041 { 00:07:47.041 "code": -32602, 00:07:47.041 "message": "Invalid cntlid range [6-5]" 00:07:47.041 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:47.041 16:56:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:07:47.041 16:56:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:07:47.041 { 00:07:47.041 "name": "foobar", 00:07:47.041 "method": "nvmf_delete_target", 00:07:47.041 "req_id": 1 00:07:47.041 } 00:07:47.041 Got JSON-RPC error response 
00:07:47.041 response: 00:07:47.041 { 00:07:47.041 "code": -32602, 00:07:47.041 "message": "The specified target doesn'\''t exist, cannot delete it." 00:07:47.041 }' 00:07:47.041 16:56:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:07:47.041 { 00:07:47.041 "name": "foobar", 00:07:47.041 "method": "nvmf_delete_target", 00:07:47.041 "req_id": 1 00:07:47.041 } 00:07:47.041 Got JSON-RPC error response 00:07:47.041 response: 00:07:47.041 { 00:07:47.041 "code": -32602, 00:07:47.041 "message": "The specified target doesn't exist, cannot delete it." 00:07:47.041 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:07:47.041 16:56:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:07:47.041 16:56:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:07:47.041 16:56:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:47.041 16:56:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:07:47.041 16:56:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:47.041 16:56:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:07:47.041 16:56:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:47.041 16:56:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:47.298 rmmod nvme_tcp 00:07:47.298 rmmod nvme_fabrics 00:07:47.298 rmmod nvme_keyring 00:07:47.298 16:56:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:47.298 16:56:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:07:47.298 16:56:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:07:47.298 16:56:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 1039788 ']' 00:07:47.298 16:56:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 1039788 00:07:47.298 16:56:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 1039788 ']' 00:07:47.299 16:56:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 1039788 00:07:47.299 16:56:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:07:47.299 16:56:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:47.299 16:56:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1039788 00:07:47.299 16:56:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:47.299 16:56:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:47.299 16:56:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1039788' 00:07:47.299 killing process with pid 1039788 00:07:47.299 16:56:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 1039788 00:07:47.299 16:56:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 1039788 00:07:47.558 16:56:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:47.558 16:56:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:47.558 16:56:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:47.558 16:56:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:47.558 16:56:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:47.558 16:56:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:07:47.558 16:56:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:47.558 16:56:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:49.461 16:56:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:49.461 00:07:49.461 real 0m8.900s 00:07:49.461 user 0m20.580s 00:07:49.461 sys 0m2.530s 00:07:49.461 16:56:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.461 16:56:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:49.461 ************************************ 00:07:49.461 END TEST nvmf_invalid 00:07:49.461 ************************************ 00:07:49.719 16:56:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:49.719 16:56:49 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:49.719 16:56:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:49.719 16:56:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.719 16:56:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:49.719 ************************************ 00:07:49.719 START TEST nvmf_abort 00:07:49.719 ************************************ 00:07:49.719 16:56:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:49.719 * Looking for test storage... 00:07:49.719 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:49.719 16:56:49 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:49.719 16:56:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:49.719 16:56:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:49.719 16:56:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:49.719 16:56:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:49.719 16:56:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:49.719 16:56:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:49.719 16:56:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:49.719 16:56:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:49.719 16:56:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:49.720 16:56:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:49.720 16:56:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:49.720 16:56:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:49.720 16:56:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:49.720 16:56:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:49.720 16:56:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:49.720 16:56:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:49.720 16:56:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:49.720 16:56:49 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:49.720 16:56:49 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:49.720 16:56:49 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:49.720 16:56:49 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:49.720 16:56:49 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.720 16:56:49 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.720 16:56:49 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.720 16:56:49 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:49.720 16:56:49 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.720 16:56:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:07:49.720 16:56:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:49.720 16:56:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:49.720 16:56:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:49.720 16:56:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:49.720 16:56:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:49.720 16:56:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:49.720 
16:56:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:49.720 16:56:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:49.720 16:56:49 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:49.720 16:56:49 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:49.720 16:56:49 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:49.720 16:56:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:49.720 16:56:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:49.720 16:56:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:49.720 16:56:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:49.720 16:56:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:49.720 16:56:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:49.720 16:56:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:49.720 16:56:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:49.720 16:56:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:49.720 16:56:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:49.720 16:56:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:07:49.720 16:56:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:07:52.247 Found 0000:84:00.0 (0x8086 - 0x159b) 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:07:52.247 Found 0000:84:00.1 (0x8086 - 0x159b) 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: 
cvl_0_0' 00:07:52.247 Found net devices under 0000:84:00.0: cvl_0_0 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:07:52.247 Found net devices under 0000:84:00.1: cvl_0_1 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:52.247 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:52.247 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:07:52.247 00:07:52.247 --- 10.0.0.2 ping statistics --- 00:07:52.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.247 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:52.247 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:52.247 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:07:52.247 00:07:52.247 --- 10.0.0.1 ping statistics --- 00:07:52.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.247 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=1042444 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1042444 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 1042444 ']' 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:52.247 16:56:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:52.247 [2024-07-12 16:56:51.561502] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
00:07:52.248 [2024-07-12 16:56:51.561604] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:52.248 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.248 [2024-07-12 16:56:51.624686] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:52.248 [2024-07-12 16:56:51.725112] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:52.248 [2024-07-12 16:56:51.725176] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:52.248 [2024-07-12 16:56:51.725198] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:52.248 [2024-07-12 16:56:51.725209] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:52.248 [2024-07-12 16:56:51.725219] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:52.248 [2024-07-12 16:56:51.725312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:52.248 [2024-07-12 16:56:51.725387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:52.248 [2024-07-12 16:56:51.725391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.248 16:56:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:52.248 16:56:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:07:52.248 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:52.248 16:56:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:52.248 16:56:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:52.248 16:56:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:52.248 16:56:51 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:52.248 16:56:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.248 16:56:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:52.248 [2024-07-12 16:56:51.874793] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:52.248 16:56:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.248 16:56:51 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:52.248 16:56:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.248 16:56:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:52.248 Malloc0 00:07:52.248 16:56:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.248 16:56:51 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:52.248 16:56:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.248 16:56:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:52.248 Delay0 00:07:52.248 16:56:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.248 16:56:51 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
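The trace above amounts to a short target bring-up sequence before the abort run. A condensed sketch, assuming rpc.py is on PATH and nvmf_tgt is already running inside the cvl_0_0_ns_spdk namespace as set up earlier; the sizes, names, and addresses simply mirror the values visible in the trace and are illustrative only:

# TCP transport plus a Malloc0 -> Delay0 bdev chain to back the namespace
rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
rpc.py bdev_malloc_create 64 4096 -b Malloc0
rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
# expose Delay0 through a subsystem and listen on 10.0.0.2:4420 (plus discovery)
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

With the listener up, the abort example in the trace (build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128) drives queued I/O against cnode0 and aborts it, which is what produces the "abort submitted / success / unsuccess" counters reported below.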
00:07:52.248 16:56:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.248 16:56:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:52.505 16:56:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.505 16:56:51 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:52.505 16:56:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.505 16:56:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:52.505 16:56:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.505 16:56:51 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:52.505 16:56:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.505 16:56:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:52.505 [2024-07-12 16:56:51.951846] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:52.505 16:56:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.505 16:56:51 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:52.505 16:56:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.505 16:56:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:52.505 16:56:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.505 16:56:51 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:52.505 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.505 [2024-07-12 16:56:52.016242] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:54.400 Initializing NVMe Controllers 00:07:54.400 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:54.400 controller IO queue size 128 less than required 00:07:54.400 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:54.400 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:54.400 Initialization complete. Launching workers. 
00:07:54.400 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 33179 00:07:54.400 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33240, failed to submit 62 00:07:54.400 success 33183, unsuccess 57, failed 0 00:07:54.657 16:56:54 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:54.657 16:56:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.657 16:56:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:54.657 16:56:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.657 16:56:54 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:54.657 16:56:54 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:54.657 16:56:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:54.657 16:56:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:07:54.657 16:56:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:54.657 16:56:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:07:54.657 16:56:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:54.657 16:56:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:54.657 rmmod nvme_tcp 00:07:54.657 rmmod nvme_fabrics 00:07:54.657 rmmod nvme_keyring 00:07:54.657 16:56:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:54.657 16:56:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:07:54.657 16:56:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:07:54.657 16:56:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1042444 ']' 00:07:54.657 16:56:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1042444 00:07:54.658 16:56:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 1042444 ']' 00:07:54.658 16:56:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 1042444 00:07:54.658 16:56:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:07:54.658 16:56:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:54.658 16:56:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1042444 00:07:54.658 16:56:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:07:54.658 16:56:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:07:54.658 16:56:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1042444' 00:07:54.658 killing process with pid 1042444 00:07:54.658 16:56:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 1042444 00:07:54.658 16:56:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 1042444 00:07:54.917 16:56:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:54.917 16:56:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:54.917 16:56:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:54.917 16:56:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:54.917 16:56:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:54.917 16:56:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.917 16:56:54 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:54.917 16:56:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.453 16:56:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:57.453 00:07:57.453 real 0m7.345s 00:07:57.453 user 0m10.350s 00:07:57.453 sys 0m2.611s 00:07:57.453 16:56:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:57.453 16:56:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:57.453 ************************************ 00:07:57.453 END TEST nvmf_abort 00:07:57.453 ************************************ 00:07:57.453 16:56:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:57.453 16:56:56 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:57.453 16:56:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:57.453 16:56:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.453 16:56:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:57.453 ************************************ 00:07:57.453 START TEST nvmf_ns_hotplug_stress 00:07:57.453 ************************************ 00:07:57.453 16:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:57.453 * Looking for test storage... 00:07:57.453 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:57.453 16:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:57.453 16:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:57.453 16:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:57.453 16:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:57.453 16:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:57.453 16:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:57.453 16:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:57.453 16:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:57.453 16:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:57.453 16:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:57.453 16:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:57.453 16:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:57.453 16:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:57.453 16:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:57.453 16:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:57.453 16:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:57.453 16:56:56 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:57.453 16:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:57.453 16:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:57.453 16:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:57.453 16:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:57.453 16:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:57.453 16:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.453 16:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.453 16:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.453 16:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:57.453 16:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.453 16:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:07:57.453 16:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:57.453 16:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:57.453 16:56:56 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:57.453 16:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:57.453 16:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:57.453 16:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:57.453 16:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:57.453 16:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:57.453 16:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:57.453 16:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:57.453 16:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:57.453 16:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:57.453 16:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:57.453 16:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:57.453 16:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:57.453 16:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.453 16:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:57.453 16:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.453 16:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:57.453 16:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:57.453 16:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:07:57.453 16:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:59.399 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:59.399 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:07:59.399 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:59.399 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:59.399 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:59.399 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:59.399 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:59.399 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:07:59.399 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:59.399 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:07:59.399 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:07:59.399 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:07:59.399 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:07:59.399 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:07:59.399 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:07:59.399 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:59.399 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:59.399 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:07:59.400 Found 0000:84:00.0 (0x8086 - 0x159b) 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:07:59.400 Found 0000:84:00.1 (0x8086 - 0x159b) 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:59.400 16:56:58 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:07:59.400 Found net devices under 0000:84:00.0: cvl_0_0 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:07:59.400 Found net devices under 0000:84:00.1: cvl_0_1 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:59.400 16:56:58 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:59.400 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:59.400 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:07:59.400 00:07:59.400 --- 10.0.0.2 ping statistics --- 00:07:59.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.400 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:59.400 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:59.400 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:07:59.400 00:07:59.400 --- 10.0.0.1 ping statistics --- 00:07:59.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.400 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:59.400 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:59.401 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:59.401 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:59.401 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:59.401 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:59.401 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:59.401 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:59.401 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:59.401 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1044693 00:07:59.401 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:59.401 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1044693 00:07:59.401 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 1044693 ']' 00:07:59.401 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.401 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:59.401 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.401 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:59.401 16:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:59.401 [2024-07-12 16:56:58.989941] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
00:07:59.401 [2024-07-12 16:56:58.990038] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:59.401 EAL: No free 2048 kB hugepages reported on node 1 00:07:59.401 [2024-07-12 16:56:59.059678] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:59.659 [2024-07-12 16:56:59.174283] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:59.659 [2024-07-12 16:56:59.174365] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:59.659 [2024-07-12 16:56:59.174379] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:59.659 [2024-07-12 16:56:59.174390] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:59.659 [2024-07-12 16:56:59.174400] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:59.659 [2024-07-12 16:56:59.174505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:59.659 [2024-07-12 16:56:59.174563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:59.659 [2024-07-12 16:56:59.174566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:59.659 16:56:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:59.659 16:56:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:07:59.659 16:56:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:59.659 16:56:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:59.659 16:56:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:59.659 16:56:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:59.659 16:56:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:59.659 16:56:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:59.917 [2024-07-12 16:56:59.595463] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:00.175 16:56:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:00.431 16:56:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:00.688 [2024-07-12 16:57:00.178620] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:00.688 16:57:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:00.945 16:57:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:08:01.201 Malloc0 00:08:01.201 16:57:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:01.458 Delay0 00:08:01.458 16:57:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.715 16:57:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:01.973 NULL1 00:08:01.973 16:57:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:02.230 16:57:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1045207 00:08:02.230 16:57:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:02.230 16:57:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045207 00:08:02.230 16:57:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.230 EAL: No free 2048 kB hugepages reported on node 1 00:08:02.488 16:57:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.745 16:57:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:02.745 16:57:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:03.001 true 00:08:03.001 16:57:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045207 00:08:03.001 16:57:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.259 16:57:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.516 16:57:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:03.516 16:57:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:03.772 true 00:08:03.773 16:57:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045207 00:08:03.773 16:57:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.337 Read completed with error (sct=0, sc=11) 00:08:04.337 16:57:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.595 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.595 16:57:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:04.595 16:57:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:04.854 true 00:08:04.854 16:57:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045207 00:08:04.854 16:57:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.111 16:57:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.368 16:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:05.368 16:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:05.625 true 00:08:05.625 16:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045207 00:08:05.625 16:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.997 16:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.997 16:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:06.997 16:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:07.255 true 00:08:07.255 16:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045207 00:08:07.255 16:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.512 16:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.770 16:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:07.770 16:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:08.028 true 00:08:08.028 16:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045207 00:08:08.028 16:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.961 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:08.961 16:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.961 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:08.961 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:08.962 16:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:08.962 16:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:09.220 true 00:08:09.220 16:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045207 00:08:09.220 16:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.478 16:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.736 16:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:09.736 16:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:09.994 true 00:08:09.994 16:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045207 00:08:09.994 16:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.926 16:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:11.184 16:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:11.184 16:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:11.441 true 00:08:11.441 16:57:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045207 00:08:11.442 16:57:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.699 16:57:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:11.957 16:57:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:11.957 16:57:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:12.215 true 00:08:12.215 16:57:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045207 00:08:12.215 16:57:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.472 16:57:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.730 16:57:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:12.730 16:57:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:12.988 true 00:08:12.988 16:57:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045207 00:08:12.988 16:57:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.923 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:13.923 16:57:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.181 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.181 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.181 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.181 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.181 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.181 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.181 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.438 16:57:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:14.438 16:57:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:14.696 true 00:08:14.696 16:57:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045207 00:08:14.696 16:57:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.261 16:57:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:15.519 16:57:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:15.519 16:57:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:15.776 true 00:08:15.776 16:57:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045207 00:08:15.776 16:57:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.034 16:57:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:16.291 16:57:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:16.291 16:57:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:16.549 true 00:08:16.549 16:57:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045207 00:08:16.549 16:57:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.480 16:57:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:17.737 16:57:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:17.737 16:57:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:17.993 true 00:08:17.993 16:57:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045207 00:08:17.993 16:57:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.249 16:57:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:18.506 16:57:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:18.506 16:57:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:18.765 true 00:08:18.765 16:57:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045207 00:08:18.765 16:57:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.694 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:19.694 16:57:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:19.694 16:57:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:19.694 16:57:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:19.951 true 00:08:19.951 16:57:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045207 00:08:19.951 16:57:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.220 16:57:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:20.520 16:57:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:20.520 16:57:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1018 00:08:20.801 true 00:08:20.801 16:57:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045207 00:08:20.801 16:57:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.734 16:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:21.734 16:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:21.734 16:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:21.992 true 00:08:21.992 16:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045207 00:08:21.992 16:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.249 16:57:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:22.506 16:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:22.506 16:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:22.764 true 00:08:22.764 16:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045207 00:08:22.764 16:57:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.702 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.702 16:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:23.702 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.702 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.959 16:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:23.959 16:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:24.216 true 00:08:24.216 16:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045207 00:08:24.216 16:57:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.473 16:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:24.731 16:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:24.731 16:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:24.989 true 00:08:24.989 16:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045207 00:08:24.989 16:57:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:25.922 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:25.922 16:57:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:25.922 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:26.179 16:57:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:26.180 16:57:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:26.437 true 00:08:26.437 16:57:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045207 00:08:26.437 16:57:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:26.695 16:57:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:26.954 16:57:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:26.954 16:57:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:27.214 true 00:08:27.214 16:57:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045207 00:08:27.214 16:57:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:27.473 16:57:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:27.731 16:57:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:27.731 16:57:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:27.988 true 00:08:27.988 16:57:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045207 00:08:27.988 16:57:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:28.922 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:28.922 16:57:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:28.922 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.486 16:57:28 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:29.486 16:57:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:29.486 true 00:08:29.486 16:57:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045207 00:08:29.486 16:57:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.744 16:57:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:30.001 16:57:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:30.001 16:57:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:30.259 true 00:08:30.259 16:57:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045207 00:08:30.259 16:57:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:31.191 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:31.191 16:57:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:31.191 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:31.191 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:31.191 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:31.449 16:57:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:31.449 16:57:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:31.707 true 00:08:31.707 16:57:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045207 00:08:31.707 16:57:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:31.965 16:57:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:32.222 16:57:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:08:32.222 16:57:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:08:32.479 true 00:08:32.479 16:57:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045207 00:08:32.479 16:57:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:33.411 Initializing NVMe Controllers 00:08:33.411 Attached 
to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:33.411 Controller IO queue size 128, less than required. 00:08:33.411 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:33.411 Controller IO queue size 128, less than required. 00:08:33.411 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:33.411 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:33.411 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:33.411 Initialization complete. Launching workers. 00:08:33.411 ======================================================== 00:08:33.411 Latency(us) 00:08:33.411 Device Information : IOPS MiB/s Average min max 00:08:33.411 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 885.86 0.43 75324.03 2387.55 1013029.61 00:08:33.411 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10557.65 5.16 12087.99 3059.42 543682.37 00:08:33.411 ======================================================== 00:08:33.411 Total : 11443.51 5.59 16983.22 2387.55 1013029.61 00:08:33.411 00:08:33.411 16:57:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:33.668 16:57:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:08:33.668 16:57:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:08:33.925 true 00:08:33.925 16:57:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1045207 00:08:33.925 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1045207) - No such process 00:08:33.925 16:57:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1045207 00:08:33.925 16:57:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.181 16:57:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:34.438 16:57:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:08:34.438 16:57:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:34.438 16:57:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:34.438 16:57:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:34.438 16:57:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:34.695 null0 00:08:34.695 16:57:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:34.695 16:57:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:34.695 16:57:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:34.952 null1 00:08:34.952 16:57:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:34.952 16:57:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:34.952 16:57:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:35.209 null2 00:08:35.209 16:57:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:35.209 16:57:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:35.209 16:57:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:35.466 null3 00:08:35.466 16:57:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:35.466 16:57:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:35.466 16:57:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:35.723 null4 00:08:35.723 16:57:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:35.723 16:57:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:35.723 16:57:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:35.980 null5 00:08:35.980 16:57:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:35.980 16:57:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:35.980 16:57:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:36.237 null6 00:08:36.237 16:57:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:36.237 16:57:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:36.237 16:57:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:36.494 null7 00:08:36.494 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:36.494 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:36.494 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:36.494 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
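The trace up to 00:08:33 is the single-namespace phase of ns_hotplug_stress.sh: while the I/O generator (PID 1045207) is still alive, the script repeatedly removes and re-adds namespace 1 (backed by the Delay0 bdev) on nqn.2016-06.io.spdk:cnode1 and resizes NULL1 to a value that increments by one each pass (null_size 1007 through 1030). Once kill -0 reports "No such process" the loop ends, the script waits on the generator and removes namespaces 1 and 2 before starting the multi-threaded phase below. A minimal sketch of that loop, reconstructed from the @44-@50 trace markers (the rpc.py path is shortened and the $perf_pid variable name is an assumption, not copied from the script):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    null_size=1000
    while kill -0 "$perf_pid"; do                        # keep cycling while the I/O generator runs
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        ((++null_size))                                  # 1007, 1008, ... as seen in the log
        $rpc_py bdev_null_resize NULL1 "$null_size"      # resize the live NULL1 bdev under load
    done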
00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
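Each of the eight background workers in this phase runs the same add_remove helper: given a namespace ID and a null bdev, it attaches the bdev as that namespace and detaches it again, ten times in a row, so eight namespaces are hot-plugged and hot-unplugged concurrently against cnode1. A sketch of the helper, following the @14-@18 markers visible in the trace (the function body is reconstructed from those markers, not copied from the script):

    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            # add the bdev as namespace $nsid, then immediately remove it
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }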
00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
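The driver for this phase first creates eight null bdevs (null0 through null7, size 100, block size 4096), then starts one add_remove worker per bdev in the background, collecting the worker PIDs (1049902, 1049904, ... in this run) so it can wait for all of them to finish their ten cycles. A sketch of that driver, assuming the two-loop structure implied by the @58-@66 markers:

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        $rpc_py bdev_null_create "null$i" 100 4096       # one null bdev per worker
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove "$((i + 1))" "null$i" &               # namespace IDs 1..8, one per bdev
        pids+=($!)
    done
    wait "${pids[@]}"                                    # block until every worker completes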
00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1049902 1049904 1049906 1049909 1049911 1049913 1049915 1049917 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.495 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:36.753 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:36.753 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.753 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:36.753 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:36.753 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:36.753 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:36.753 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:36.753 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:37.011 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.011 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.011 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:37.011 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.011 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.011 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:37.011 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.011 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.011 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:37.011 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.011 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.011 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:37.011 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.011 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.011 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:37.011 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.011 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.011 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:37.011 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.011 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.011 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:37.011 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.011 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.011 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:08:37.269 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.269 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:37.269 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:37.269 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:37.269 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:37.269 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:37.269 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:37.269 16:57:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:37.527 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.527 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.527 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:37.527 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.527 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.527 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:37.527 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.527 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.527 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:37.527 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.527 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.527 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.527 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:37.527 
16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.527 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:37.527 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.527 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.527 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:37.527 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.527 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.527 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:37.527 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.527 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.527 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:37.785 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.785 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:37.785 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:37.785 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:37.785 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:37.785 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:37.785 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:37.785 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:38.043 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.043 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.043 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:38.043 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.043 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.043 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:38.043 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.043 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.043 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:38.043 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.043 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.043 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:38.043 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.043 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.043 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.043 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.043 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:38.043 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:38.043 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.043 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.043 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:38.043 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.043 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.043 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:38.301 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:38.301 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:08:38.301 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:38.301 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:38.301 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:38.301 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:38.301 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:38.301 16:57:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:38.560 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.560 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.560 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:38.560 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.560 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.560 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:38.560 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.560 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.560 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:38.560 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.560 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.560 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:38.560 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.560 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.560 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:38.560 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.560 
16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.560 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:38.560 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.560 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.560 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:38.560 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.560 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.560 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:38.818 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:38.818 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:38.818 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:38.818 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:38.818 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:38.818 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:38.818 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:38.818 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:39.076 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.076 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.076 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:39.076 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.076 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.076 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:39.076 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.076 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.076 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:39.076 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.076 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.077 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:39.077 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.077 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.335 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:39.335 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.335 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.335 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:39.335 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.335 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.335 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:39.335 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.335 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.335 16:57:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:39.593 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:39.593 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:39.593 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:39.593 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:08:39.593 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:39.593 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:39.593 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:39.593 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:39.851 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.851 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.851 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:39.851 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.852 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.852 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:39.852 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.852 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.852 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:39.852 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.852 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.852 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:39.852 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.852 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.852 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:39.852 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.852 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.852 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:39.852 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.852 
16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.852 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:39.852 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.852 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.852 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:40.109 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:40.109 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:40.109 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:40.110 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:40.110 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:40.110 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:40.110 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:40.110 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:40.367 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.367 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.367 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:40.367 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.367 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.367 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:40.367 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.367 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.367 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:40.367 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.367 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.367 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:40.368 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.368 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.368 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:40.368 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.368 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.368 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:40.368 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.368 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.368 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:40.368 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.368 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.368 16:57:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:40.626 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:40.626 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:40.626 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:40.626 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:40.626 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:40.626 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:08:40.626 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:40.626 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:40.884 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.884 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.884 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:40.884 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.884 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.884 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:40.884 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.884 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.884 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:40.884 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.884 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.884 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:40.884 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.884 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.884 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:40.884 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.884 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.884 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:40.884 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.884 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.884 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.884 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.884 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:40.884 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:41.142 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:41.142 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:41.142 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:41.142 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:41.142 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:41.142 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:41.142 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:41.142 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:41.400 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:41.400 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:41.400 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:41.400 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:41.400 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:41.400 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:41.400 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:41.400 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:41.400 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:41.400 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:41.400 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:41.400 16:57:40 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:41.400 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:41.400 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:41.400 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:41.400 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:41.400 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:41.400 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:41.400 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:41.400 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:41.400 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:41.400 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:41.400 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:41.400 16:57:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:41.662 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:41.662 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:41.662 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:41.662 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:41.662 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:41.662 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:41.662 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:41.662 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:41.941 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:41.941 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:41.941 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:41.941 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:41.941 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:41.941 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:41.941 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:41.941 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:41.941 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:41.941 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:41.941 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:41.941 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:41.941 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:41.941 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:41.941 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:41.941 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:41.941 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:41.941 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:41.941 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:41.941 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:08:41.941 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:41.941 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:08:41.941 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:41.941 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:41.941 rmmod nvme_tcp 00:08:41.941 rmmod nvme_fabrics 00:08:41.941 rmmod nvme_keyring 00:08:41.941 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:41.941 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:08:41.941 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:08:41.941 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1044693 ']' 00:08:41.941 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1044693 00:08:41.941 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 1044693 ']' 00:08:41.941 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 1044693 00:08:41.941 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:08:41.941 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:41.941 16:57:41 
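The namespace hotplug cycle traced above comes from ns_hotplug_stress.sh lines @16-@18: a counter loop that repeatedly attaches eight null bdevs as namespaces 1-8 of nqn.2016-06.io.spdk:cnode1 and then detaches them. A minimal sketch of that loop, reconstructed from the commands visible in the trace (the rpc.py path and NQN are copied from the trace; backgrounding the RPCs, which would explain the interleaved ordering above, is an assumption):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    for (( i = 0; i < 10; ++i )); do
        # hot-add null0..null7 as namespaces 1..8 of the subsystem
        for n in $(seq 1 8); do
            "$rpc_py" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))" &
        done
        wait
        # hot-remove the same namespaces again
        for n in $(seq 1 8); do
            "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$n" &
        done
        wait
    done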
nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1044693 00:08:41.941 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:41.941 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:41.941 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1044693' 00:08:41.941 killing process with pid 1044693 00:08:41.941 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 1044693 00:08:41.941 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 1044693 00:08:42.205 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:42.205 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:42.205 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:42.205 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:42.205 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:42.205 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.205 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:42.205 16:57:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.775 16:57:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:44.775 00:08:44.775 real 0m47.344s 00:08:44.775 user 3m35.250s 00:08:44.775 sys 0m17.131s 00:08:44.775 16:57:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:44.775 16:57:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:44.775 ************************************ 00:08:44.775 END TEST nvmf_ns_hotplug_stress 00:08:44.775 ************************************ 00:08:44.775 16:57:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:44.775 16:57:43 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:44.775 16:57:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:44.775 16:57:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:44.775 16:57:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:44.775 ************************************ 00:08:44.775 START TEST nvmf_connect_stress 00:08:44.775 ************************************ 00:08:44.775 16:57:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:44.775 * Looking for test storage... 
00:08:44.775 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:44.775 16:57:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:44.775 16:57:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:08:44.775 16:57:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:44.775 16:57:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:44.775 16:57:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:44.775 16:57:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:44.775 16:57:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:44.775 16:57:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:44.775 16:57:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:44.775 16:57:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:44.775 16:57:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:44.775 16:57:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:44.775 16:57:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:44.775 16:57:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:44.775 16:57:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:44.775 16:57:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:44.775 16:57:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:44.775 16:57:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:44.776 16:57:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:44.776 16:57:44 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:44.776 16:57:44 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:44.776 16:57:44 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:44.776 16:57:44 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.776 16:57:44 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.776 16:57:44 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.776 16:57:44 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:08:44.776 16:57:44 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.776 16:57:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:08:44.776 16:57:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:44.776 16:57:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:44.776 16:57:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:44.776 16:57:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:44.776 16:57:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:44.776 16:57:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:44.776 16:57:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:44.776 16:57:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:44.776 16:57:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:08:44.776 16:57:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:44.776 16:57:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:44.776 16:57:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:44.776 16:57:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:44.776 16:57:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:44.776 16:57:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.776 16:57:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:08:44.776 16:57:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.776 16:57:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:44.776 16:57:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:44.776 16:57:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:08:44.776 16:57:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:46.669 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:46.669 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:08:46.669 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:46.669 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:46.669 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:46.669 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:46.669 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:46.669 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:08:46.669 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:46.669 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:08:46.669 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:08:46.669 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:08:46.669 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:08:46.669 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:08:46.669 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:08:46.669 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:46.669 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:46.669 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:46.669 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:46.669 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:46.669 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:46.669 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:08:46.670 Found 0000:84:00.0 (0x8086 - 0x159b) 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:08:46.670 Found 0000:84:00.1 (0x8086 - 0x159b) 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:08:46.670 Found net devices under 0000:84:00.0: cvl_0_0 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:46.670 16:57:46 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:08:46.670 Found net devices under 0000:84:00.1: cvl_0_1 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:46.670 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:46.670 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:08:46.670 00:08:46.670 --- 10.0.0.2 ping statistics --- 00:08:46.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.670 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:46.670 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:46.670 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:08:46.670 00:08:46.670 --- 10.0.0.1 ping statistics --- 00:08:46.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.670 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1052679 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1052679 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 1052679 ']' 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:46.670 16:57:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:46.928 [2024-07-12 16:57:46.376143] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
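The nvmf_tcp_init sequence traced above splits the two E810 ports between a target network namespace and the host: cvl_0_0 moves into cvl_0_0_ns_spdk with 10.0.0.2/24, cvl_0_1 stays on the host as the initiator side with 10.0.0.1/24, and a single ping in each direction verifies connectivity before the target application starts. Condensed from the commands in the trace:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays on the host
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # host -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> host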
00:08:46.928 [2024-07-12 16:57:46.376244] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:46.928 EAL: No free 2048 kB hugepages reported on node 1 00:08:46.928 [2024-07-12 16:57:46.440295] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:46.928 [2024-07-12 16:57:46.542827] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:46.928 [2024-07-12 16:57:46.542886] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:46.928 [2024-07-12 16:57:46.542909] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:46.928 [2024-07-12 16:57:46.542920] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:46.928 [2024-07-12 16:57:46.542931] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:46.928 [2024-07-12 16:57:46.543033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:46.928 [2024-07-12 16:57:46.543089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:46.928 [2024-07-12 16:57:46.543092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:47.186 16:57:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:47.186 16:57:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:08:47.186 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:47.186 16:57:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:47.186 16:57:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:47.186 16:57:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:47.186 16:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:47.186 16:57:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.186 16:57:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:47.186 [2024-07-12 16:57:46.697211] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:47.186 16:57:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.186 16:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:47.186 16:57:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.186 16:57:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:47.186 16:57:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.186 16:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:47.186 16:57:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.186 16:57:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:47.186 [2024-07-12 16:57:46.723917] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:47.186 16:57:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.186 16:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:47.186 16:57:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.186 16:57:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:47.186 NULL1 00:08:47.186 16:57:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.186 16:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1052708 00:08:47.186 16:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:08:47.186 16:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:47.186 16:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:47.186 16:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:08:47.186 16:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:47.186 16:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:47.186 16:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:47.186 16:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:47.187 16:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:47.187 16:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:47.187 16:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:47.187 16:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:47.187 16:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:47.187 16:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:47.187 16:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:47.187 16:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:47.187 16:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:47.187 16:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:47.187 16:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:47.187 16:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:47.187 16:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:47.187 16:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:47.187 16:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:47.187 16:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:47.187 16:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
00:08:47.187 16:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:47.187 16:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:47.187 16:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:47.187 EAL: No free 2048 kB hugepages reported on node 1 00:08:47.187 16:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:47.187 16:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:47.187 16:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:47.187 16:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:47.187 16:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:47.187 16:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:47.187 16:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:47.187 16:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:47.187 16:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:47.187 16:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:47.187 16:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:47.187 16:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:47.187 16:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:47.187 16:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:47.187 16:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:47.187 16:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:47.187 16:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052708 00:08:47.187 16:57:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:47.187 16:57:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.187 16:57:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:47.444 16:57:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.444 16:57:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052708 00:08:47.444 16:57:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:47.444 16:57:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.444 16:57:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:48.008 16:57:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.008 16:57:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052708 00:08:48.008 16:57:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:48.008 16:57:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.008 16:57:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:48.265 16:57:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.265 16:57:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052708 
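The connect_stress setup traced above reduces to four RPCs against the freshly started target, followed by launching the connect_stress initiator against the new listener; the repeated kill -0 1052708 checks that follow simply poll that the stress process is still alive while the test keeps reconfiguring the target. A sketch of the equivalent commands with the arguments taken from the trace (calling rpc.py directly instead of the test framework's rpc_cmd wrapper is an assumption):

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    nqn=nqn.2016-06.io.spdk:cnode1

    # target side: TCP transport, subsystem capped at 10 namespaces,
    # listener on 10.0.0.2:4420, and a null bdev NULL1 (1000 MiB, 512-byte blocks)
    "$spdk/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
    "$spdk/scripts/rpc.py" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
    "$spdk/scripts/rpc.py" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    "$spdk/scripts/rpc.py" bdev_null_create NULL1 1000 512

    # initiator side: run the connect stress tool against the listener for 10 s
    "$spdk/test/nvme/connect_stress/connect_stress" -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
    PERF_PID=$!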
00:08:48.265 16:57:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:48.265 16:57:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.265 16:57:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:48.523 16:57:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.523 16:57:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052708 00:08:48.523 16:57:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:48.523 16:57:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.523 16:57:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:48.780 16:57:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.780 16:57:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052708 00:08:48.780 16:57:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:48.780 16:57:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.780 16:57:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:49.037 16:57:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.037 16:57:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052708 00:08:49.037 16:57:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:49.037 16:57:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.037 16:57:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:49.600 16:57:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.600 16:57:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052708 00:08:49.600 16:57:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:49.600 16:57:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.600 16:57:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:49.858 16:57:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.858 16:57:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052708 00:08:49.858 16:57:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:49.858 16:57:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.858 16:57:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:50.115 16:57:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.115 16:57:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052708 00:08:50.115 16:57:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:50.115 16:57:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.115 16:57:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:50.373 16:57:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.373 16:57:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052708 00:08:50.373 16:57:49 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:08:50.373 16:57:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.373 16:57:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:50.630 16:57:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.630 16:57:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052708 00:08:50.630 16:57:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:50.630 16:57:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.630 16:57:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:51.193 16:57:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.193 16:57:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052708 00:08:51.193 16:57:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:51.193 16:57:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.193 16:57:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:51.450 16:57:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.450 16:57:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052708 00:08:51.450 16:57:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:51.450 16:57:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.450 16:57:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:51.707 16:57:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.707 16:57:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052708 00:08:51.707 16:57:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:51.707 16:57:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.707 16:57:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:51.965 16:57:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.965 16:57:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052708 00:08:51.965 16:57:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:51.965 16:57:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.965 16:57:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:52.530 16:57:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.530 16:57:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052708 00:08:52.530 16:57:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:52.530 16:57:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.530 16:57:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:52.787 16:57:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.787 16:57:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052708 00:08:52.787 16:57:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:52.787 
16:57:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.787 16:57:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:53.046 16:57:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.046 16:57:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052708 00:08:53.046 16:57:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:53.046 16:57:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.046 16:57:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:53.303 16:57:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.303 16:57:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052708 00:08:53.303 16:57:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:53.303 16:57:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.303 16:57:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:53.560 16:57:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.560 16:57:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052708 00:08:53.560 16:57:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:53.560 16:57:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.560 16:57:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:54.124 16:57:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.124 16:57:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052708 00:08:54.124 16:57:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:54.124 16:57:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.124 16:57:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:54.382 16:57:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.382 16:57:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052708 00:08:54.382 16:57:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:54.382 16:57:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.382 16:57:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:54.640 16:57:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.640 16:57:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052708 00:08:54.640 16:57:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:54.640 16:57:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.640 16:57:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:54.897 16:57:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.897 16:57:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052708 00:08:54.897 16:57:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:54.897 16:57:54 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.897 16:57:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:55.154 16:57:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.154 16:57:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052708 00:08:55.154 16:57:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:55.154 16:57:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.155 16:57:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:55.719 16:57:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.719 16:57:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052708 00:08:55.719 16:57:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:55.719 16:57:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.719 16:57:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:55.977 16:57:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.977 16:57:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052708 00:08:55.977 16:57:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:55.977 16:57:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.977 16:57:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:56.235 16:57:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.235 16:57:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052708 00:08:56.235 16:57:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:56.235 16:57:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.235 16:57:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:56.493 16:57:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.493 16:57:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052708 00:08:56.493 16:57:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:56.493 16:57:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.493 16:57:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:56.750 16:57:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.750 16:57:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052708 00:08:56.750 16:57:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:56.750 16:57:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.750 16:57:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:57.316 16:57:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.316 16:57:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052708 00:08:57.316 16:57:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:57.316 16:57:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 
00:08:57.316 16:57:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:57.316 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:57.574 16:57:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.574 16:57:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052708 00:08:57.574 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1052708) - No such process 00:08:57.574 16:57:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1052708 00:08:57.574 16:57:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:57.574 16:57:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:57.574 16:57:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:08:57.574 16:57:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:57.574 16:57:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:08:57.574 16:57:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:57.574 16:57:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:08:57.574 16:57:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:57.574 16:57:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:57.574 rmmod nvme_tcp 00:08:57.574 rmmod nvme_fabrics 00:08:57.574 rmmod nvme_keyring 00:08:57.574 16:57:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:57.574 16:57:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:08:57.574 16:57:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:08:57.574 16:57:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1052679 ']' 00:08:57.574 16:57:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1052679 00:08:57.574 16:57:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 1052679 ']' 00:08:57.574 16:57:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 1052679 00:08:57.574 16:57:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:08:57.574 16:57:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:57.574 16:57:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1052679 00:08:57.574 16:57:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:57.574 16:57:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:57.574 16:57:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1052679' 00:08:57.574 killing process with pid 1052679 00:08:57.574 16:57:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 1052679 00:08:57.574 16:57:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 1052679 00:08:57.832 16:57:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:57.832 16:57:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:57.832 16:57:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:08:57.832 16:57:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:57.832 16:57:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:57.832 16:57:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.832 16:57:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:57.832 16:57:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.370 16:57:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:00.370 00:09:00.370 real 0m15.472s 00:09:00.370 user 0m37.959s 00:09:00.370 sys 0m6.431s 00:09:00.370 16:57:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:00.370 16:57:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:00.370 ************************************ 00:09:00.370 END TEST nvmf_connect_stress 00:09:00.370 ************************************ 00:09:00.370 16:57:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:00.370 16:57:59 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:09:00.370 16:57:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:00.370 16:57:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:00.370 16:57:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:00.370 ************************************ 00:09:00.370 START TEST nvmf_fused_ordering 00:09:00.370 ************************************ 00:09:00.370 16:57:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:09:00.370 * Looking for test storage... 
00:09:00.370 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:00.370 16:57:59 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:00.370 16:57:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:09:00.370 16:57:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:00.370 16:57:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:00.370 16:57:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:00.370 16:57:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:00.370 16:57:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:00.370 16:57:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:00.370 16:57:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:00.370 16:57:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:00.371 16:57:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:00.371 16:57:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:00.371 16:57:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:00.371 16:57:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:00.371 16:57:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:00.371 16:57:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:00.371 16:57:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:00.371 16:57:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:00.371 16:57:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:00.371 16:57:59 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:00.371 16:57:59 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:00.371 16:57:59 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:00.371 16:57:59 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.371 16:57:59 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.371 16:57:59 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.371 16:57:59 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:09:00.371 16:57:59 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.371 16:57:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:09:00.371 16:57:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:00.371 16:57:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:00.371 16:57:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:00.371 16:57:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:00.371 16:57:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:00.371 16:57:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:00.371 16:57:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:00.371 16:57:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:00.371 16:57:59 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:09:00.371 16:57:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:00.371 16:57:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:00.371 16:57:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:00.371 16:57:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:00.371 16:57:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:00.371 16:57:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.371 16:57:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:09:00.371 16:57:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.371 16:57:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:00.371 16:57:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:00.371 16:57:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:09:00.371 16:57:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:02.270 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:02.270 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:09:02.270 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:02.270 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:02.270 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:02.270 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:02.270 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:02.270 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:09:02.270 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:02.270 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:09:02.270 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:09:02.270 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:09:02.270 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:09:02.270 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:09:02.270 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:09:02.270 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:02.270 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:09:02.271 Found 0000:84:00.0 (0x8086 - 0x159b) 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:09:02.271 Found 0000:84:00.1 (0x8086 - 0x159b) 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:09:02.271 Found net devices under 0000:84:00.0: cvl_0_0 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:02.271 16:58:01 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:09:02.271 Found net devices under 0000:84:00.1: cvl_0_1 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:02.271 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:02.271 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:09:02.271 00:09:02.271 --- 10.0.0.2 ping statistics --- 00:09:02.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.271 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:02.271 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:02.271 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:09:02.271 00:09:02.271 --- 10.0.0.1 ping statistics --- 00:09:02.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.271 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1055920 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1055920 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 1055920 ']' 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:02.271 16:58:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:02.271 [2024-07-12 16:58:01.853823] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
00:09:02.271 [2024-07-12 16:58:01.853911] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:02.271 EAL: No free 2048 kB hugepages reported on node 1 00:09:02.271 [2024-07-12 16:58:01.917118] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.529 [2024-07-12 16:58:02.023662] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:02.529 [2024-07-12 16:58:02.023712] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:02.529 [2024-07-12 16:58:02.023735] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:02.529 [2024-07-12 16:58:02.023770] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:02.529 [2024-07-12 16:58:02.023780] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:02.529 [2024-07-12 16:58:02.023816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.529 16:58:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:02.529 16:58:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:09:02.529 16:58:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:02.529 16:58:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:02.529 16:58:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:02.529 16:58:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:02.529 16:58:02 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:02.529 16:58:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.529 16:58:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:02.529 [2024-07-12 16:58:02.172895] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:02.529 16:58:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.529 16:58:02 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:02.530 16:58:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.530 16:58:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:02.530 16:58:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.530 16:58:02 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:02.530 16:58:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.530 16:58:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:02.530 [2024-07-12 16:58:02.189104] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:02.530 16:58:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.530 16:58:02 
nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:02.530 16:58:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.530 16:58:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:02.530 NULL1 00:09:02.530 16:58:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.530 16:58:02 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:09:02.530 16:58:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.530 16:58:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:02.530 16:58:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.530 16:58:02 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:02.530 16:58:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.530 16:58:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:02.530 16:58:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.530 16:58:02 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:02.787 [2024-07-12 16:58:02.237113] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:09:02.787 [2024-07-12 16:58:02.237155] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1056012 ] 00:09:02.787 EAL: No free 2048 kB hugepages reported on node 1 00:09:03.044 Attached to nqn.2016-06.io.spdk:cnode1 00:09:03.044 Namespace ID: 1 size: 1GB 00:09:03.044 fused_ordering(0) 00:09:03.044 fused_ordering(1) 00:09:03.044 fused_ordering(2) 00:09:03.044 fused_ordering(3) 00:09:03.044 fused_ordering(4) 00:09:03.044 fused_ordering(5) 00:09:03.044 fused_ordering(6) 00:09:03.044 fused_ordering(7) 00:09:03.044 fused_ordering(8) 00:09:03.044 fused_ordering(9) 00:09:03.044 fused_ordering(10) 00:09:03.044 fused_ordering(11) 00:09:03.044 fused_ordering(12) 00:09:03.044 fused_ordering(13) 00:09:03.044 fused_ordering(14) 00:09:03.044 fused_ordering(15) 00:09:03.044 fused_ordering(16) 00:09:03.044 fused_ordering(17) 00:09:03.044 fused_ordering(18) 00:09:03.044 fused_ordering(19) 00:09:03.044 fused_ordering(20) 00:09:03.045 fused_ordering(21) 00:09:03.045 fused_ordering(22) 00:09:03.045 fused_ordering(23) 00:09:03.045 fused_ordering(24) 00:09:03.045 fused_ordering(25) 00:09:03.045 fused_ordering(26) 00:09:03.045 fused_ordering(27) 00:09:03.045 fused_ordering(28) 00:09:03.045 fused_ordering(29) 00:09:03.045 fused_ordering(30) 00:09:03.045 fused_ordering(31) 00:09:03.045 fused_ordering(32) 00:09:03.045 fused_ordering(33) 00:09:03.045 fused_ordering(34) 00:09:03.045 fused_ordering(35) 00:09:03.045 fused_ordering(36) 00:09:03.045 fused_ordering(37) 00:09:03.045 fused_ordering(38) 00:09:03.045 fused_ordering(39) 00:09:03.045 fused_ordering(40) 00:09:03.045 fused_ordering(41) 00:09:03.045 fused_ordering(42) 00:09:03.045 fused_ordering(43) 00:09:03.045 
fused_ordering(44) 00:09:03.045 fused_ordering(45) 00:09:03.045 fused_ordering(46) 00:09:03.045 fused_ordering(47) 00:09:03.045 fused_ordering(48) 00:09:03.045 fused_ordering(49) 00:09:03.045 fused_ordering(50) 00:09:03.045 fused_ordering(51) 00:09:03.045 fused_ordering(52) 00:09:03.045 fused_ordering(53) 00:09:03.045 fused_ordering(54) 00:09:03.045 fused_ordering(55) 00:09:03.045 fused_ordering(56) 00:09:03.045 fused_ordering(57) 00:09:03.045 fused_ordering(58) 00:09:03.045 fused_ordering(59) 00:09:03.045 fused_ordering(60) 00:09:03.045 fused_ordering(61) 00:09:03.045 fused_ordering(62) 00:09:03.045 fused_ordering(63) 00:09:03.045 fused_ordering(64) 00:09:03.045 fused_ordering(65) 00:09:03.045 fused_ordering(66) 00:09:03.045 fused_ordering(67) 00:09:03.045 fused_ordering(68) 00:09:03.045 fused_ordering(69) 00:09:03.045 fused_ordering(70) 00:09:03.045 fused_ordering(71) 00:09:03.045 fused_ordering(72) 00:09:03.045 fused_ordering(73) 00:09:03.045 fused_ordering(74) 00:09:03.045 fused_ordering(75) 00:09:03.045 fused_ordering(76) 00:09:03.045 fused_ordering(77) 00:09:03.045 fused_ordering(78) 00:09:03.045 fused_ordering(79) 00:09:03.045 fused_ordering(80) 00:09:03.045 fused_ordering(81) 00:09:03.045 fused_ordering(82) 00:09:03.045 fused_ordering(83) 00:09:03.045 fused_ordering(84) 00:09:03.045 fused_ordering(85) 00:09:03.045 fused_ordering(86) 00:09:03.045 fused_ordering(87) 00:09:03.045 fused_ordering(88) 00:09:03.045 fused_ordering(89) 00:09:03.045 fused_ordering(90) 00:09:03.045 fused_ordering(91) 00:09:03.045 fused_ordering(92) 00:09:03.045 fused_ordering(93) 00:09:03.045 fused_ordering(94) 00:09:03.045 fused_ordering(95) 00:09:03.045 fused_ordering(96) 00:09:03.045 fused_ordering(97) 00:09:03.045 fused_ordering(98) 00:09:03.045 fused_ordering(99) 00:09:03.045 fused_ordering(100) 00:09:03.045 fused_ordering(101) 00:09:03.045 fused_ordering(102) 00:09:03.045 fused_ordering(103) 00:09:03.045 fused_ordering(104) 00:09:03.045 fused_ordering(105) 00:09:03.045 fused_ordering(106) 00:09:03.045 fused_ordering(107) 00:09:03.045 fused_ordering(108) 00:09:03.045 fused_ordering(109) 00:09:03.045 fused_ordering(110) 00:09:03.045 fused_ordering(111) 00:09:03.045 fused_ordering(112) 00:09:03.045 fused_ordering(113) 00:09:03.045 fused_ordering(114) 00:09:03.045 fused_ordering(115) 00:09:03.045 fused_ordering(116) 00:09:03.045 fused_ordering(117) 00:09:03.045 fused_ordering(118) 00:09:03.045 fused_ordering(119) 00:09:03.045 fused_ordering(120) 00:09:03.045 fused_ordering(121) 00:09:03.045 fused_ordering(122) 00:09:03.045 fused_ordering(123) 00:09:03.045 fused_ordering(124) 00:09:03.045 fused_ordering(125) 00:09:03.045 fused_ordering(126) 00:09:03.045 fused_ordering(127) 00:09:03.045 fused_ordering(128) 00:09:03.045 fused_ordering(129) 00:09:03.045 fused_ordering(130) 00:09:03.045 fused_ordering(131) 00:09:03.045 fused_ordering(132) 00:09:03.045 fused_ordering(133) 00:09:03.045 fused_ordering(134) 00:09:03.045 fused_ordering(135) 00:09:03.045 fused_ordering(136) 00:09:03.045 fused_ordering(137) 00:09:03.045 fused_ordering(138) 00:09:03.045 fused_ordering(139) 00:09:03.045 fused_ordering(140) 00:09:03.045 fused_ordering(141) 00:09:03.045 fused_ordering(142) 00:09:03.045 fused_ordering(143) 00:09:03.045 fused_ordering(144) 00:09:03.045 fused_ordering(145) 00:09:03.045 fused_ordering(146) 00:09:03.045 fused_ordering(147) 00:09:03.045 fused_ordering(148) 00:09:03.045 fused_ordering(149) 00:09:03.045 fused_ordering(150) 00:09:03.045 fused_ordering(151) 00:09:03.045 fused_ordering(152) 00:09:03.045 
fused_ordering(153) 00:09:03.045 fused_ordering(154) 00:09:03.045 fused_ordering(155) 00:09:03.045 fused_ordering(156) 00:09:03.045 fused_ordering(157) 00:09:03.045 fused_ordering(158) 00:09:03.045 fused_ordering(159) 00:09:03.045 fused_ordering(160) 00:09:03.045 fused_ordering(161) 00:09:03.045 fused_ordering(162) 00:09:03.045 fused_ordering(163) 00:09:03.045 fused_ordering(164) 00:09:03.045 fused_ordering(165) 00:09:03.045 fused_ordering(166) 00:09:03.045 fused_ordering(167) 00:09:03.045 fused_ordering(168) 00:09:03.045 fused_ordering(169) 00:09:03.045 fused_ordering(170) 00:09:03.045 fused_ordering(171) 00:09:03.045 fused_ordering(172) 00:09:03.045 fused_ordering(173) 00:09:03.045 fused_ordering(174) 00:09:03.045 fused_ordering(175) 00:09:03.045 fused_ordering(176) 00:09:03.045 fused_ordering(177) 00:09:03.045 fused_ordering(178) 00:09:03.045 fused_ordering(179) 00:09:03.045 fused_ordering(180) 00:09:03.045 fused_ordering(181) 00:09:03.045 fused_ordering(182) 00:09:03.045 fused_ordering(183) 00:09:03.045 fused_ordering(184) 00:09:03.045 fused_ordering(185) 00:09:03.045 fused_ordering(186) 00:09:03.045 fused_ordering(187) 00:09:03.045 fused_ordering(188) 00:09:03.045 fused_ordering(189) 00:09:03.045 fused_ordering(190) 00:09:03.045 fused_ordering(191) 00:09:03.045 fused_ordering(192) 00:09:03.045 fused_ordering(193) 00:09:03.045 fused_ordering(194) 00:09:03.045 fused_ordering(195) 00:09:03.045 fused_ordering(196) 00:09:03.045 fused_ordering(197) 00:09:03.045 fused_ordering(198) 00:09:03.045 fused_ordering(199) 00:09:03.045 fused_ordering(200) 00:09:03.045 fused_ordering(201) 00:09:03.045 fused_ordering(202) 00:09:03.045 fused_ordering(203) 00:09:03.045 fused_ordering(204) 00:09:03.045 fused_ordering(205) 00:09:03.317 fused_ordering(206) 00:09:03.317 fused_ordering(207) 00:09:03.317 fused_ordering(208) 00:09:03.317 fused_ordering(209) 00:09:03.317 fused_ordering(210) 00:09:03.317 fused_ordering(211) 00:09:03.317 fused_ordering(212) 00:09:03.317 fused_ordering(213) 00:09:03.317 fused_ordering(214) 00:09:03.317 fused_ordering(215) 00:09:03.317 fused_ordering(216) 00:09:03.317 fused_ordering(217) 00:09:03.317 fused_ordering(218) 00:09:03.317 fused_ordering(219) 00:09:03.317 fused_ordering(220) 00:09:03.317 fused_ordering(221) 00:09:03.317 fused_ordering(222) 00:09:03.317 fused_ordering(223) 00:09:03.317 fused_ordering(224) 00:09:03.317 fused_ordering(225) 00:09:03.317 fused_ordering(226) 00:09:03.317 fused_ordering(227) 00:09:03.317 fused_ordering(228) 00:09:03.317 fused_ordering(229) 00:09:03.317 fused_ordering(230) 00:09:03.317 fused_ordering(231) 00:09:03.317 fused_ordering(232) 00:09:03.317 fused_ordering(233) 00:09:03.317 fused_ordering(234) 00:09:03.317 fused_ordering(235) 00:09:03.317 fused_ordering(236) 00:09:03.317 fused_ordering(237) 00:09:03.317 fused_ordering(238) 00:09:03.317 fused_ordering(239) 00:09:03.317 fused_ordering(240) 00:09:03.317 fused_ordering(241) 00:09:03.317 fused_ordering(242) 00:09:03.317 fused_ordering(243) 00:09:03.317 fused_ordering(244) 00:09:03.317 fused_ordering(245) 00:09:03.317 fused_ordering(246) 00:09:03.317 fused_ordering(247) 00:09:03.317 fused_ordering(248) 00:09:03.317 fused_ordering(249) 00:09:03.317 fused_ordering(250) 00:09:03.317 fused_ordering(251) 00:09:03.317 fused_ordering(252) 00:09:03.317 fused_ordering(253) 00:09:03.317 fused_ordering(254) 00:09:03.317 fused_ordering(255) 00:09:03.317 fused_ordering(256) 00:09:03.317 fused_ordering(257) 00:09:03.317 fused_ordering(258) 00:09:03.317 fused_ordering(259) 00:09:03.317 fused_ordering(260) 
00:09:03.317 fused_ordering(261) ... 00:09:05.016 fused_ordering(1012) [about 750 consecutive fused_ordering markers, numbered 261 through 1012 and timestamped from 00:09:03.317 to 00:09:05.016, are condensed here; the test emitted one marker per iteration with nothing else interleaved]
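The marker stream finishes just below (1013 through 1023) and the harness then tears the fused_ordering test down through nvmftestfini: the nvme-tcp/nvme-fabrics initiator modules are unloaded and the nvmf_tgt process (pid 1055920 in this run) is killed. A condensed sketch of that teardown, paraphrased from the visible trace rather than quoted from nvmf/common.sh (the retry bound and the sleep back-off are assumptions):

    trap - SIGINT SIGTERM EXIT
    sync
    for i in {1..20}; do                    # retry until the initiator-side modules unload cleanly
        modprobe -v -r nvme-tcp && break || sleep 1
    done
    modprobe -v -r nvme-fabrics
    kill 1055920 && wait 1055920 || true    # 1055920 is this run's nvmf_tgt pid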
00:09:05.016 fused_ordering(1013) 00:09:05.016 fused_ordering(1014) 00:09:05.016 fused_ordering(1015) 00:09:05.016 fused_ordering(1016) 00:09:05.016 fused_ordering(1017) 00:09:05.016 fused_ordering(1018) 00:09:05.016 fused_ordering(1019) 00:09:05.016 fused_ordering(1020) 00:09:05.016 fused_ordering(1021) 00:09:05.016 fused_ordering(1022) 00:09:05.016 fused_ordering(1023) 00:09:05.016 16:58:04 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:09:05.016 16:58:04 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:09:05.016 16:58:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:05.016 16:58:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:09:05.017 16:58:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:05.017 16:58:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:09:05.017 16:58:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:05.017 16:58:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:05.017 rmmod nvme_tcp 00:09:05.017 rmmod nvme_fabrics 00:09:05.017 rmmod nvme_keyring 00:09:05.017 16:58:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:05.017 16:58:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:09:05.017 16:58:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:09:05.017 16:58:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1055920 ']' 00:09:05.017 16:58:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1055920 00:09:05.017 16:58:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 1055920 ']' 00:09:05.017 16:58:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 1055920 00:09:05.017 16:58:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:09:05.017 16:58:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:05.017 16:58:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1055920 00:09:05.017 16:58:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:05.017 16:58:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:05.017 16:58:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1055920' 00:09:05.017 killing process with pid 1055920 00:09:05.017 16:58:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 1055920 00:09:05.017 16:58:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 1055920 00:09:05.276 16:58:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:05.276 16:58:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:05.276 16:58:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:05.276 16:58:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:05.276 16:58:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:05.276 16:58:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:05.276 16:58:04 nvmf_tcp.nvmf_fused_ordering -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:05.276 16:58:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:07.837 16:58:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:07.837 00:09:07.837 real 0m7.454s 00:09:07.837 user 0m4.652s 00:09:07.837 sys 0m3.478s 00:09:07.837 16:58:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:07.837 16:58:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:07.837 ************************************ 00:09:07.837 END TEST nvmf_fused_ordering 00:09:07.837 ************************************ 00:09:07.837 16:58:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:07.837 16:58:06 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:07.837 16:58:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:07.837 16:58:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:07.837 16:58:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:07.837 ************************************ 00:09:07.837 START TEST nvmf_delete_subsystem 00:09:07.837 ************************************ 00:09:07.837 16:58:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:07.837 * Looking for test storage... 00:09:07.837 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:07.837 16:58:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:07.837 16:58:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:09:07.837 16:58:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:07.837 16:58:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:07.837 16:58:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:07.837 16:58:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:07.837 16:58:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:07.837 16:58:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:07.837 16:58:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:07.837 16:58:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:07.837 16:58:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:07.837 16:58:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:07.837 16:58:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:07.837 16:58:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:07.837 16:58:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:07.837 16:58:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:07.837 16:58:07 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:07.837 16:58:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:07.837 16:58:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:07.837 16:58:07 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:07.837 16:58:07 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:07.837 16:58:07 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:07.837 16:58:07 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.837 16:58:07 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.837 16:58:07 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.837 16:58:07 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:09:07.837 16:58:07 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.837 16:58:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:09:07.837 16:58:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:07.837 16:58:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:07.837 16:58:07 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:07.837 16:58:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:07.837 16:58:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:07.837 16:58:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:07.837 16:58:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:07.837 16:58:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:07.837 16:58:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:09:07.838 16:58:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:07.838 16:58:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:07.838 16:58:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:07.838 16:58:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:07.838 16:58:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:07.838 16:58:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:07.838 16:58:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:07.838 16:58:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:07.838 16:58:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:07.838 16:58:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:07.838 16:58:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:09:07.838 16:58:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:09.738 16:58:09 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:09:09.738 Found 0000:84:00.0 (0x8086 - 0x159b) 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:09:09.738 Found 0000:84:00.1 (0x8086 - 0x159b) 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:09.738 16:58:09 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:09:09.738 Found net devices under 0000:84:00.0: cvl_0_0 00:09:09.738 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:09:09.739 Found net devices under 0000:84:00.1: cvl_0_1 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:09.739 16:58:09 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:09.739 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:09.739 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:09:09.739 00:09:09.739 --- 10.0.0.2 ping statistics --- 00:09:09.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.739 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:09.739 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:09.739 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:09:09.739 00:09:09.739 --- 10.0.0.1 ping statistics --- 00:09:09.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.739 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1058249 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1058249 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 1058249 ']' 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:09.739 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:09.739 [2024-07-12 16:58:09.370186] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:09:09.739 [2024-07-12 16:58:09.370258] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:09.739 EAL: No free 2048 kB hugepages reported on node 1 00:09:09.996 [2024-07-12 16:58:09.432791] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:09.996 [2024-07-12 16:58:09.535006] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
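Before the nvmf_tgt launch traced above, nvmf_tcp_init wires the two E810 ports into a point-to-point test network: cvl_0_0 is moved into a fresh network namespace and given 10.0.0.2, cvl_0_1 stays in the root namespace with 10.0.0.1, and both directions are ping-verified before the target is started in that namespace (with waitforlisten then polling its RPC socket). Summarized as a shell sequence, with interface names, addresses, and the nvmf_tgt arguments copied from the trace (run as root; error handling omitted):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &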
00:09:09.996 [2024-07-12 16:58:09.535061] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:09.996 [2024-07-12 16:58:09.535074] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:09.996 [2024-07-12 16:58:09.535084] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:09.996 [2024-07-12 16:58:09.535093] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:09.996 [2024-07-12 16:58:09.535182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.996 [2024-07-12 16:58:09.535188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.996 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:09.996 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:09:09.996 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:09.996 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:09.996 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:09.996 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:09.996 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:09.996 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.996 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:09.996 [2024-07-12 16:58:09.681571] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:09.996 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.996 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:09.996 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.996 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:10.253 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.254 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:10.254 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.254 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:10.254 [2024-07-12 16:58:09.697786] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:10.254 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.254 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:10.254 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.254 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:10.254 NULL1 00:09:10.254 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:09:10.254 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:10.254 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.254 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:10.254 Delay0 00:09:10.254 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.254 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:10.254 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.254 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:10.254 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.254 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1058288 00:09:10.254 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:10.254 16:58:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:09:10.254 EAL: No free 2048 kB hugepages reported on node 1 00:09:10.254 [2024-07-12 16:58:09.772424] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
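At this point the delete_subsystem target is fully assembled and a 5-second perf job (pid 1058288) has been started against it; two seconds in, the script deletes the subsystem underneath an outstanding queue depth of 128, which is what produces the abort storm below. The Delay0 bdev layered over NULL1 holds every I/O long enough that the deletion always races in-flight commands. The sequence, condensed from the rpc_cmd traces above and below (rpc_cmd is the harness's thin wrapper around scripts/rpc.py; option values are copied verbatim from the trace):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512
    rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    sleep 2
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1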
00:09:12.149 16:58:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:12.149 16:58:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.149 16:58:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Write completed with error (sct=0, sc=8) 00:09:12.149 starting I/O failed: -6 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 starting I/O failed: -6 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Write completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 starting I/O failed: -6 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Write completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 starting I/O failed: -6 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Write completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 starting I/O failed: -6 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Write completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 starting I/O failed: -6 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 starting I/O failed: -6 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Write completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 starting I/O failed: -6 00:09:12.149 Write completed with error (sct=0, sc=8) 00:09:12.149 Write completed with error (sct=0, sc=8) 00:09:12.149 Write completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 starting I/O failed: -6 00:09:12.149 Write completed with error (sct=0, sc=8) 00:09:12.149 Write completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 starting I/O failed: -6 00:09:12.149 [2024-07-12 16:58:11.822114] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc25c0 is same with the state(5) to be set 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Write completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 
00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Write completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Write completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Write completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Write completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Write completed with error (sct=0, sc=8) 00:09:12.149 Write completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Write completed with error (sct=0, sc=8) 00:09:12.149 Write completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 starting I/O failed: -6 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Write completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 starting I/O failed: -6 00:09:12.149 Write completed with error (sct=0, sc=8) 00:09:12.149 Write completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 starting I/O failed: -6 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Write completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 Read completed with error (sct=0, sc=8) 00:09:12.149 starting I/O failed: -6 00:09:12.150 Write completed with error (sct=0, sc=8) 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 starting I/O failed: -6 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 Write completed with error (sct=0, sc=8) 00:09:12.150 Write completed with error (sct=0, sc=8) 00:09:12.150 Write completed with error (sct=0, sc=8) 00:09:12.150 starting I/O failed: -6 00:09:12.150 Read completed 
with error (sct=0, sc=8) 00:09:12.150 Write completed with error (sct=0, sc=8) 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 starting I/O failed: -6 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 Write completed with error (sct=0, sc=8) 00:09:12.150 starting I/O failed: -6 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 Write completed with error (sct=0, sc=8) 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 starting I/O failed: -6 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 Write completed with error (sct=0, sc=8) 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 starting I/O failed: -6 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 [2024-07-12 16:58:11.823733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff19000d450 is same with the state(5) to be set 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 Write completed with error (sct=0, sc=8) 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 Write completed with error (sct=0, sc=8) 00:09:12.150 Write completed with error (sct=0, sc=8) 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 Write completed with error (sct=0, sc=8) 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 Write completed with error (sct=0, sc=8) 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 Write completed with error (sct=0, sc=8) 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 Write completed with error (sct=0, sc=8) 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 Write completed with error (sct=0, sc=8) 00:09:12.150 Write completed with error (sct=0, sc=8) 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 Write completed with error (sct=0, sc=8) 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 
Read completed with error (sct=0, sc=8) 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:12.150 Read completed with error (sct=0, sc=8) 00:09:13.106 [2024-07-12 16:58:12.788653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc3ac0 is same with the state(5) to be set 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Write completed with error (sct=0, sc=8) 00:09:13.363 Write completed with error (sct=0, sc=8) 00:09:13.363 Write completed with error (sct=0, sc=8) 00:09:13.363 Write completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Write completed with error (sct=0, sc=8) 00:09:13.363 Write completed with error (sct=0, sc=8) 00:09:13.363 Write completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Write completed with error (sct=0, sc=8) 00:09:13.363 Write completed with error (sct=0, sc=8) 00:09:13.363 [2024-07-12 16:58:12.823451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff19000d760 is same with the state(5) to be set 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Write completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Write completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Write completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Write completed with error (sct=0, sc=8) 00:09:13.363 Write completed with error (sct=0, sc=8) 00:09:13.363 Write completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Write completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Write completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 [2024-07-12 16:58:12.823635] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff19000cfe0 is same with the state(5) to be set 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 
Write completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Write completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 [2024-07-12 16:58:12.826025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc27a0 is same with the state(5) to be set 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Write completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Write completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Write completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 Read completed with error (sct=0, sc=8) 00:09:13.363 [2024-07-12 16:58:12.826591] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc23e0 is same with the state(5) to be set 00:09:13.363 Initializing NVMe Controllers 00:09:13.363 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:13.363 Controller IO queue size 128, less than required. 00:09:13.363 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:13.364 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:13.364 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:13.364 Initialization complete. Launching workers. 
00:09:13.364 ======================================================== 00:09:13.364 Latency(us) 00:09:13.364 Device Information : IOPS MiB/s Average min max 00:09:13.364 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 156.90 0.08 926773.82 462.90 1011978.91 00:09:13.364 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 157.89 0.08 952470.37 356.62 2000843.71 00:09:13.364 ======================================================== 00:09:13.364 Total : 314.78 0.15 939662.63 356.62 2000843.71 00:09:13.364 00:09:13.364 [2024-07-12 16:58:12.827092] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc3ac0 (9): Bad file descriptor 00:09:13.364 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:09:13.364 16:58:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.364 16:58:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:09:13.364 16:58:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1058288 00:09:13.364 16:58:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:13.928 16:58:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:13.928 16:58:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1058288 00:09:13.928 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1058288) - No such process 00:09:13.928 16:58:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1058288 00:09:13.928 16:58:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:09:13.928 16:58:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 1058288 00:09:13.928 16:58:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:09:13.928 16:58:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:13.928 16:58:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:09:13.928 16:58:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:13.928 16:58:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 1058288 00:09:13.928 16:58:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:09:13.928 16:58:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:13.928 16:58:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:13.928 16:58:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:13.928 16:58:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:13.928 16:58:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.928 16:58:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:13.928 16:58:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.928 16:58:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
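The block above is the crux of the first delete_subsystem pass: spdk_nvme_perf is left running against nqn.2016-06.io.spdk:cnode1, the subsystem is deleted out from under it (hence the flood of "completed with error" lines and the "starting I/O failed: -6" messages), and the script then polls the perf process until it disappears. A minimal sketch of that polling step, assuming the perf PID captured earlier in this log (1058288) and the same 0.5 s interval; this is not the literal test/nvmf/target/delete_subsystem.sh:

  perf_pid=1058288                              # spdk_nvme_perf PID from the trace above
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do     # still running?
      (( delay++ > 30 )) && { echo "perf survived the subsystem delete"; exit 1; }
      sleep 0.5
  done
  # The trace then runs "NOT wait $perf_pid": reaping a process that no longer
  # exists is expected to fail, and that failure is what the test asserts.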
00:09:13.928 16:58:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.928 16:58:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:13.928 [2024-07-12 16:58:13.351102] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:13.928 16:58:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.928 16:58:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:13.928 16:58:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.928 16:58:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:13.928 16:58:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.928 16:58:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1058794 00:09:13.928 16:58:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:09:13.928 16:58:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1058794 00:09:13.928 16:58:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:13.928 16:58:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:13.928 EAL: No free 2048 kB hugepages reported on node 1 00:09:13.928 [2024-07-12 16:58:13.413583] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
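For reference, the second round of setup traced above condenses to the following commands (taken directly from the rpc_cmd and spdk_nvme_perf invocations in the log, with paths shortened; the flag annotations are my reading of the standard spdk_nvme_perf options, not part of the trace):

  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # -c 0xC: workers on cores 2 and 3; -t 3: run for 3 s; -q 128: queue depth 128;
  # -w randrw -M 70: 70/30 random read/write mix; -o 512: 512-byte I/Os
  spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!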
00:09:14.185 16:58:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:14.185 16:58:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1058794 00:09:14.185 16:58:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:14.747 16:58:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:14.747 16:58:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1058794 00:09:14.747 16:58:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:15.311 16:58:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:15.311 16:58:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1058794 00:09:15.311 16:58:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:15.875 16:58:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:15.875 16:58:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1058794 00:09:15.875 16:58:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:16.440 16:58:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:16.440 16:58:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1058794 00:09:16.440 16:58:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:16.697 16:58:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:16.697 16:58:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1058794 00:09:16.697 16:58:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:16.954 Initializing NVMe Controllers 00:09:16.954 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:16.954 Controller IO queue size 128, less than required. 00:09:16.954 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:16.954 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:16.954 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:16.954 Initialization complete. Launching workers. 
00:09:16.954 ======================================================== 00:09:16.954 Latency(us) 00:09:16.954 Device Information : IOPS MiB/s Average min max 00:09:16.954 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003257.81 1000161.26 1010874.39 00:09:16.954 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004748.06 1000182.24 1042208.86 00:09:16.954 ======================================================== 00:09:16.954 Total : 256.00 0.12 1004002.93 1000161.26 1042208.86 00:09:16.954 00:09:17.212 16:58:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:17.212 16:58:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1058794 00:09:17.212 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1058794) - No such process 00:09:17.212 16:58:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1058794 00:09:17.212 16:58:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:17.212 16:58:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:09:17.212 16:58:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:17.212 16:58:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:09:17.212 16:58:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:17.212 16:58:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:09:17.212 16:58:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:17.212 16:58:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:17.212 rmmod nvme_tcp 00:09:17.212 rmmod nvme_fabrics 00:09:17.470 rmmod nvme_keyring 00:09:17.470 16:58:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:17.470 16:58:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:09:17.470 16:58:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:09:17.470 16:58:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1058249 ']' 00:09:17.470 16:58:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1058249 00:09:17.470 16:58:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 1058249 ']' 00:09:17.470 16:58:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 1058249 00:09:17.470 16:58:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:09:17.470 16:58:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:17.470 16:58:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1058249 00:09:17.470 16:58:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:17.470 16:58:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:17.470 16:58:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1058249' 00:09:17.470 killing process with pid 1058249 00:09:17.470 16:58:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 1058249 00:09:17.470 16:58:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 
1058249 00:09:17.729 16:58:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:17.729 16:58:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:17.729 16:58:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:17.729 16:58:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:17.729 16:58:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:17.729 16:58:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:17.729 16:58:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:17.729 16:58:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.636 16:58:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:19.636 00:09:19.636 real 0m12.279s 00:09:19.636 user 0m27.441s 00:09:19.636 sys 0m3.051s 00:09:19.636 16:58:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:19.636 16:58:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:19.636 ************************************ 00:09:19.636 END TEST nvmf_delete_subsystem 00:09:19.636 ************************************ 00:09:19.636 16:58:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:19.636 16:58:19 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:09:19.636 16:58:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:19.636 16:58:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:19.636 16:58:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:19.636 ************************************ 00:09:19.636 START TEST nvmf_ns_masking 00:09:19.636 ************************************ 00:09:19.636 16:58:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:09:19.897 * Looking for test storage... 
00:09:19.897 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:19.897 16:58:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:19.897 16:58:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:09:19.897 16:58:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:19.897 16:58:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:19.897 16:58:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:19.897 16:58:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:19.897 16:58:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:19.897 16:58:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:19.897 16:58:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:19.897 16:58:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:19.897 16:58:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:19.897 16:58:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:19.897 16:58:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:19.897 16:58:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:19.897 16:58:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:19.897 16:58:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:19.897 16:58:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:19.897 16:58:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:19.897 16:58:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:19.897 16:58:19 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:19.897 16:58:19 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:19.897 16:58:19 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:19.897 16:58:19 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.897 16:58:19 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.897 16:58:19 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.897 16:58:19 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:09:19.897 16:58:19 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.897 16:58:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:09:19.897 16:58:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:19.897 16:58:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:19.897 16:58:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:19.897 16:58:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:19.897 16:58:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:19.897 16:58:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:19.897 16:58:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:19.897 16:58:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:19.897 16:58:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:19.897 16:58:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:09:19.897 16:58:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:09:19.897 16:58:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:09:19.897 16:58:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=f22d2422-81aa-4821-9d39-8f83b298f3c6 00:09:19.898 16:58:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:09:19.898 16:58:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=f4ccd095-5fc3-4648-8468-c7964930b68b 00:09:19.898 16:58:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:09:19.898 16:58:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:09:19.898 16:58:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:09:19.898 16:58:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:09:19.898 16:58:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=37d778fd-705e-4e5a-83bd-fb47612bc08e 00:09:19.898 16:58:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:09:19.898 16:58:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:19.898 16:58:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:19.898 16:58:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:19.898 16:58:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:19.898 16:58:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:19.898 16:58:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.898 16:58:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:19.898 16:58:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.898 16:58:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:19.898 16:58:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:19.898 16:58:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:09:19.898 16:58:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:09:22.431 Found 0000:84:00.0 (0x8086 - 0x159b) 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:09:22.431 Found 0000:84:00.1 (0x8086 - 0x159b) 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:22.431 
16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:09:22.431 Found net devices under 0000:84:00.0: cvl_0_0 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:09:22.431 Found net devices under 0000:84:00.1: cvl_0_1 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:22.431 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:22.432 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:22.432 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:09:22.432 00:09:22.432 --- 10.0.0.2 ping statistics --- 00:09:22.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.432 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:09:22.432 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:22.432 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:22.432 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:09:22.432 00:09:22.432 --- 10.0.0.1 ping statistics --- 00:09:22.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.432 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:09:22.432 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:22.432 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:09:22.432 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:22.432 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:22.432 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:22.432 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:22.432 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:22.432 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:22.432 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:22.432 16:58:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:09:22.432 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:22.432 16:58:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:22.432 16:58:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:22.432 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1061167 00:09:22.432 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:09:22.432 16:58:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1061167 00:09:22.432 16:58:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1061167 ']' 00:09:22.432 16:58:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.432 16:58:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:22.432 16:58:21 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.432 16:58:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:22.432 16:58:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:22.432 [2024-07-12 16:58:21.740066] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:09:22.432 [2024-07-12 16:58:21.740161] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:22.432 EAL: No free 2048 kB hugepages reported on node 1 00:09:22.432 [2024-07-12 16:58:21.823316] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.432 [2024-07-12 16:58:21.959507] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:22.432 [2024-07-12 16:58:21.959570] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:22.432 [2024-07-12 16:58:21.959596] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:22.432 [2024-07-12 16:58:21.959618] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:22.432 [2024-07-12 16:58:21.959637] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:22.432 [2024-07-12 16:58:21.959681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.432 16:58:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:22.432 16:58:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:09:22.432 16:58:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:22.432 16:58:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:22.432 16:58:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:22.432 16:58:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:22.432 16:58:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:22.690 [2024-07-12 16:58:22.378165] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:22.947 16:58:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:09:22.947 16:58:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:09:22.947 16:58:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:23.207 Malloc1 00:09:23.207 16:58:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:23.464 Malloc2 00:09:23.464 16:58:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
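The ns_masking run starting here uses the same bring-up pattern; condensed from the traced commands (paths shortened, comments are my gloss where the short options are not spelled out in the log):

  # Target runs inside the test network namespace created above; -e 0xFFFF is the
  # tracepoint group mask reported in the startup notice.
  ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF &
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc1      # 64 MiB RAM disk, 512-byte blocks
  rpc.py bdev_malloc_create 64 512 -b Malloc2
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME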
00:09:23.721 16:58:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:09:23.979 16:58:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:24.236 [2024-07-12 16:58:23.753044] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:24.236 16:58:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:09:24.237 16:58:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 37d778fd-705e-4e5a-83bd-fb47612bc08e -a 10.0.0.2 -s 4420 -i 4 00:09:24.237 16:58:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:09:24.237 16:58:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:24.237 16:58:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:24.237 16:58:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:24.237 16:58:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:26.766 16:58:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:26.766 16:58:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:26.766 16:58:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:26.766 16:58:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:26.766 16:58:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:26.766 16:58:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:26.766 16:58:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:26.766 16:58:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:26.766 16:58:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:26.766 16:58:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:26.766 16:58:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:09:26.766 16:58:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:26.766 16:58:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:26.766 [ 0]:0x1 00:09:26.766 16:58:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:26.766 16:58:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:26.766 16:58:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=49f84268986548ecbf55f996a2f208b3 00:09:26.766 16:58:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 49f84268986548ecbf55f996a2f208b3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:26.766 16:58:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
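The nvme list-ns / nvme id-ns / jq fragments repeated throughout this part of the trace are all one helper, ns_is_visible, from test/nvmf/target/ns_masking.sh. A rough reconstruction based only on the commands visible here (the real helper may differ in detail): a namespace counts as visible when it shows up in list-ns and its NGUID is not all zeroes.

  ns_is_visible() {
      local nsid=$1                                 # e.g. 0x1 or 0x2
      # nvme0 is the controller name resolved from 'nvme list-subsys' earlier in the trace.
      nvme list-ns /dev/nvme0 | grep "$nsid"        # prints "[ n]:<nsid>" when the NSID is exposed
      local nguid
      nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
      # A masked namespace reports an all-zero NGUID, so this comparison is the actual verdict.
      [[ $nguid != "00000000000000000000000000000000" ]]
  }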
00:09:26.766 16:58:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:09:26.766 16:58:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:26.766 16:58:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:26.766 [ 0]:0x1 00:09:26.766 16:58:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:26.766 16:58:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:26.766 16:58:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=49f84268986548ecbf55f996a2f208b3 00:09:26.766 16:58:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 49f84268986548ecbf55f996a2f208b3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:26.766 16:58:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:09:26.766 16:58:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:26.766 16:58:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:26.766 [ 1]:0x2 00:09:26.766 16:58:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:26.766 16:58:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:26.766 16:58:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=99d82966f80244958222615b168ce23b 00:09:26.766 16:58:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 99d82966f80244958222615b168ce23b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:26.766 16:58:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:09:26.766 16:58:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:27.024 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.024 16:58:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:27.281 16:58:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:09:27.539 16:58:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:09:27.539 16:58:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 37d778fd-705e-4e5a-83bd-fb47612bc08e -a 10.0.0.2 -s 4420 -i 4 00:09:27.539 16:58:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:09:27.539 16:58:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:27.539 16:58:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:27.539 16:58:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:09:27.539 16:58:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:09:27.539 16:58:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:30.068 16:58:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:30.068 16:58:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:30.068 16:58:29 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:30.068 16:58:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:30.068 16:58:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:30.068 16:58:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:30.068 16:58:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:30.068 16:58:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:30.068 16:58:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:30.068 16:58:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:30.068 16:58:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:09:30.068 16:58:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:30.068 16:58:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:30.068 16:58:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:30.068 16:58:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:30.068 16:58:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:30.068 16:58:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:30.068 16:58:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:30.068 16:58:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:30.068 16:58:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:30.068 16:58:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:30.068 16:58:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:30.068 16:58:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:30.068 16:58:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:30.068 16:58:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:30.068 16:58:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:30.068 16:58:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:30.068 16:58:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:30.068 16:58:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:09:30.068 16:58:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:30.068 16:58:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:30.068 [ 0]:0x2 00:09:30.068 16:58:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:30.068 16:58:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:30.068 16:58:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=99d82966f80244958222615b168ce23b 00:09:30.068 16:58:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
99d82966f80244958222615b168ce23b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:30.068 16:58:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:30.068 16:58:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:09:30.068 16:58:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:30.068 16:58:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:30.068 [ 0]:0x1 00:09:30.069 16:58:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:30.069 16:58:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:30.069 16:58:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=49f84268986548ecbf55f996a2f208b3 00:09:30.069 16:58:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 49f84268986548ecbf55f996a2f208b3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:30.069 16:58:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:09:30.069 16:58:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:30.069 16:58:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:30.069 [ 1]:0x2 00:09:30.069 16:58:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:30.069 16:58:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:30.069 16:58:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=99d82966f80244958222615b168ce23b 00:09:30.069 16:58:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 99d82966f80244958222615b168ce23b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:30.069 16:58:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:30.327 16:58:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:09:30.327 16:58:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:30.327 16:58:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:30.327 16:58:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:30.327 16:58:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:30.327 16:58:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:30.327 16:58:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:30.327 16:58:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:30.327 16:58:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:30.327 16:58:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:30.327 16:58:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:30.327 16:58:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:30.327 16:58:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:09:30.327 16:58:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:30.327 16:58:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:30.327 16:58:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:30.327 16:58:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:30.327 16:58:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:30.327 16:58:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:09:30.327 16:58:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:30.327 16:58:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:30.327 [ 0]:0x2 00:09:30.327 16:58:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:30.327 16:58:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:30.585 16:58:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=99d82966f80244958222615b168ce23b 00:09:30.585 16:58:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 99d82966f80244958222615b168ce23b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:30.585 16:58:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:09:30.585 16:58:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:30.585 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.585 16:58:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:30.843 16:58:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:09:30.843 16:58:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 37d778fd-705e-4e5a-83bd-fb47612bc08e -a 10.0.0.2 -s 4420 -i 4 00:09:30.843 16:58:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:30.843 16:58:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:30.843 16:58:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:30.843 16:58:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:09:30.843 16:58:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:09:30.843 16:58:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:33.428 16:58:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:33.428 16:58:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:33.428 16:58:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:33.428 16:58:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:09:33.428 16:58:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:33.428 16:58:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
00:09:33.428 16:58:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:33.428 16:58:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:33.428 16:58:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:33.428 16:58:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:33.428 16:58:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:09:33.428 16:58:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:33.428 16:58:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:33.428 [ 0]:0x1 00:09:33.428 16:58:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:33.428 16:58:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:33.428 16:58:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=49f84268986548ecbf55f996a2f208b3 00:09:33.428 16:58:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 49f84268986548ecbf55f996a2f208b3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:33.428 16:58:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:09:33.428 16:58:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:33.428 16:58:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:33.428 [ 1]:0x2 00:09:33.428 16:58:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:33.428 16:58:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:33.428 16:58:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=99d82966f80244958222615b168ce23b 00:09:33.428 16:58:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 99d82966f80244958222615b168ce23b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:33.428 16:58:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:33.428 16:58:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:09:33.428 16:58:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:33.428 16:58:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:33.428 16:58:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:33.428 16:58:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:33.428 16:58:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:33.428 16:58:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:33.428 16:58:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:33.428 16:58:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:33.428 16:58:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:33.428 16:58:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:33.428 16:58:32 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:09:33.428 16:58:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:33.428 16:58:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:33.428 16:58:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:33.428 16:58:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:33.428 16:58:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:33.428 16:58:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:33.428 16:58:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:09:33.428 16:58:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:33.428 16:58:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:33.428 [ 0]:0x2 00:09:33.428 16:58:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:33.428 16:58:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:33.428 16:58:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=99d82966f80244958222615b168ce23b 00:09:33.428 16:58:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 99d82966f80244958222615b168ce23b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:33.428 16:58:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:33.428 16:58:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:33.428 16:58:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:33.428 16:58:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:33.428 16:58:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:33.428 16:58:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:33.428 16:58:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:33.428 16:58:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:33.428 16:58:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:33.428 16:58:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:33.428 16:58:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:33.428 16:58:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:33.685 [2024-07-12 16:58:33.310165] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:09:33.685 request: 00:09:33.685 { 00:09:33.685 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:33.685 "nsid": 2, 00:09:33.685 "host": "nqn.2016-06.io.spdk:host1", 00:09:33.685 "method": "nvmf_ns_remove_host", 00:09:33.685 "req_id": 1 00:09:33.685 } 00:09:33.685 Got JSON-RPC error response 00:09:33.685 response: 00:09:33.685 { 00:09:33.685 "code": -32602, 00:09:33.685 "message": "Invalid parameters" 00:09:33.685 } 00:09:33.685 16:58:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:33.685 16:58:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:33.685 16:58:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:33.685 16:58:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:33.685 16:58:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:09:33.685 16:58:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:33.685 16:58:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:33.685 16:58:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:33.685 16:58:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:33.685 16:58:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:33.685 16:58:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:33.685 16:58:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:33.685 16:58:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:33.685 16:58:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:33.685 16:58:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:33.685 16:58:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:33.943 16:58:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:33.943 16:58:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:33.943 16:58:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:33.943 16:58:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:33.943 16:58:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:33.943 16:58:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:33.943 16:58:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:09:33.943 16:58:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:33.943 16:58:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:33.944 [ 0]:0x2 00:09:33.944 16:58:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:33.944 16:58:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:33.944 16:58:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=99d82966f80244958222615b168ce23b 00:09:33.944 16:58:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
99d82966f80244958222615b168ce23b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:33.944 16:58:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:09:33.944 16:58:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:33.944 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.944 16:58:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1062665 00:09:33.944 16:58:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:09:33.944 16:58:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:09:33.944 16:58:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1062665 /var/tmp/host.sock 00:09:33.944 16:58:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1062665 ']' 00:09:33.944 16:58:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:09:33.944 16:58:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:33.944 16:58:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:09:33.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:09:33.944 16:58:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:33.944 16:58:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:33.944 [2024-07-12 16:58:33.514587] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
00:09:33.944 [2024-07-12 16:58:33.514684] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1062665 ] 00:09:33.944 EAL: No free 2048 kB hugepages reported on node 1 00:09:33.944 [2024-07-12 16:58:33.577880] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.202 [2024-07-12 16:58:33.691404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.459 16:58:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:34.459 16:58:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:09:34.459 16:58:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:34.716 16:58:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:34.973 16:58:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid f22d2422-81aa-4821-9d39-8f83b298f3c6 00:09:34.973 16:58:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:09:34.973 16:58:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g F22D242281AA48219D398F83B298F3C6 -i 00:09:35.231 16:58:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid f4ccd095-5fc3-4648-8468-c7964930b68b 00:09:35.231 16:58:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:09:35.231 16:58:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g F4CCD0955FC346488468C7964930B68B -i 00:09:35.489 16:58:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:35.747 16:58:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:09:36.004 16:58:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:09:36.004 16:58:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:09:36.262 nvme0n1 00:09:36.262 16:58:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:09:36.262 16:58:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:09:36.827 nvme1n2 00:09:36.827 16:58:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:09:36.827 16:58:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:09:36.827 16:58:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:09:36.827 16:58:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:09:36.827 16:58:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:09:36.827 16:58:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:09:36.827 16:58:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:09:36.827 16:58:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:09:36.827 16:58:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:09:37.084 16:58:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ f22d2422-81aa-4821-9d39-8f83b298f3c6 == \f\2\2\d\2\4\2\2\-\8\1\a\a\-\4\8\2\1\-\9\d\3\9\-\8\f\8\3\b\2\9\8\f\3\c\6 ]] 00:09:37.084 16:58:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:09:37.084 16:58:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:09:37.084 16:58:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:09:37.342 16:58:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ f4ccd095-5fc3-4648-8468-c7964930b68b == \f\4\c\c\d\0\9\5\-\5\f\c\3\-\4\6\4\8\-\8\4\6\8\-\c\7\9\6\4\9\3\0\b\6\8\b ]] 00:09:37.342 16:58:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1062665 00:09:37.342 16:58:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1062665 ']' 00:09:37.342 16:58:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1062665 00:09:37.342 16:58:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:09:37.342 16:58:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:37.342 16:58:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1062665 00:09:37.600 16:58:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:37.600 16:58:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:37.600 16:58:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1062665' 00:09:37.600 killing process with pid 1062665 00:09:37.600 16:58:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1062665 00:09:37.600 16:58:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1062665 00:09:37.858 16:58:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:38.116 16:58:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:09:38.116 16:58:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:09:38.116 16:58:37 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:38.116 16:58:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:09:38.116 16:58:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:38.116 16:58:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:09:38.116 16:58:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:38.116 16:58:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:38.116 rmmod nvme_tcp 00:09:38.375 rmmod nvme_fabrics 00:09:38.375 rmmod nvme_keyring 00:09:38.375 16:58:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:38.375 16:58:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:09:38.375 16:58:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:09:38.375 16:58:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1061167 ']' 00:09:38.375 16:58:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1061167 00:09:38.375 16:58:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1061167 ']' 00:09:38.375 16:58:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1061167 00:09:38.375 16:58:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:09:38.375 16:58:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:38.375 16:58:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1061167 00:09:38.375 16:58:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:38.375 16:58:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:38.375 16:58:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1061167' 00:09:38.375 killing process with pid 1061167 00:09:38.375 16:58:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1061167 00:09:38.375 16:58:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1061167 00:09:38.634 16:58:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:38.634 16:58:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:38.634 16:58:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:38.634 16:58:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:38.634 16:58:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:38.634 16:58:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.634 16:58:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:38.634 16:58:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:41.163 16:58:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:41.163 00:09:41.163 real 0m20.909s 00:09:41.163 user 0m27.230s 00:09:41.163 sys 0m4.183s 00:09:41.163 16:58:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:41.163 16:58:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:41.163 ************************************ 00:09:41.163 END TEST nvmf_ns_masking 00:09:41.163 ************************************ 00:09:41.163 16:58:40 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:09:41.163 16:58:40 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:09:41.163 16:58:40 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:41.163 16:58:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:41.163 16:58:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:41.163 16:58:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:41.163 ************************************ 00:09:41.163 START TEST nvmf_nvme_cli 00:09:41.163 ************************************ 00:09:41.163 16:58:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:41.163 * Looking for test storage... 00:09:41.163 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:41.163 16:58:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:41.163 16:58:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:09:41.163 16:58:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:41.163 16:58:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:41.163 16:58:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:41.163 16:58:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:41.163 16:58:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:41.163 16:58:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:41.163 16:58:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:41.163 16:58:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:41.163 16:58:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:41.163 16:58:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:41.163 16:58:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:41.163 16:58:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:41.163 16:58:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:41.163 16:58:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:41.163 16:58:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:41.163 16:58:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:41.163 16:58:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:41.163 16:58:40 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:41.163 16:58:40 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:41.163 16:58:40 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:41.163 16:58:40 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.163 16:58:40 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.163 16:58:40 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.163 16:58:40 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:09:41.163 16:58:40 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.163 16:58:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:09:41.163 16:58:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:41.163 16:58:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:41.163 16:58:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:41.163 16:58:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:41.163 16:58:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:41.163 16:58:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:41.163 16:58:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:41.163 16:58:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:41.163 16:58:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:41.163 16:58:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:41.163 16:58:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:09:41.163 16:58:40 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:09:41.163 16:58:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:41.163 16:58:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:41.163 16:58:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:41.163 16:58:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:41.163 16:58:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:41.163 16:58:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:41.163 16:58:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:41.163 16:58:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:41.163 16:58:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:41.163 16:58:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:41.163 16:58:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:09:41.163 16:58:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:09:43.065 Found 0000:84:00.0 (0x8086 - 0x159b) 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:09:43.065 Found 0000:84:00.1 (0x8086 - 0x159b) 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:09:43.065 Found net devices under 0000:84:00.0: cvl_0_0 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:09:43.065 Found net devices under 0000:84:00.1: cvl_0_1 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:43.065 16:58:42 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:43.065 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:43.065 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:09:43.065 00:09:43.065 --- 10.0.0.2 ping statistics --- 00:09:43.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.065 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:09:43.065 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:43.065 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:43.065 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:09:43.065 00:09:43.066 --- 10.0.0.1 ping statistics --- 00:09:43.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.066 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:09:43.066 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:43.066 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:09:43.066 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:43.066 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:43.066 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:43.066 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:43.066 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:43.066 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:43.066 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:43.066 16:58:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:09:43.066 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:43.066 16:58:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:43.066 16:58:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:43.066 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1065169 00:09:43.066 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1065169 00:09:43.066 16:58:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 1065169 ']' 00:09:43.066 16:58:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.066 16:58:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:43.066 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:43.066 16:58:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.066 16:58:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:43.066 16:58:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:43.066 [2024-07-12 16:58:42.566618] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
00:09:43.066 [2024-07-12 16:58:42.566709] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:43.066 EAL: No free 2048 kB hugepages reported on node 1 00:09:43.066 [2024-07-12 16:58:42.633777] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:43.066 [2024-07-12 16:58:42.750950] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:43.066 [2024-07-12 16:58:42.751002] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:43.066 [2024-07-12 16:58:42.751017] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:43.066 [2024-07-12 16:58:42.751042] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:43.066 [2024-07-12 16:58:42.751053] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:43.066 [2024-07-12 16:58:42.751160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:43.066 [2024-07-12 16:58:42.751219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:43.066 [2024-07-12 16:58:42.751288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:43.066 [2024-07-12 16:58:42.751290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.323 16:58:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:43.323 16:58:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:09:43.323 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:43.323 16:58:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:43.323 16:58:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:43.323 16:58:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:43.323 16:58:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:43.323 16:58:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.324 16:58:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:43.324 [2024-07-12 16:58:42.917735] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:43.324 16:58:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.324 16:58:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:43.324 16:58:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.324 16:58:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:43.324 Malloc0 00:09:43.324 16:58:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.324 16:58:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:43.324 16:58:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.324 16:58:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:43.324 Malloc1 00:09:43.324 16:58:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.324 16:58:42 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:09:43.324 16:58:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.324 16:58:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:43.324 16:58:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.324 16:58:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:43.324 16:58:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.324 16:58:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:43.324 16:58:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.324 16:58:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:43.324 16:58:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.324 16:58:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:43.324 16:58:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.324 16:58:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:43.324 16:58:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.324 16:58:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:43.324 [2024-07-12 16:58:43.003674] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:43.324 16:58:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.324 16:58:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:43.324 16:58:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.324 16:58:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:43.324 16:58:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.324 16:58:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 4420 00:09:43.580 00:09:43.580 Discovery Log Number of Records 2, Generation counter 2 00:09:43.580 =====Discovery Log Entry 0====== 00:09:43.580 trtype: tcp 00:09:43.580 adrfam: ipv4 00:09:43.580 subtype: current discovery subsystem 00:09:43.580 treq: not required 00:09:43.580 portid: 0 00:09:43.580 trsvcid: 4420 00:09:43.580 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:43.580 traddr: 10.0.0.2 00:09:43.580 eflags: explicit discovery connections, duplicate discovery information 00:09:43.580 sectype: none 00:09:43.580 =====Discovery Log Entry 1====== 00:09:43.580 trtype: tcp 00:09:43.580 adrfam: ipv4 00:09:43.580 subtype: nvme subsystem 00:09:43.580 treq: not required 00:09:43.580 portid: 0 00:09:43.580 trsvcid: 4420 00:09:43.580 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:43.580 traddr: 10.0.0.2 00:09:43.580 eflags: none 00:09:43.580 sectype: none 00:09:43.580 16:58:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:09:43.580 16:58:43 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:09:43.580 16:58:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:43.580 16:58:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:43.580 16:58:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:43.580 16:58:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:43.580 16:58:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:43.580 16:58:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:43.580 16:58:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:43.580 16:58:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:09:43.580 16:58:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:44.143 16:58:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:44.143 16:58:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:09:44.143 16:58:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:44.143 16:58:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:09:44.143 16:58:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:09:44.143 16:58:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:09:46.662 16:58:45 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:09:46.662 /dev/nvme0n1 ]] 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:46.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:46.662 rmmod nvme_tcp 00:09:46.662 rmmod nvme_fabrics 00:09:46.662 rmmod nvme_keyring 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:46.662 16:58:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:09:46.662 16:58:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:09:46.662 16:58:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1065169 ']' 00:09:46.662 16:58:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1065169 00:09:46.662 16:58:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 1065169 ']' 00:09:46.662 16:58:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 1065169 00:09:46.662 16:58:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:09:46.662 16:58:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:46.662 16:58:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1065169 00:09:46.662 16:58:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:46.662 16:58:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:46.662 16:58:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1065169' 00:09:46.662 killing process with pid 1065169 00:09:46.662 16:58:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 1065169 00:09:46.662 16:58:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 1065169 00:09:46.662 16:58:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:46.662 16:58:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:46.662 16:58:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:46.662 16:58:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:46.662 16:58:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:46.662 16:58:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.662 16:58:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:46.662 16:58:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.193 16:58:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:49.193 00:09:49.193 real 0m8.105s 00:09:49.193 user 0m14.666s 00:09:49.193 sys 0m2.211s 00:09:49.193 16:58:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:49.193 16:58:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:49.193 ************************************ 00:09:49.193 END TEST nvmf_nvme_cli 00:09:49.193 ************************************ 00:09:49.193 16:58:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:49.193 16:58:48 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:09:49.193 16:58:48 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:09:49.193 16:58:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:49.193 16:58:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:49.193 16:58:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:49.193 ************************************ 00:09:49.193 START TEST nvmf_vfio_user 00:09:49.193 ************************************ 00:09:49.193 16:58:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:09:49.193 * Looking for test storage... 00:09:49.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:49.193 16:58:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:49.193 16:58:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:09:49.193 16:58:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:49.193 16:58:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:49.193 16:58:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:49.193 16:58:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:49.193 16:58:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:49.193 16:58:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:49.193 16:58:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:49.193 16:58:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:49.193 16:58:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:49.193 16:58:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:49.193 16:58:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:49.193 16:58:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:49.193 16:58:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:49.193 16:58:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:49.193 16:58:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:49.193 16:58:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:49.193 16:58:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:49.193 16:58:48 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:49.193 16:58:48 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:49.193 16:58:48 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:49.194 16:58:48 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.194 16:58:48 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.194 16:58:48 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.194 16:58:48 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:09:49.194 16:58:48 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.194 16:58:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:09:49.194 16:58:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:49.194 16:58:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:49.194 16:58:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:49.194 16:58:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:49.194 16:58:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:49.194 16:58:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:49.194 16:58:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:49.194 16:58:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:49.194 16:58:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:09:49.194 16:58:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:49.194 16:58:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:09:49.194 
16:58:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:49.194 16:58:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:09:49.194 16:58:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:09:49.194 16:58:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:09:49.194 16:58:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:09:49.194 16:58:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:09:49.194 16:58:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:09:49.194 16:58:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1066093 00:09:49.194 16:58:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:09:49.194 16:58:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1066093' 00:09:49.194 Process pid: 1066093 00:09:49.194 16:58:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:09:49.194 16:58:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1066093 00:09:49.194 16:58:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1066093 ']' 00:09:49.194 16:58:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.194 16:58:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:49.194 16:58:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.194 16:58:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:49.194 16:58:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:09:49.194 [2024-07-12 16:58:48.565762] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:09:49.194 [2024-07-12 16:58:48.565864] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:49.194 EAL: No free 2048 kB hugepages reported on node 1 00:09:49.194 [2024-07-12 16:58:48.623458] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:49.194 [2024-07-12 16:58:48.729372] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:49.194 [2024-07-12 16:58:48.729427] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:49.194 [2024-07-12 16:58:48.729450] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:49.194 [2024-07-12 16:58:48.729461] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:49.194 [2024-07-12 16:58:48.729470] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
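Once the freshly started nvmf_tgt (pid 1066093) answers on /var/tmp/spdk.sock, the script registers the VFIOUSER transport and provisions one controller per device, as the rpc.py calls in the records that follow show. A minimal sketch of that per-device sequence, assembled from the commands logged in this run (the rpc.py path, malloc parameters and SPDK1/SPDK2 names are the ones used here; the loop itself is illustrative, not the script's literal control flow):

    RPC_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$RPC_PY" nvmf_create_transport -t VFIOUSER                  # register the vfio-user transport once
    for i in 1 2; do                                             # NUM_DEVICES=2 in this run
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i        # socket directory doubles as the traddr
        "$RPC_PY" bdev_malloc_create 64 512 -b Malloc$i          # 64 MB malloc bdev, 512-byte blocks
        "$RPC_PY" nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        "$RPC_PY" nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        "$RPC_PY" nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
            -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done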
00:09:49.194 [2024-07-12 16:58:48.729561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:49.194 [2024-07-12 16:58:48.729677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:49.194 [2024-07-12 16:58:48.729786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:49.194 [2024-07-12 16:58:48.729790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.194 16:58:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:49.194 16:58:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:09:49.194 16:58:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:09:50.566 16:58:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:09:50.566 16:58:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:09:50.566 16:58:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:09:50.566 16:58:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:50.566 16:58:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:09:50.566 16:58:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:50.824 Malloc1 00:09:50.824 16:58:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:09:51.081 16:58:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:09:51.339 16:58:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:09:51.597 16:58:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:51.597 16:58:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:09:51.597 16:58:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:51.855 Malloc2 00:09:51.855 16:58:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:09:52.419 16:58:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:09:52.419 16:58:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:09:52.677 16:58:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:09:52.677 16:58:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:09:52.677 16:58:52 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:52.677 16:58:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:09:52.677 16:58:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:09:52.677 16:58:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:09:52.677 [2024-07-12 16:58:52.320126] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:09:52.677 [2024-07-12 16:58:52.320170] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1066517 ] 00:09:52.677 EAL: No free 2048 kB hugepages reported on node 1 00:09:52.677 [2024-07-12 16:58:52.354180] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:09:52.677 [2024-07-12 16:58:52.362198] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:52.677 [2024-07-12 16:58:52.362226] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f70e55d3000 00:09:52.677 [2024-07-12 16:58:52.363194] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:52.677 [2024-07-12 16:58:52.364191] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:52.677 [2024-07-12 16:58:52.365196] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:52.677 [2024-07-12 16:58:52.366199] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:52.677 [2024-07-12 16:58:52.367210] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:52.677 [2024-07-12 16:58:52.368211] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:52.677 [2024-07-12 16:58:52.369234] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:52.677 [2024-07-12 16:58:52.370223] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:52.935 [2024-07-12 16:58:52.371229] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:52.935 [2024-07-12 16:58:52.371249] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f70e55c8000 00:09:52.935 [2024-07-12 16:58:52.372460] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:52.935 [2024-07-12 16:58:52.388441] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:09:52.935 [2024-07-12 16:58:52.388489] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:09:52.935 [2024-07-12 16:58:52.393366] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:09:52.935 [2024-07-12 16:58:52.393426] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:09:52.935 [2024-07-12 16:58:52.393525] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:09:52.935 [2024-07-12 16:58:52.393563] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:09:52.935 [2024-07-12 16:58:52.393574] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:09:52.935 [2024-07-12 16:58:52.394355] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:09:52.935 [2024-07-12 16:58:52.394377] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:09:52.935 [2024-07-12 16:58:52.394389] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:09:52.935 [2024-07-12 16:58:52.395360] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:09:52.935 [2024-07-12 16:58:52.395378] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:09:52.935 [2024-07-12 16:58:52.395391] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:09:52.935 [2024-07-12 16:58:52.396368] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:09:52.935 [2024-07-12 16:58:52.396386] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:09:52.935 [2024-07-12 16:58:52.397375] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:09:52.935 [2024-07-12 16:58:52.397395] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:09:52.935 [2024-07-12 16:58:52.397404] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:09:52.935 [2024-07-12 16:58:52.397415] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:09:52.935 [2024-07-12 16:58:52.397525] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:09:52.935 [2024-07-12 16:58:52.397532] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:09:52.935 [2024-07-12 16:58:52.397541] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:09:52.935 [2024-07-12 16:58:52.398379] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:09:52.935 [2024-07-12 16:58:52.399378] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:09:52.935 [2024-07-12 16:58:52.400386] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:09:52.935 [2024-07-12 16:58:52.401382] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:52.935 [2024-07-12 16:58:52.401488] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:09:52.935 [2024-07-12 16:58:52.402399] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:09:52.935 [2024-07-12 16:58:52.402418] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:09:52.935 [2024-07-12 16:58:52.402430] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:09:52.935 [2024-07-12 16:58:52.402455] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:09:52.935 [2024-07-12 16:58:52.402468] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:09:52.935 [2024-07-12 16:58:52.402497] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:52.935 [2024-07-12 16:58:52.402507] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:52.935 [2024-07-12 16:58:52.402530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:52.935 [2024-07-12 16:58:52.402597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:09:52.935 [2024-07-12 16:58:52.402616] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:09:52.935 [2024-07-12 16:58:52.402628] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:09:52.935 [2024-07-12 16:58:52.402636] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:09:52.935 [2024-07-12 16:58:52.402644] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:09:52.935 [2024-07-12 16:58:52.402652] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:09:52.935 [2024-07-12 16:58:52.402659] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:09:52.935 [2024-07-12 16:58:52.402667] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:09:52.935 [2024-07-12 16:58:52.402681] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:09:52.935 [2024-07-12 16:58:52.402696] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:09:52.935 [2024-07-12 16:58:52.402710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:09:52.935 [2024-07-12 16:58:52.402755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:09:52.935 [2024-07-12 16:58:52.402771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:09:52.935 [2024-07-12 16:58:52.402783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:09:52.935 [2024-07-12 16:58:52.402795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:09:52.935 [2024-07-12 16:58:52.402804] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:09:52.935 [2024-07-12 16:58:52.402820] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:09:52.935 [2024-07-12 16:58:52.402835] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:09:52.935 [2024-07-12 16:58:52.402847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:09:52.935 [2024-07-12 16:58:52.402859] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:09:52.935 [2024-07-12 16:58:52.402875] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:09:52.935 [2024-07-12 16:58:52.402887] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:09:52.935 [2024-07-12 16:58:52.402898] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:09:52.935 [2024-07-12 16:58:52.402912] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:52.935 [2024-07-12 16:58:52.402923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:09:52.935 [2024-07-12 16:58:52.402990] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:09:52.935 [2024-07-12 16:58:52.403006] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:09:52.935 [2024-07-12 16:58:52.403021] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:09:52.935 [2024-07-12 16:58:52.403029] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:09:52.935 [2024-07-12 16:58:52.403053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:09:52.935 [2024-07-12 16:58:52.403070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:09:52.935 [2024-07-12 16:58:52.403090] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:09:52.935 [2024-07-12 16:58:52.403125] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:09:52.935 [2024-07-12 16:58:52.403141] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:09:52.935 [2024-07-12 16:58:52.403153] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:52.935 [2024-07-12 16:58:52.403160] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:52.935 [2024-07-12 16:58:52.403170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:52.935 [2024-07-12 16:58:52.403196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:09:52.935 [2024-07-12 16:58:52.403219] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:09:52.935 [2024-07-12 16:58:52.403234] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:09:52.935 [2024-07-12 16:58:52.403246] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:52.935 [2024-07-12 16:58:52.403253] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:52.935 [2024-07-12 16:58:52.403262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:52.935 [2024-07-12 16:58:52.403276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:09:52.935 [2024-07-12 16:58:52.403290] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:09:52.935 [2024-07-12 16:58:52.403305] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
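The per-command DEBUG and NOTICE tracing above comes from the -L nvme -L nvme_vfio -L vfio_pci flags passed to spdk_nvme_identify; it records the controller bring-up over vfio-user (map the BARs, read VS/CAP, toggle CC.EN and wait for CSTS.RDY, then IDENTIFY, AER configuration, keep-alive and queue-count setup) before the human-readable controller report below is printed. A quieter re-run of the same report, as a sketch reusing the traddr and subnqn from this run, would keep the rest of the command line and drop the -L flags that produce most of this tracing:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -g \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'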
00:09:52.935 [2024-07-12 16:58:52.403319] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:09:52.935 [2024-07-12 16:58:52.403330] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:09:52.935 [2024-07-12 16:58:52.403338] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:09:52.935 [2024-07-12 16:58:52.403346] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:09:52.936 [2024-07-12 16:58:52.403354] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:09:52.936 [2024-07-12 16:58:52.403362] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:09:52.936 [2024-07-12 16:58:52.403370] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:09:52.936 [2024-07-12 16:58:52.403397] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:09:52.936 [2024-07-12 16:58:52.403415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:09:52.936 [2024-07-12 16:58:52.403433] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:09:52.936 [2024-07-12 16:58:52.403448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:09:52.936 [2024-07-12 16:58:52.403463] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:09:52.936 [2024-07-12 16:58:52.403475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:09:52.936 [2024-07-12 16:58:52.403490] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:52.936 [2024-07-12 16:58:52.403501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:09:52.936 [2024-07-12 16:58:52.403523] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:09:52.936 [2024-07-12 16:58:52.403532] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:09:52.936 [2024-07-12 16:58:52.403539] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:09:52.936 [2024-07-12 16:58:52.403544] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:09:52.936 [2024-07-12 16:58:52.403553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:09:52.936 [2024-07-12 16:58:52.403565] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:09:52.936 
[2024-07-12 16:58:52.403572] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:09:52.936 [2024-07-12 16:58:52.403581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:09:52.936 [2024-07-12 16:58:52.403591] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:09:52.936 [2024-07-12 16:58:52.403599] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:52.936 [2024-07-12 16:58:52.403607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:52.936 [2024-07-12 16:58:52.403622] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:09:52.936 [2024-07-12 16:58:52.403630] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:09:52.936 [2024-07-12 16:58:52.403639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:09:52.936 [2024-07-12 16:58:52.403650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:09:52.936 [2024-07-12 16:58:52.403669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:09:52.936 [2024-07-12 16:58:52.403687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:09:52.936 [2024-07-12 16:58:52.403699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:09:52.936 ===================================================== 00:09:52.936 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:52.936 ===================================================== 00:09:52.936 Controller Capabilities/Features 00:09:52.936 ================================ 00:09:52.936 Vendor ID: 4e58 00:09:52.936 Subsystem Vendor ID: 4e58 00:09:52.936 Serial Number: SPDK1 00:09:52.936 Model Number: SPDK bdev Controller 00:09:52.936 Firmware Version: 24.09 00:09:52.936 Recommended Arb Burst: 6 00:09:52.936 IEEE OUI Identifier: 8d 6b 50 00:09:52.936 Multi-path I/O 00:09:52.936 May have multiple subsystem ports: Yes 00:09:52.936 May have multiple controllers: Yes 00:09:52.936 Associated with SR-IOV VF: No 00:09:52.936 Max Data Transfer Size: 131072 00:09:52.936 Max Number of Namespaces: 32 00:09:52.936 Max Number of I/O Queues: 127 00:09:52.936 NVMe Specification Version (VS): 1.3 00:09:52.936 NVMe Specification Version (Identify): 1.3 00:09:52.936 Maximum Queue Entries: 256 00:09:52.936 Contiguous Queues Required: Yes 00:09:52.936 Arbitration Mechanisms Supported 00:09:52.936 Weighted Round Robin: Not Supported 00:09:52.936 Vendor Specific: Not Supported 00:09:52.936 Reset Timeout: 15000 ms 00:09:52.936 Doorbell Stride: 4 bytes 00:09:52.936 NVM Subsystem Reset: Not Supported 00:09:52.936 Command Sets Supported 00:09:52.936 NVM Command Set: Supported 00:09:52.936 Boot Partition: Not Supported 00:09:52.936 Memory Page Size Minimum: 4096 bytes 00:09:52.936 Memory Page Size Maximum: 4096 bytes 00:09:52.936 Persistent Memory Region: Not Supported 
00:09:52.936 Optional Asynchronous Events Supported 00:09:52.936 Namespace Attribute Notices: Supported 00:09:52.936 Firmware Activation Notices: Not Supported 00:09:52.936 ANA Change Notices: Not Supported 00:09:52.936 PLE Aggregate Log Change Notices: Not Supported 00:09:52.936 LBA Status Info Alert Notices: Not Supported 00:09:52.936 EGE Aggregate Log Change Notices: Not Supported 00:09:52.936 Normal NVM Subsystem Shutdown event: Not Supported 00:09:52.936 Zone Descriptor Change Notices: Not Supported 00:09:52.936 Discovery Log Change Notices: Not Supported 00:09:52.936 Controller Attributes 00:09:52.936 128-bit Host Identifier: Supported 00:09:52.936 Non-Operational Permissive Mode: Not Supported 00:09:52.936 NVM Sets: Not Supported 00:09:52.936 Read Recovery Levels: Not Supported 00:09:52.936 Endurance Groups: Not Supported 00:09:52.936 Predictable Latency Mode: Not Supported 00:09:52.936 Traffic Based Keep ALive: Not Supported 00:09:52.936 Namespace Granularity: Not Supported 00:09:52.936 SQ Associations: Not Supported 00:09:52.936 UUID List: Not Supported 00:09:52.936 Multi-Domain Subsystem: Not Supported 00:09:52.936 Fixed Capacity Management: Not Supported 00:09:52.936 Variable Capacity Management: Not Supported 00:09:52.936 Delete Endurance Group: Not Supported 00:09:52.936 Delete NVM Set: Not Supported 00:09:52.936 Extended LBA Formats Supported: Not Supported 00:09:52.936 Flexible Data Placement Supported: Not Supported 00:09:52.936 00:09:52.936 Controller Memory Buffer Support 00:09:52.936 ================================ 00:09:52.936 Supported: No 00:09:52.936 00:09:52.936 Persistent Memory Region Support 00:09:52.936 ================================ 00:09:52.936 Supported: No 00:09:52.936 00:09:52.936 Admin Command Set Attributes 00:09:52.936 ============================ 00:09:52.936 Security Send/Receive: Not Supported 00:09:52.936 Format NVM: Not Supported 00:09:52.936 Firmware Activate/Download: Not Supported 00:09:52.936 Namespace Management: Not Supported 00:09:52.936 Device Self-Test: Not Supported 00:09:52.936 Directives: Not Supported 00:09:52.936 NVMe-MI: Not Supported 00:09:52.936 Virtualization Management: Not Supported 00:09:52.936 Doorbell Buffer Config: Not Supported 00:09:52.936 Get LBA Status Capability: Not Supported 00:09:52.936 Command & Feature Lockdown Capability: Not Supported 00:09:52.936 Abort Command Limit: 4 00:09:52.936 Async Event Request Limit: 4 00:09:52.936 Number of Firmware Slots: N/A 00:09:52.936 Firmware Slot 1 Read-Only: N/A 00:09:52.936 Firmware Activation Without Reset: N/A 00:09:52.936 Multiple Update Detection Support: N/A 00:09:52.936 Firmware Update Granularity: No Information Provided 00:09:52.936 Per-Namespace SMART Log: No 00:09:52.936 Asymmetric Namespace Access Log Page: Not Supported 00:09:52.936 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:09:52.936 Command Effects Log Page: Supported 00:09:52.936 Get Log Page Extended Data: Supported 00:09:52.936 Telemetry Log Pages: Not Supported 00:09:52.936 Persistent Event Log Pages: Not Supported 00:09:52.936 Supported Log Pages Log Page: May Support 00:09:52.936 Commands Supported & Effects Log Page: Not Supported 00:09:52.936 Feature Identifiers & Effects Log Page:May Support 00:09:52.936 NVMe-MI Commands & Effects Log Page: May Support 00:09:52.936 Data Area 4 for Telemetry Log: Not Supported 00:09:52.936 Error Log Page Entries Supported: 128 00:09:52.936 Keep Alive: Supported 00:09:52.936 Keep Alive Granularity: 10000 ms 00:09:52.936 00:09:52.936 NVM Command Set Attributes 
00:09:52.936 ========================== 00:09:52.936 Submission Queue Entry Size 00:09:52.936 Max: 64 00:09:52.936 Min: 64 00:09:52.936 Completion Queue Entry Size 00:09:52.936 Max: 16 00:09:52.936 Min: 16 00:09:52.936 Number of Namespaces: 32 00:09:52.936 Compare Command: Supported 00:09:52.936 Write Uncorrectable Command: Not Supported 00:09:52.936 Dataset Management Command: Supported 00:09:52.936 Write Zeroes Command: Supported 00:09:52.936 Set Features Save Field: Not Supported 00:09:52.936 Reservations: Not Supported 00:09:52.936 Timestamp: Not Supported 00:09:52.936 Copy: Supported 00:09:52.936 Volatile Write Cache: Present 00:09:52.936 Atomic Write Unit (Normal): 1 00:09:52.936 Atomic Write Unit (PFail): 1 00:09:52.936 Atomic Compare & Write Unit: 1 00:09:52.936 Fused Compare & Write: Supported 00:09:52.936 Scatter-Gather List 00:09:52.936 SGL Command Set: Supported (Dword aligned) 00:09:52.936 SGL Keyed: Not Supported 00:09:52.936 SGL Bit Bucket Descriptor: Not Supported 00:09:52.936 SGL Metadata Pointer: Not Supported 00:09:52.936 Oversized SGL: Not Supported 00:09:52.936 SGL Metadata Address: Not Supported 00:09:52.936 SGL Offset: Not Supported 00:09:52.936 Transport SGL Data Block: Not Supported 00:09:52.936 Replay Protected Memory Block: Not Supported 00:09:52.936 00:09:52.936 Firmware Slot Information 00:09:52.936 ========================= 00:09:52.936 Active slot: 1 00:09:52.936 Slot 1 Firmware Revision: 24.09 00:09:52.936 00:09:52.936 00:09:52.936 Commands Supported and Effects 00:09:52.936 ============================== 00:09:52.936 Admin Commands 00:09:52.936 -------------- 00:09:52.936 Get Log Page (02h): Supported 00:09:52.936 Identify (06h): Supported 00:09:52.936 Abort (08h): Supported 00:09:52.936 Set Features (09h): Supported 00:09:52.936 Get Features (0Ah): Supported 00:09:52.936 Asynchronous Event Request (0Ch): Supported 00:09:52.936 Keep Alive (18h): Supported 00:09:52.936 I/O Commands 00:09:52.936 ------------ 00:09:52.936 Flush (00h): Supported LBA-Change 00:09:52.936 Write (01h): Supported LBA-Change 00:09:52.936 Read (02h): Supported 00:09:52.936 Compare (05h): Supported 00:09:52.936 Write Zeroes (08h): Supported LBA-Change 00:09:52.936 Dataset Management (09h): Supported LBA-Change 00:09:52.936 Copy (19h): Supported LBA-Change 00:09:52.936 00:09:52.936 Error Log 00:09:52.936 ========= 00:09:52.936 00:09:52.936 Arbitration 00:09:52.936 =========== 00:09:52.936 Arbitration Burst: 1 00:09:52.936 00:09:52.936 Power Management 00:09:52.936 ================ 00:09:52.936 Number of Power States: 1 00:09:52.936 Current Power State: Power State #0 00:09:52.936 Power State #0: 00:09:52.936 Max Power: 0.00 W 00:09:52.936 Non-Operational State: Operational 00:09:52.936 Entry Latency: Not Reported 00:09:52.936 Exit Latency: Not Reported 00:09:52.936 Relative Read Throughput: 0 00:09:52.936 Relative Read Latency: 0 00:09:52.936 Relative Write Throughput: 0 00:09:52.936 Relative Write Latency: 0 00:09:52.936 Idle Power: Not Reported 00:09:52.936 Active Power: Not Reported 00:09:52.936 Non-Operational Permissive Mode: Not Supported 00:09:52.936 00:09:52.936 Health Information 00:09:52.936 ================== 00:09:52.936 Critical Warnings: 00:09:52.936 Available Spare Space: OK 00:09:52.936 Temperature: OK 00:09:52.936 Device Reliability: OK 00:09:52.936 Read Only: No 00:09:52.936 Volatile Memory Backup: OK 00:09:52.936 Current Temperature: 0 Kelvin (-273 Celsius) 00:09:52.936 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:09:52.936 Available Spare: 0% 00:09:52.936 
[2024-07-12 16:58:52.403845] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:09:52.936 [2024-07-12 16:58:52.403862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:09:52.936 [2024-07-12 16:58:52.403908] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:09:52.936 [2024-07-12 16:58:52.403927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:52.936 [2024-07-12 16:58:52.403938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:52.936 [2024-07-12 16:58:52.403948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:52.936 [2024-07-12 16:58:52.403958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:52.936 [2024-07-12 16:58:52.407749] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:09:52.936 [2024-07-12 16:58:52.407772] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:09:52.936 [2024-07-12 16:58:52.408424] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:52.936 [2024-07-12 16:58:52.408503] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:09:52.936 [2024-07-12 16:58:52.408517] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:09:52.936 [2024-07-12 16:58:52.409437] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:09:52.936 [2024-07-12 16:58:52.409462] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:09:52.936 [2024-07-12 16:58:52.409519] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:09:52.936 [2024-07-12 16:58:52.411478] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:52.937 Available Spare Threshold: 0% 00:09:52.937 Life Percentage Used: 0% 00:09:52.937 Data Units Read: 0 00:09:52.937 Data Units Written: 0 00:09:52.937 Host Read Commands: 0 00:09:52.937 Host Write Commands: 0 00:09:52.937 Controller Busy Time: 0 minutes 00:09:52.937 Power Cycles: 0 00:09:52.937 Power On Hours: 0 hours 00:09:52.937 Unsafe Shutdowns: 0 00:09:52.937 Unrecoverable Media Errors: 0 00:09:52.937 Lifetime Error Log Entries: 0 00:09:52.937 Warning Temperature Time: 0 minutes 00:09:52.937 Critical Temperature Time: 0 minutes 00:09:52.937 00:09:52.937 Number of Queues 00:09:52.937 ================ 00:09:52.937 Number of I/O Submission Queues: 127 00:09:52.937 Number of I/O Completion Queues: 127 00:09:52.937 00:09:52.937 Active Namespaces 00:09:52.937 ================= 00:09:52.937 Namespace ID:1 00:09:52.937 Error Recovery Timeout: Unlimited 00:09:52.937 Command 
Set Identifier: NVM (00h) 00:09:52.937 Deallocate: Supported 00:09:52.937 Deallocated/Unwritten Error: Not Supported 00:09:52.937 Deallocated Read Value: Unknown 00:09:52.937 Deallocate in Write Zeroes: Not Supported 00:09:52.937 Deallocated Guard Field: 0xFFFF 00:09:52.937 Flush: Supported 00:09:52.937 Reservation: Supported 00:09:52.937 Namespace Sharing Capabilities: Multiple Controllers 00:09:52.937 Size (in LBAs): 131072 (0GiB) 00:09:52.937 Capacity (in LBAs): 131072 (0GiB) 00:09:52.937 Utilization (in LBAs): 131072 (0GiB) 00:09:52.937 NGUID: CC816DDCDB224336A4691A75559F2CDC 00:09:52.937 UUID: cc816ddc-db22-4336-a469-1a75559f2cdc 00:09:52.937 Thin Provisioning: Not Supported 00:09:52.937 Per-NS Atomic Units: Yes 00:09:52.937 Atomic Boundary Size (Normal): 0 00:09:52.937 Atomic Boundary Size (PFail): 0 00:09:52.937 Atomic Boundary Offset: 0 00:09:52.937 Maximum Single Source Range Length: 65535 00:09:52.937 Maximum Copy Length: 65535 00:09:52.937 Maximum Source Range Count: 1 00:09:52.937 NGUID/EUI64 Never Reused: No 00:09:52.937 Namespace Write Protected: No 00:09:52.937 Number of LBA Formats: 1 00:09:52.937 Current LBA Format: LBA Format #00 00:09:52.937 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:52.937 00:09:52.937 16:58:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:09:52.937 EAL: No free 2048 kB hugepages reported on node 1 00:09:53.192 [2024-07-12 16:58:52.641591] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:58.448 Initializing NVMe Controllers 00:09:58.448 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:58.448 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:09:58.448 Initialization complete. Launching workers. 00:09:58.448 ======================================================== 00:09:58.448 Latency(us) 00:09:58.448 Device Information : IOPS MiB/s Average min max 00:09:58.448 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34452.58 134.58 3714.71 1172.46 10569.27 00:09:58.448 ======================================================== 00:09:58.448 Total : 34452.58 134.58 3714.71 1172.46 10569.27 00:09:58.448 00:09:58.448 [2024-07-12 16:58:57.663377] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:58.448 16:58:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:09:58.448 EAL: No free 2048 kB hugepages reported on node 1 00:09:58.448 [2024-07-12 16:58:57.904558] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:03.763 Initializing NVMe Controllers 00:10:03.763 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:03.763 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:10:03.763 Initialization complete. Launching workers. 
00:10:03.763 ======================================================== 00:10:03.763 Latency(us) 00:10:03.763 Device Information : IOPS MiB/s Average min max 00:10:03.763 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15991.40 62.47 8014.29 5982.37 15853.06 00:10:03.763 ======================================================== 00:10:03.763 Total : 15991.40 62.47 8014.29 5982.37 15853.06 00:10:03.763 00:10:03.763 [2024-07-12 16:59:02.940674] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:03.763 16:59:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:10:03.763 EAL: No free 2048 kB hugepages reported on node 1 00:10:03.763 [2024-07-12 16:59:03.153752] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:09.022 [2024-07-12 16:59:08.232088] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:09.022 Initializing NVMe Controllers 00:10:09.022 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:09.022 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:09.022 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:10:09.022 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:10:09.022 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:10:09.022 Initialization complete. Launching workers. 00:10:09.022 Starting thread on core 2 00:10:09.022 Starting thread on core 3 00:10:09.022 Starting thread on core 1 00:10:09.022 16:59:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:10:09.022 EAL: No free 2048 kB hugepages reported on node 1 00:10:09.022 [2024-07-12 16:59:08.534257] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:12.314 [2024-07-12 16:59:11.921044] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:12.314 Initializing NVMe Controllers 00:10:12.314 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:12.314 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:12.314 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:10:12.314 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:10:12.314 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:10:12.314 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:10:12.314 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:10:12.314 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:10:12.314 Initialization complete. Launching workers. 
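For reference, the read and write spdk_nvme_perf summaries above are internally consistent: the MiB/s column is just IOPS times the 4 KiB I/O size, and average latency times IOPS recovers the -q 128 queue depth (Little's law). A rough Python sketch, with the numbers copied from the two tables above:

    # Sketch: reproduce the MiB/s column of the spdk_nvme_perf summaries above
    # and sanity-check average latency against the queue depth (-q 128, -o 4096).
    IO_SIZE = 4096        # -o 4096
    QUEUE_DEPTH = 128     # -q 128

    runs = {
        "read":  {"iops": 34452.58, "avg_lat_us": 3714.71},   # values from the read run above
        "write": {"iops": 15991.40, "avg_lat_us": 8014.29},   # values from the write run above
    }

    for name, r in runs.items():
        mib_s = r["iops"] * IO_SIZE / (1024 * 1024)
        inflight = r["iops"] * r["avg_lat_us"] * 1e-6   # Little's law estimate
        print(f"{name}: {mib_s:.2f} MiB/s, ~{inflight:.0f} commands in flight "
              f"(queue depth {QUEUE_DEPTH})")

This prints 134.58 and 62.47 MiB/s and roughly 128 outstanding commands for both runs, matching the tables above.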
00:10:12.314 Starting thread on core 1 with urgent priority queue 00:10:12.314 Starting thread on core 2 with urgent priority queue 00:10:12.314 Starting thread on core 3 with urgent priority queue 00:10:12.314 Starting thread on core 0 with urgent priority queue 00:10:12.314 SPDK bdev Controller (SPDK1 ) core 0: 4680.67 IO/s 21.36 secs/100000 ios 00:10:12.314 SPDK bdev Controller (SPDK1 ) core 1: 4281.33 IO/s 23.36 secs/100000 ios 00:10:12.314 SPDK bdev Controller (SPDK1 ) core 2: 5128.67 IO/s 19.50 secs/100000 ios 00:10:12.314 SPDK bdev Controller (SPDK1 ) core 3: 4348.00 IO/s 23.00 secs/100000 ios 00:10:12.314 ======================================================== 00:10:12.314 00:10:12.314 16:59:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:10:12.573 EAL: No free 2048 kB hugepages reported on node 1 00:10:12.573 [2024-07-12 16:59:12.229319] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:12.573 Initializing NVMe Controllers 00:10:12.573 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:12.573 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:12.573 Namespace ID: 1 size: 0GB 00:10:12.573 Initialization complete. 00:10:12.573 INFO: using host memory buffer for IO 00:10:12.573 Hello world! 00:10:12.573 [2024-07-12 16:59:12.262899] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:12.831 16:59:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:10:12.831 EAL: No free 2048 kB hugepages reported on node 1 00:10:13.090 [2024-07-12 16:59:12.547174] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:14.044 Initializing NVMe Controllers 00:10:14.044 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:14.044 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:14.044 Initialization complete. Launching workers. 
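Similarly, the arbitration summary above reports each core twice over: the secs/100000 ios column is simply the configured I/O count (-n 100000) divided by that core's IO/s. A small sketch with the per-core rates copied from the run above:

    # Sketch: derive the "secs/100000 ios" column of the arbitration summary above.
    IO_COUNT = 100_000   # -n 100000

    per_core_iops = {0: 4680.67, 1: 4281.33, 2: 5128.67, 3: 4348.00}

    for core, iops in per_core_iops.items():
        print(f"core {core}: {IO_COUNT / iops:.2f} secs/{IO_COUNT} ios")

This reproduces the 21.36, 23.36, 19.50 and 23.00 second figures for cores 0 through 3.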
00:10:14.044 submit (in ns) avg, min, max = 6327.0, 3533.3, 4021420.0 00:10:14.044 complete (in ns) avg, min, max = 24677.5, 2068.9, 4018726.7 00:10:14.044 00:10:14.044 Submit histogram 00:10:14.044 ================ 00:10:14.044 Range in us Cumulative Count 00:10:14.044 3.532 - 3.556: 0.0898% ( 12) 00:10:14.044 3.556 - 3.579: 0.4341% ( 46) 00:10:14.044 3.579 - 3.603: 1.5192% ( 145) 00:10:14.044 3.603 - 3.627: 4.2958% ( 371) 00:10:14.044 3.627 - 3.650: 10.1856% ( 787) 00:10:14.044 3.650 - 3.674: 17.7743% ( 1014) 00:10:14.044 3.674 - 3.698: 28.6260% ( 1450) 00:10:14.044 3.698 - 3.721: 38.7217% ( 1349) 00:10:14.044 3.721 - 3.745: 47.9644% ( 1235) 00:10:14.044 3.745 - 3.769: 54.2134% ( 835) 00:10:14.044 3.769 - 3.793: 59.8788% ( 757) 00:10:14.044 3.793 - 3.816: 64.4514% ( 611) 00:10:14.044 3.816 - 3.840: 67.9240% ( 464) 00:10:14.044 3.840 - 3.864: 71.4863% ( 476) 00:10:14.044 3.864 - 3.887: 74.5547% ( 410) 00:10:14.044 3.887 - 3.911: 77.8476% ( 440) 00:10:14.044 3.911 - 3.935: 81.6644% ( 510) 00:10:14.044 3.935 - 3.959: 84.7702% ( 415) 00:10:14.044 3.959 - 3.982: 87.1801% ( 322) 00:10:14.044 3.982 - 4.006: 89.4103% ( 298) 00:10:14.044 4.006 - 4.030: 91.1016% ( 226) 00:10:14.044 4.030 - 4.053: 92.2991% ( 160) 00:10:14.044 4.053 - 4.077: 93.3393% ( 139) 00:10:14.044 4.077 - 4.101: 94.1551% ( 109) 00:10:14.044 4.101 - 4.124: 94.8211% ( 89) 00:10:14.044 4.124 - 4.148: 95.5097% ( 92) 00:10:14.044 4.148 - 4.172: 96.1159% ( 81) 00:10:14.044 4.172 - 4.196: 96.4676% ( 47) 00:10:14.044 4.196 - 4.219: 96.8343% ( 49) 00:10:14.044 4.219 - 4.243: 97.0289% ( 26) 00:10:14.044 4.243 - 4.267: 97.1486% ( 16) 00:10:14.044 4.267 - 4.290: 97.2459% ( 13) 00:10:14.044 4.290 - 4.314: 97.3432% ( 13) 00:10:14.044 4.314 - 4.338: 97.4555% ( 15) 00:10:14.044 4.338 - 4.361: 97.5303% ( 10) 00:10:14.044 4.361 - 4.385: 97.6126% ( 11) 00:10:14.044 4.385 - 4.409: 97.6875% ( 10) 00:10:14.044 4.409 - 4.433: 97.7548% ( 9) 00:10:14.044 4.433 - 4.456: 97.7848% ( 4) 00:10:14.044 4.456 - 4.480: 97.8222% ( 5) 00:10:14.044 4.480 - 4.504: 97.8446% ( 3) 00:10:14.044 4.575 - 4.599: 97.8596% ( 2) 00:10:14.044 4.622 - 4.646: 97.8671% ( 1) 00:10:14.044 4.646 - 4.670: 97.8821% ( 2) 00:10:14.044 4.670 - 4.693: 97.8970% ( 2) 00:10:14.044 4.717 - 4.741: 97.9195% ( 3) 00:10:14.044 4.741 - 4.764: 97.9344% ( 2) 00:10:14.044 4.764 - 4.788: 97.9644% ( 4) 00:10:14.044 4.788 - 4.812: 97.9868% ( 3) 00:10:14.044 4.812 - 4.836: 98.0018% ( 2) 00:10:14.044 4.836 - 4.859: 98.0392% ( 5) 00:10:14.044 4.859 - 4.883: 98.0991% ( 8) 00:10:14.044 4.883 - 4.907: 98.1664% ( 9) 00:10:14.044 4.907 - 4.930: 98.2188% ( 7) 00:10:14.044 4.930 - 4.954: 98.2562% ( 5) 00:10:14.044 4.954 - 4.978: 98.3086% ( 7) 00:10:14.044 4.978 - 5.001: 98.3535% ( 6) 00:10:14.044 5.001 - 5.025: 98.4059% ( 7) 00:10:14.044 5.025 - 5.049: 98.4209% ( 2) 00:10:14.044 5.049 - 5.073: 98.4433% ( 3) 00:10:14.044 5.073 - 5.096: 98.4658% ( 3) 00:10:14.044 5.096 - 5.120: 98.5032% ( 5) 00:10:14.044 5.120 - 5.144: 98.5481% ( 6) 00:10:14.044 5.144 - 5.167: 98.5631% ( 2) 00:10:14.044 5.167 - 5.191: 98.5855% ( 3) 00:10:14.044 5.191 - 5.215: 98.6080% ( 3) 00:10:14.044 5.215 - 5.239: 98.6230% ( 2) 00:10:14.044 5.239 - 5.262: 98.6304% ( 1) 00:10:14.044 5.286 - 5.310: 98.6529% ( 3) 00:10:14.044 5.333 - 5.357: 98.6679% ( 2) 00:10:14.044 5.452 - 5.476: 98.6753% ( 1) 00:10:14.044 5.476 - 5.499: 98.6828% ( 1) 00:10:14.044 5.547 - 5.570: 98.6903% ( 1) 00:10:14.044 5.594 - 5.618: 98.7053% ( 2) 00:10:14.044 6.400 - 6.447: 98.7128% ( 1) 00:10:14.044 6.827 - 6.874: 98.7277% ( 2) 00:10:14.044 6.969 - 7.016: 98.7352% ( 1) 
00:10:14.044 7.064 - 7.111: 98.7427% ( 1) 00:10:14.044 7.206 - 7.253: 98.7502% ( 1) 00:10:14.044 7.253 - 7.301: 98.7577% ( 1) 00:10:14.044 7.443 - 7.490: 98.7652% ( 1) 00:10:14.045 7.538 - 7.585: 98.7726% ( 1) 00:10:14.045 7.727 - 7.775: 98.7801% ( 1) 00:10:14.045 7.775 - 7.822: 98.7951% ( 2) 00:10:14.045 7.870 - 7.917: 98.8026% ( 1) 00:10:14.045 7.964 - 8.012: 98.8101% ( 1) 00:10:14.045 8.012 - 8.059: 98.8175% ( 1) 00:10:14.045 8.059 - 8.107: 98.8250% ( 1) 00:10:14.045 8.107 - 8.154: 98.8400% ( 2) 00:10:14.045 8.154 - 8.201: 98.8699% ( 4) 00:10:14.045 8.249 - 8.296: 98.8774% ( 1) 00:10:14.045 8.296 - 8.344: 98.8849% ( 1) 00:10:14.045 8.344 - 8.391: 98.8924% ( 1) 00:10:14.045 8.486 - 8.533: 98.8999% ( 1) 00:10:14.045 8.533 - 8.581: 98.9148% ( 2) 00:10:14.045 8.581 - 8.628: 98.9298% ( 2) 00:10:14.045 8.628 - 8.676: 98.9448% ( 2) 00:10:14.045 8.723 - 8.770: 98.9597% ( 2) 00:10:14.045 8.818 - 8.865: 98.9747% ( 2) 00:10:14.045 9.007 - 9.055: 98.9972% ( 3) 00:10:14.045 9.102 - 9.150: 99.0046% ( 1) 00:10:14.045 9.150 - 9.197: 99.0121% ( 1) 00:10:14.045 9.197 - 9.244: 99.0196% ( 1) 00:10:14.045 9.339 - 9.387: 99.0346% ( 2) 00:10:14.045 9.434 - 9.481: 99.0421% ( 1) 00:10:14.045 9.481 - 9.529: 99.0570% ( 2) 00:10:14.045 9.576 - 9.624: 99.0645% ( 1) 00:10:14.045 9.624 - 9.671: 99.0870% ( 3) 00:10:14.045 9.671 - 9.719: 99.0944% ( 1) 00:10:14.045 9.719 - 9.766: 99.1094% ( 2) 00:10:14.045 9.766 - 9.813: 99.1244% ( 2) 00:10:14.045 9.908 - 9.956: 99.1319% ( 1) 00:10:14.045 9.956 - 10.003: 99.1468% ( 2) 00:10:14.045 10.003 - 10.050: 99.1543% ( 1) 00:10:14.045 10.145 - 10.193: 99.1618% ( 1) 00:10:14.045 10.193 - 10.240: 99.1693% ( 1) 00:10:14.045 10.667 - 10.714: 99.1768% ( 1) 00:10:14.045 10.714 - 10.761: 99.1843% ( 1) 00:10:14.045 10.761 - 10.809: 99.1992% ( 2) 00:10:14.045 11.283 - 11.330: 99.2067% ( 1) 00:10:14.045 11.378 - 11.425: 99.2142% ( 1) 00:10:14.045 11.899 - 11.947: 99.2217% ( 1) 00:10:14.045 12.041 - 12.089: 99.2292% ( 1) 00:10:14.045 12.231 - 12.326: 99.2366% ( 1) 00:10:14.045 12.326 - 12.421: 99.2441% ( 1) 00:10:14.045 12.421 - 12.516: 99.2516% ( 1) 00:10:14.045 12.610 - 12.705: 99.2591% ( 1) 00:10:14.045 12.705 - 12.800: 99.2666% ( 1) 00:10:14.045 12.800 - 12.895: 99.2741% ( 1) 00:10:14.045 13.179 - 13.274: 99.2890% ( 2) 00:10:14.045 13.274 - 13.369: 99.2965% ( 1) 00:10:14.045 13.369 - 13.464: 99.3040% ( 1) 00:10:14.045 13.464 - 13.559: 99.3190% ( 2) 00:10:14.045 13.653 - 13.748: 99.3264% ( 1) 00:10:14.045 13.938 - 14.033: 99.3414% ( 2) 00:10:14.045 17.256 - 17.351: 99.3489% ( 1) 00:10:14.045 17.351 - 17.446: 99.3639% ( 2) 00:10:14.045 17.446 - 17.541: 99.3863% ( 3) 00:10:14.045 17.541 - 17.636: 99.4163% ( 4) 00:10:14.045 17.636 - 17.730: 99.4387% ( 3) 00:10:14.045 17.730 - 17.825: 99.4462% ( 1) 00:10:14.045 17.825 - 17.920: 99.4686% ( 3) 00:10:14.045 17.920 - 18.015: 99.5061% ( 5) 00:10:14.045 18.015 - 18.110: 99.5659% ( 8) 00:10:14.045 18.110 - 18.204: 99.6034% ( 5) 00:10:14.045 18.204 - 18.299: 99.6183% ( 2) 00:10:14.045 18.299 - 18.394: 99.6782% ( 8) 00:10:14.045 18.394 - 18.489: 99.7231% ( 6) 00:10:14.045 18.489 - 18.584: 99.7455% ( 3) 00:10:14.045 18.584 - 18.679: 99.7830% ( 5) 00:10:14.045 18.679 - 18.773: 99.8129% ( 4) 00:10:14.045 18.773 - 18.868: 99.8578% ( 6) 00:10:14.045 18.868 - 18.963: 99.8653% ( 1) 00:10:14.045 18.963 - 19.058: 99.8803% ( 2) 00:10:14.045 19.058 - 19.153: 99.8877% ( 1) 00:10:14.045 19.342 - 19.437: 99.8952% ( 1) 00:10:14.045 19.532 - 19.627: 99.9027% ( 1) 00:10:14.045 19.816 - 19.911: 99.9102% ( 1) 00:10:14.045 20.575 - 20.670: 99.9177% ( 1) 00:10:14.045 
22.661 - 22.756: 99.9252% ( 1) 00:10:14.045 24.178 - 24.273: 99.9401% ( 2) 00:10:14.045 3980.705 - 4004.978: 99.9701% ( 4) 00:10:14.045 4004.978 - 4029.250: 100.0000% ( 4) 00:10:14.045 00:10:14.045 Complete histogram 00:10:14.045 ================== 00:10:14.045 Range in us Cumulative Count 00:10:14.045 2.062 - 2.074: 0.2021% ( 27) 00:10:14.045 2.074 - 2.086: 19.2860% ( 2550) 00:10:14.045 2.086 - 2.098: 44.9633% ( 3431) 00:10:14.045 2.098 - 2.110: 50.9505% ( 800) 00:10:14.045 2.110 - 2.121: 58.8984% ( 1062) 00:10:14.045 2.121 - 2.133: 62.7900% ( 520) 00:10:14.045 2.133 - 2.145: 65.1998% ( 322) 00:10:14.045 2.145 - 2.157: 74.6221% ( 1259) 00:10:14.045 2.157 - 2.169: 82.3754% ( 1036) 00:10:14.045 2.169 - 2.181: 84.6131% ( 299) 00:10:14.045 2.181 - 2.193: 87.4570% ( 380) 00:10:14.045 2.193 - 2.204: 89.6198% ( 289) 00:10:14.045 2.204 - 2.216: 90.6376% ( 136) 00:10:14.045 2.216 - 2.228: 91.5058% ( 116) 00:10:14.045 2.228 - 2.240: 92.4413% ( 125) 00:10:14.045 2.240 - 2.252: 94.2598% ( 243) 00:10:14.045 2.252 - 2.264: 95.1355% ( 117) 00:10:14.045 2.264 - 2.276: 95.4423% ( 41) 00:10:14.045 2.276 - 2.287: 95.5845% ( 19) 00:10:14.045 2.287 - 2.299: 95.6968% ( 15) 00:10:14.045 2.299 - 2.311: 95.8165% ( 16) 00:10:14.045 2.311 - 2.323: 96.0560% ( 32) 00:10:14.045 2.323 - 2.335: 96.1982% ( 19) 00:10:14.045 2.335 - 2.347: 96.2356% ( 5) 00:10:14.045 2.347 - 2.359: 96.2431% ( 1) 00:10:14.045 2.359 - 2.370: 96.3029% ( 8) 00:10:14.045 2.370 - 2.382: 96.3703% ( 9) 00:10:14.045 2.382 - 2.394: 96.4826% ( 15) 00:10:14.045 2.394 - 2.406: 96.7595% ( 37) 00:10:14.045 2.406 - 2.418: 96.9915% ( 31) 00:10:14.045 2.418 - 2.430: 97.3806% ( 52) 00:10:14.045 2.430 - 2.441: 97.6950% ( 42) 00:10:14.045 2.441 - 2.453: 97.9195% ( 30) 00:10:14.045 2.453 - 2.465: 98.0467% ( 17) 00:10:14.045 2.465 - 2.477: 98.2263% ( 24) 00:10:14.045 2.477 - 2.489: 98.3086% ( 11) 00:10:14.045 2.489 - 2.501: 98.3984% ( 12) 00:10:14.045 2.501 - 2.513: 98.4658% ( 9) 00:10:14.045 2.513 - 2.524: 98.4957% ( 4) 00:10:14.045 2.524 - 2.536: 98.5182% ( 3) 00:10:14.045 2.536 - 2.548: 98.5257% ( 1) 00:10:14.045 2.560 - 2.572: 98.5556% ( 4) 00:10:14.045 2.572 - 2.584: 98.5631% ( 1) 00:10:14.045 2.584 - 2.596: 98.5781% ( 2) 00:10:14.045 2.596 - 2.607: 98.5855% ( 1) 00:10:14.045 2.619 - 2.631: 98.5930% ( 1) 00:10:14.045 2.643 - 2.655: 98.6005% ( 1) 00:10:14.045 2.655 - 2.667: 98.6080% ( 1) 00:10:14.045 2.690 - 2.702: 98.6155% ( 1) 00:10:14.045 2.726 - 2.738: 98.6230% ( 1) 00:10:14.045 2.750 - 2.761: 98.6379% ( 2) 00:10:14.045 2.773 - 2.785: 98.6454% ( 1) 00:10:14.045 2.844 - 2.856: 98.6529% ( 1) 00:10:14.045 3.319 - 3.342: 98.6604% ( 1) 00:10:14.045 3.390 - 3.413: 98.6679% ( 1) 00:10:14.045 3.413 - 3.437: 98.6753% ( 1) 00:10:14.045 3.437 - 3.461: 98.6903% ( 2) 00:10:14.045 3.461 - 3.484: 98.6978% ( 1) 00:10:14.045 3.508 - 3.532: 98.7128% ( 2) 00:10:14.045 3.532 - 3.556: 98.7203% ( 1) 00:10:14.045 3.556 - 3.579: 98.7277% ( 1) 00:10:14.045 3.603 - 3.627: 98.7352% ( 1) 00:10:14.045 3.721 - 3.745: 98.7427% ( 1) 00:10:14.045 3.745 - 3.769: 98.7502% ( 1) 00:10:14.045 3.864 - 3.887: 98.7577% ( 1) 00:10:14.045 3.911 - 3.935: 98.7726% ( 2) 00:10:14.045 3.935 - 3.959: 98.7876% ( 2) 00:10:14.045 3.982 - 4.006: 98.7951% ( 1) 00:10:14.045 4.006 - 4.030: 98.8101% ( 2) 00:10:14.045 4.148 - 4.172: 98.8175% ( 1) 00:10:14.045 5.333 - 5.357: 98.8250% ( 1) 00:10:14.045 5.404 - 5.428: 98.8325% ( 1) 00:10:14.045 5.641 - 5.665: 98.8400% ( 1) 00:10:14.045 6.400 - 6.447: 98.8550% ( 2) 00:10:14.045 6.542 - 6.590: 98.8624% ( 1) 00:10:14.045 6.684 - 6.732: 98.8774% ( 2) 00:10:14.045 6.827 
- 6.874: 98.8849% ( 1) 00:10:14.045 6.921 - 6.969: 98.8924% ( 1) 00:10:14.045 6.969 - 7.016: 98.9073% ( 2) 00:10:14.045 7.159 - 7.206: 98.9223% ( 2) 00:10:14.045 7.490 - 7.538: 98.9298% ( 1) 00:10:14.045 7.633 - 7.680: 98.9373% ( 1) 00:10:14.045 7.822 - 7.870: 98.9448% ( 1) 00:10:14.045 7.870 - 7.917: 98.9523% ( 1) 00:10:14.045 7.917 - 7.964: 98.9672% ( 2) 00:10:14.045 8.154 - 8.201: 98.9747% ( 1) 00:10:14.045 8.913 - 8.960: 98.9822% ( 1) 00:10:14.045 15.739 - 15.834: 98.9897% ( 1) 00:10:14.045 15.834 - 15.929: 9[2024-07-12 16:59:13.569312] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:14.045 9.0046% ( 2) 00:10:14.045 15.929 - 16.024: 99.0121% ( 1) 00:10:14.045 16.024 - 16.119: 99.0196% ( 1) 00:10:14.045 16.119 - 16.213: 99.0346% ( 2) 00:10:14.045 16.213 - 16.308: 99.0645% ( 4) 00:10:14.045 16.308 - 16.403: 99.0870% ( 3) 00:10:14.045 16.403 - 16.498: 99.0944% ( 1) 00:10:14.045 16.498 - 16.593: 99.1169% ( 3) 00:10:14.045 16.593 - 16.687: 99.1394% ( 3) 00:10:14.045 16.687 - 16.782: 99.1618% ( 3) 00:10:14.045 16.782 - 16.877: 99.1992% ( 5) 00:10:14.045 16.972 - 17.067: 99.2217% ( 3) 00:10:14.045 17.067 - 17.161: 99.2441% ( 3) 00:10:14.045 17.161 - 17.256: 99.2591% ( 2) 00:10:14.045 17.256 - 17.351: 99.2741% ( 2) 00:10:14.045 17.351 - 17.446: 99.3040% ( 4) 00:10:14.045 17.446 - 17.541: 99.3115% ( 1) 00:10:14.045 17.541 - 17.636: 99.3414% ( 4) 00:10:14.045 17.636 - 17.730: 99.3489% ( 1) 00:10:14.045 17.730 - 17.825: 99.3639% ( 2) 00:10:14.045 17.825 - 17.920: 99.3788% ( 2) 00:10:14.046 18.015 - 18.110: 99.3938% ( 2) 00:10:14.046 18.110 - 18.204: 99.4013% ( 1) 00:10:14.046 18.299 - 18.394: 99.4088% ( 1) 00:10:14.046 18.394 - 18.489: 99.4237% ( 2) 00:10:14.046 18.489 - 18.584: 99.4312% ( 1) 00:10:14.046 22.566 - 22.661: 99.4387% ( 1) 00:10:14.046 3980.705 - 4004.978: 99.8279% ( 52) 00:10:14.046 4004.978 - 4029.250: 100.0000% ( 23) 00:10:14.046 00:10:14.046 16:59:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:10:14.046 16:59:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:10:14.046 16:59:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:10:14.046 16:59:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:10:14.046 16:59:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:14.304 [ 00:10:14.304 { 00:10:14.304 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:14.304 "subtype": "Discovery", 00:10:14.304 "listen_addresses": [], 00:10:14.304 "allow_any_host": true, 00:10:14.304 "hosts": [] 00:10:14.304 }, 00:10:14.304 { 00:10:14.304 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:14.304 "subtype": "NVMe", 00:10:14.304 "listen_addresses": [ 00:10:14.304 { 00:10:14.304 "trtype": "VFIOUSER", 00:10:14.304 "adrfam": "IPv4", 00:10:14.304 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:14.304 "trsvcid": "0" 00:10:14.304 } 00:10:14.304 ], 00:10:14.304 "allow_any_host": true, 00:10:14.304 "hosts": [], 00:10:14.304 "serial_number": "SPDK1", 00:10:14.304 "model_number": "SPDK bdev Controller", 00:10:14.304 "max_namespaces": 32, 00:10:14.304 "min_cntlid": 1, 00:10:14.304 "max_cntlid": 65519, 00:10:14.304 "namespaces": [ 00:10:14.304 { 00:10:14.304 "nsid": 1, 00:10:14.304 "bdev_name": 
"Malloc1", 00:10:14.304 "name": "Malloc1", 00:10:14.304 "nguid": "CC816DDCDB224336A4691A75559F2CDC", 00:10:14.304 "uuid": "cc816ddc-db22-4336-a469-1a75559f2cdc" 00:10:14.304 } 00:10:14.304 ] 00:10:14.304 }, 00:10:14.304 { 00:10:14.304 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:14.304 "subtype": "NVMe", 00:10:14.304 "listen_addresses": [ 00:10:14.304 { 00:10:14.304 "trtype": "VFIOUSER", 00:10:14.304 "adrfam": "IPv4", 00:10:14.304 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:14.304 "trsvcid": "0" 00:10:14.304 } 00:10:14.304 ], 00:10:14.304 "allow_any_host": true, 00:10:14.304 "hosts": [], 00:10:14.304 "serial_number": "SPDK2", 00:10:14.304 "model_number": "SPDK bdev Controller", 00:10:14.304 "max_namespaces": 32, 00:10:14.304 "min_cntlid": 1, 00:10:14.304 "max_cntlid": 65519, 00:10:14.304 "namespaces": [ 00:10:14.304 { 00:10:14.304 "nsid": 1, 00:10:14.304 "bdev_name": "Malloc2", 00:10:14.304 "name": "Malloc2", 00:10:14.304 "nguid": "CED82A1D8D3741F5922DE05342929E0B", 00:10:14.304 "uuid": "ced82a1d-8d37-41f5-922d-e05342929e0b" 00:10:14.304 } 00:10:14.304 ] 00:10:14.304 } 00:10:14.304 ] 00:10:14.304 16:59:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:10:14.304 16:59:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1069047 00:10:14.304 16:59:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:10:14.304 16:59:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:10:14.304 16:59:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:10:14.304 16:59:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:14.304 16:59:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:14.304 16:59:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:10:14.304 16:59:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:10:14.304 16:59:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:10:14.304 EAL: No free 2048 kB hugepages reported on node 1 00:10:14.562 [2024-07-12 16:59:14.027236] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:14.562 Malloc3 00:10:14.562 16:59:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:10:14.820 [2024-07-12 16:59:14.392899] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:14.820 16:59:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:14.820 Asynchronous Event Request test 00:10:14.820 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:14.820 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:14.820 Registering asynchronous event callbacks... 00:10:14.820 Starting namespace attribute notice tests for all controllers... 
00:10:14.820 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:10:14.820 aer_cb - Changed Namespace 00:10:14.820 Cleaning up... 00:10:15.078 [ 00:10:15.078 { 00:10:15.078 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:15.078 "subtype": "Discovery", 00:10:15.078 "listen_addresses": [], 00:10:15.078 "allow_any_host": true, 00:10:15.078 "hosts": [] 00:10:15.078 }, 00:10:15.078 { 00:10:15.078 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:15.078 "subtype": "NVMe", 00:10:15.078 "listen_addresses": [ 00:10:15.078 { 00:10:15.078 "trtype": "VFIOUSER", 00:10:15.078 "adrfam": "IPv4", 00:10:15.078 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:15.078 "trsvcid": "0" 00:10:15.078 } 00:10:15.078 ], 00:10:15.078 "allow_any_host": true, 00:10:15.078 "hosts": [], 00:10:15.078 "serial_number": "SPDK1", 00:10:15.078 "model_number": "SPDK bdev Controller", 00:10:15.078 "max_namespaces": 32, 00:10:15.078 "min_cntlid": 1, 00:10:15.078 "max_cntlid": 65519, 00:10:15.078 "namespaces": [ 00:10:15.078 { 00:10:15.078 "nsid": 1, 00:10:15.078 "bdev_name": "Malloc1", 00:10:15.078 "name": "Malloc1", 00:10:15.078 "nguid": "CC816DDCDB224336A4691A75559F2CDC", 00:10:15.078 "uuid": "cc816ddc-db22-4336-a469-1a75559f2cdc" 00:10:15.078 }, 00:10:15.078 { 00:10:15.078 "nsid": 2, 00:10:15.078 "bdev_name": "Malloc3", 00:10:15.078 "name": "Malloc3", 00:10:15.078 "nguid": "41D6706E986841899BC3720589457718", 00:10:15.078 "uuid": "41d6706e-9868-4189-9bc3-720589457718" 00:10:15.078 } 00:10:15.078 ] 00:10:15.078 }, 00:10:15.078 { 00:10:15.078 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:15.078 "subtype": "NVMe", 00:10:15.078 "listen_addresses": [ 00:10:15.078 { 00:10:15.078 "trtype": "VFIOUSER", 00:10:15.078 "adrfam": "IPv4", 00:10:15.078 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:15.078 "trsvcid": "0" 00:10:15.078 } 00:10:15.078 ], 00:10:15.078 "allow_any_host": true, 00:10:15.078 "hosts": [], 00:10:15.078 "serial_number": "SPDK2", 00:10:15.078 "model_number": "SPDK bdev Controller", 00:10:15.078 "max_namespaces": 32, 00:10:15.078 "min_cntlid": 1, 00:10:15.078 "max_cntlid": 65519, 00:10:15.078 "namespaces": [ 00:10:15.078 { 00:10:15.078 "nsid": 1, 00:10:15.078 "bdev_name": "Malloc2", 00:10:15.078 "name": "Malloc2", 00:10:15.078 "nguid": "CED82A1D8D3741F5922DE05342929E0B", 00:10:15.078 "uuid": "ced82a1d-8d37-41f5-922d-e05342929e0b" 00:10:15.078 } 00:10:15.078 ] 00:10:15.078 } 00:10:15.078 ] 00:10:15.078 16:59:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1069047 00:10:15.078 16:59:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:15.078 16:59:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:10:15.078 16:59:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:10:15.078 16:59:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:10:15.078 [2024-07-12 16:59:14.675163] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
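Both nvmf_get_subsystems dumps above share the same shape: a JSON array of subsystems, each carrying listen_addresses and a namespaces array keyed by nsid, bdev_name, nguid and uuid. A short sketch (Python 3, rpc.py path taken from the trace above) that locates a namespace by UUID in that output:

    # Sketch: parse nvmf_get_subsystems output like the dumps above and report
    # which subsystem and nsid carry a given namespace UUID.
    import json
    import subprocess

    RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"  # path from the trace above

    def find_namespace(uuid: str):
        out = subprocess.check_output([RPC, "nvmf_get_subsystems"], text=True)
        for subsys in json.loads(out):
            for ns in subsys.get("namespaces", []):
                if ns.get("uuid") == uuid:
                    return subsys["nqn"], ns["nsid"], ns["bdev_name"]
        return None

    # Against the dump above:
    # find_namespace("cc816ddc-db22-4336-a469-1a75559f2cdc")
    # -> ("nqn.2019-07.io.spdk:cnode1", 1, "Malloc1")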
00:10:15.078 [2024-07-12 16:59:14.675206] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1069174 ] 00:10:15.078 EAL: No free 2048 kB hugepages reported on node 1 00:10:15.078 [2024-07-12 16:59:14.707876] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:10:15.078 [2024-07-12 16:59:14.717079] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:15.078 [2024-07-12 16:59:14.717107] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f149d1fa000 00:10:15.078 [2024-07-12 16:59:14.718068] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:15.078 [2024-07-12 16:59:14.719087] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:15.078 [2024-07-12 16:59:14.720078] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:15.078 [2024-07-12 16:59:14.721078] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:15.078 [2024-07-12 16:59:14.722083] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:15.079 [2024-07-12 16:59:14.723099] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:15.079 [2024-07-12 16:59:14.724118] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:15.079 [2024-07-12 16:59:14.725113] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:15.079 [2024-07-12 16:59:14.726123] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:15.079 [2024-07-12 16:59:14.726143] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f149d1ef000 00:10:15.079 [2024-07-12 16:59:14.727256] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:15.079 [2024-07-12 16:59:14.742392] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:10:15.079 [2024-07-12 16:59:14.742439] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:10:15.079 [2024-07-12 16:59:14.747538] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:10:15.079 [2024-07-12 16:59:14.747590] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:10:15.079 [2024-07-12 16:59:14.747677] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to 
wait for connect adminq (no timeout) 00:10:15.079 [2024-07-12 16:59:14.747708] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:10:15.079 [2024-07-12 16:59:14.747734] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:10:15.079 [2024-07-12 16:59:14.748546] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:10:15.079 [2024-07-12 16:59:14.748567] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:10:15.079 [2024-07-12 16:59:14.748579] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:10:15.079 [2024-07-12 16:59:14.749557] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:10:15.079 [2024-07-12 16:59:14.749576] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:10:15.079 [2024-07-12 16:59:14.749591] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:10:15.079 [2024-07-12 16:59:14.750566] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:10:15.079 [2024-07-12 16:59:14.750587] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:10:15.079 [2024-07-12 16:59:14.751571] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:10:15.079 [2024-07-12 16:59:14.751593] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:10:15.079 [2024-07-12 16:59:14.751601] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:10:15.079 [2024-07-12 16:59:14.751613] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:10:15.079 [2024-07-12 16:59:14.751729] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:10:15.079 [2024-07-12 16:59:14.751744] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:10:15.079 [2024-07-12 16:59:14.751754] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:10:15.079 [2024-07-12 16:59:14.752582] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:10:15.079 [2024-07-12 16:59:14.753590] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:10:15.079 [2024-07-12 16:59:14.754595] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:10:15.079 [2024-07-12 16:59:14.755592] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:15.079 [2024-07-12 16:59:14.755663] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:10:15.079 [2024-07-12 16:59:14.756615] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:10:15.079 [2024-07-12 16:59:14.756636] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:10:15.079 [2024-07-12 16:59:14.756650] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:10:15.079 [2024-07-12 16:59:14.756676] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:10:15.079 [2024-07-12 16:59:14.756708] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:10:15.079 [2024-07-12 16:59:14.756732] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:15.079 [2024-07-12 16:59:14.756749] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:15.079 [2024-07-12 16:59:14.756785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:15.079 [2024-07-12 16:59:14.764756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:10:15.079 [2024-07-12 16:59:14.764782] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:10:15.079 [2024-07-12 16:59:14.764796] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:10:15.079 [2024-07-12 16:59:14.764804] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:10:15.079 [2024-07-12 16:59:14.764812] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:10:15.079 [2024-07-12 16:59:14.764820] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:10:15.079 [2024-07-12 16:59:14.764828] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:10:15.079 [2024-07-12 16:59:14.764836] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:10:15.079 [2024-07-12 16:59:14.764850] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:10:15.079 [2024-07-12 16:59:14.764866] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 
0x0 00:10:15.338 [2024-07-12 16:59:14.772751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:10:15.338 [2024-07-12 16:59:14.772795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:15.338 [2024-07-12 16:59:14.772811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:15.338 [2024-07-12 16:59:14.772823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:15.338 [2024-07-12 16:59:14.772835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:15.338 [2024-07-12 16:59:14.772844] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:10:15.338 [2024-07-12 16:59:14.772861] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:10:15.338 [2024-07-12 16:59:14.772876] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:10:15.338 [2024-07-12 16:59:14.780750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:10:15.338 [2024-07-12 16:59:14.780773] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:10:15.338 [2024-07-12 16:59:14.780783] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:10:15.338 [2024-07-12 16:59:14.780796] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:10:15.338 [2024-07-12 16:59:14.780807] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:10:15.338 [2024-07-12 16:59:14.780821] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:15.338 [2024-07-12 16:59:14.788751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:10:15.338 [2024-07-12 16:59:14.788824] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:10:15.338 [2024-07-12 16:59:14.788840] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:10:15.338 [2024-07-12 16:59:14.788855] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:10:15.338 [2024-07-12 16:59:14.788863] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:10:15.338 [2024-07-12 16:59:14.788873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 
0x2000002f9000 PRP2 0x0 00:10:15.338 [2024-07-12 16:59:14.796752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:10:15.338 [2024-07-12 16:59:14.796776] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:10:15.338 [2024-07-12 16:59:14.796793] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:10:15.338 [2024-07-12 16:59:14.796808] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:10:15.338 [2024-07-12 16:59:14.796822] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:15.338 [2024-07-12 16:59:14.796831] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:15.338 [2024-07-12 16:59:14.796840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:15.338 [2024-07-12 16:59:14.804749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:10:15.338 [2024-07-12 16:59:14.804779] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:10:15.338 [2024-07-12 16:59:14.804796] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:10:15.338 [2024-07-12 16:59:14.804810] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:15.338 [2024-07-12 16:59:14.804819] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:15.338 [2024-07-12 16:59:14.804829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:15.338 [2024-07-12 16:59:14.812750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:10:15.338 [2024-07-12 16:59:14.812773] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:10:15.338 [2024-07-12 16:59:14.812800] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:10:15.338 [2024-07-12 16:59:14.812815] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:10:15.338 [2024-07-12 16:59:14.812826] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:10:15.338 [2024-07-12 16:59:14.812834] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:10:15.338 [2024-07-12 16:59:14.812843] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:10:15.338 
[2024-07-12 16:59:14.812851] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:10:15.338 [2024-07-12 16:59:14.812859] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:10:15.338 [2024-07-12 16:59:14.812867] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:10:15.338 [2024-07-12 16:59:14.812892] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:10:15.338 [2024-07-12 16:59:14.820751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:10:15.338 [2024-07-12 16:59:14.820777] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:10:15.338 [2024-07-12 16:59:14.826932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:10:15.338 [2024-07-12 16:59:14.826958] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:10:15.338 [2024-07-12 16:59:14.835767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:10:15.338 [2024-07-12 16:59:14.835792] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:15.338 [2024-07-12 16:59:14.843750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:10:15.338 [2024-07-12 16:59:14.843796] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:10:15.338 [2024-07-12 16:59:14.843808] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:10:15.338 [2024-07-12 16:59:14.843814] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:10:15.338 [2024-07-12 16:59:14.843821] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:10:15.338 [2024-07-12 16:59:14.843830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:10:15.338 [2024-07-12 16:59:14.843842] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:10:15.338 [2024-07-12 16:59:14.843850] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:10:15.338 [2024-07-12 16:59:14.843858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:10:15.338 [2024-07-12 16:59:14.843869] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:10:15.338 [2024-07-12 16:59:14.843877] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:15.338 [2024-07-12 16:59:14.843885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 
0x0 00:10:15.338 [2024-07-12 16:59:14.843901] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:10:15.338 [2024-07-12 16:59:14.843909] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:10:15.338 [2024-07-12 16:59:14.843918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:10:15.338 [2024-07-12 16:59:14.851748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:10:15.338 [2024-07-12 16:59:14.851786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:10:15.338 [2024-07-12 16:59:14.851804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:10:15.338 [2024-07-12 16:59:14.851816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:10:15.338 ===================================================== 00:10:15.338 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:15.338 ===================================================== 00:10:15.338 Controller Capabilities/Features 00:10:15.338 ================================ 00:10:15.338 Vendor ID: 4e58 00:10:15.338 Subsystem Vendor ID: 4e58 00:10:15.338 Serial Number: SPDK2 00:10:15.338 Model Number: SPDK bdev Controller 00:10:15.338 Firmware Version: 24.09 00:10:15.338 Recommended Arb Burst: 6 00:10:15.338 IEEE OUI Identifier: 8d 6b 50 00:10:15.338 Multi-path I/O 00:10:15.338 May have multiple subsystem ports: Yes 00:10:15.338 May have multiple controllers: Yes 00:10:15.338 Associated with SR-IOV VF: No 00:10:15.338 Max Data Transfer Size: 131072 00:10:15.339 Max Number of Namespaces: 32 00:10:15.339 Max Number of I/O Queues: 127 00:10:15.339 NVMe Specification Version (VS): 1.3 00:10:15.339 NVMe Specification Version (Identify): 1.3 00:10:15.339 Maximum Queue Entries: 256 00:10:15.339 Contiguous Queues Required: Yes 00:10:15.339 Arbitration Mechanisms Supported 00:10:15.339 Weighted Round Robin: Not Supported 00:10:15.339 Vendor Specific: Not Supported 00:10:15.339 Reset Timeout: 15000 ms 00:10:15.339 Doorbell Stride: 4 bytes 00:10:15.339 NVM Subsystem Reset: Not Supported 00:10:15.339 Command Sets Supported 00:10:15.339 NVM Command Set: Supported 00:10:15.339 Boot Partition: Not Supported 00:10:15.339 Memory Page Size Minimum: 4096 bytes 00:10:15.339 Memory Page Size Maximum: 4096 bytes 00:10:15.339 Persistent Memory Region: Not Supported 00:10:15.339 Optional Asynchronous Events Supported 00:10:15.339 Namespace Attribute Notices: Supported 00:10:15.339 Firmware Activation Notices: Not Supported 00:10:15.339 ANA Change Notices: Not Supported 00:10:15.339 PLE Aggregate Log Change Notices: Not Supported 00:10:15.339 LBA Status Info Alert Notices: Not Supported 00:10:15.339 EGE Aggregate Log Change Notices: Not Supported 00:10:15.339 Normal NVM Subsystem Shutdown event: Not Supported 00:10:15.339 Zone Descriptor Change Notices: Not Supported 00:10:15.339 Discovery Log Change Notices: Not Supported 00:10:15.339 Controller Attributes 00:10:15.339 128-bit Host Identifier: Supported 00:10:15.339 Non-Operational Permissive Mode: Not Supported 00:10:15.339 NVM Sets: Not Supported 00:10:15.339 Read Recovery Levels: Not Supported 
00:10:15.339 Endurance Groups: Not Supported 00:10:15.339 Predictable Latency Mode: Not Supported 00:10:15.339 Traffic Based Keep ALive: Not Supported 00:10:15.339 Namespace Granularity: Not Supported 00:10:15.339 SQ Associations: Not Supported 00:10:15.339 UUID List: Not Supported 00:10:15.339 Multi-Domain Subsystem: Not Supported 00:10:15.339 Fixed Capacity Management: Not Supported 00:10:15.339 Variable Capacity Management: Not Supported 00:10:15.339 Delete Endurance Group: Not Supported 00:10:15.339 Delete NVM Set: Not Supported 00:10:15.339 Extended LBA Formats Supported: Not Supported 00:10:15.339 Flexible Data Placement Supported: Not Supported 00:10:15.339 00:10:15.339 Controller Memory Buffer Support 00:10:15.339 ================================ 00:10:15.339 Supported: No 00:10:15.339 00:10:15.339 Persistent Memory Region Support 00:10:15.339 ================================ 00:10:15.339 Supported: No 00:10:15.339 00:10:15.339 Admin Command Set Attributes 00:10:15.339 ============================ 00:10:15.339 Security Send/Receive: Not Supported 00:10:15.339 Format NVM: Not Supported 00:10:15.339 Firmware Activate/Download: Not Supported 00:10:15.339 Namespace Management: Not Supported 00:10:15.339 Device Self-Test: Not Supported 00:10:15.339 Directives: Not Supported 00:10:15.339 NVMe-MI: Not Supported 00:10:15.339 Virtualization Management: Not Supported 00:10:15.339 Doorbell Buffer Config: Not Supported 00:10:15.339 Get LBA Status Capability: Not Supported 00:10:15.339 Command & Feature Lockdown Capability: Not Supported 00:10:15.339 Abort Command Limit: 4 00:10:15.339 Async Event Request Limit: 4 00:10:15.339 Number of Firmware Slots: N/A 00:10:15.339 Firmware Slot 1 Read-Only: N/A 00:10:15.339 Firmware Activation Without Reset: N/A 00:10:15.339 Multiple Update Detection Support: N/A 00:10:15.339 Firmware Update Granularity: No Information Provided 00:10:15.339 Per-Namespace SMART Log: No 00:10:15.339 Asymmetric Namespace Access Log Page: Not Supported 00:10:15.339 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:10:15.339 Command Effects Log Page: Supported 00:10:15.339 Get Log Page Extended Data: Supported 00:10:15.339 Telemetry Log Pages: Not Supported 00:10:15.339 Persistent Event Log Pages: Not Supported 00:10:15.339 Supported Log Pages Log Page: May Support 00:10:15.339 Commands Supported & Effects Log Page: Not Supported 00:10:15.339 Feature Identifiers & Effects Log Page:May Support 00:10:15.339 NVMe-MI Commands & Effects Log Page: May Support 00:10:15.339 Data Area 4 for Telemetry Log: Not Supported 00:10:15.339 Error Log Page Entries Supported: 128 00:10:15.339 Keep Alive: Supported 00:10:15.339 Keep Alive Granularity: 10000 ms 00:10:15.339 00:10:15.339 NVM Command Set Attributes 00:10:15.339 ========================== 00:10:15.339 Submission Queue Entry Size 00:10:15.339 Max: 64 00:10:15.339 Min: 64 00:10:15.339 Completion Queue Entry Size 00:10:15.339 Max: 16 00:10:15.339 Min: 16 00:10:15.339 Number of Namespaces: 32 00:10:15.339 Compare Command: Supported 00:10:15.339 Write Uncorrectable Command: Not Supported 00:10:15.339 Dataset Management Command: Supported 00:10:15.339 Write Zeroes Command: Supported 00:10:15.339 Set Features Save Field: Not Supported 00:10:15.339 Reservations: Not Supported 00:10:15.339 Timestamp: Not Supported 00:10:15.339 Copy: Supported 00:10:15.339 Volatile Write Cache: Present 00:10:15.339 Atomic Write Unit (Normal): 1 00:10:15.339 Atomic Write Unit (PFail): 1 00:10:15.339 Atomic Compare & Write Unit: 1 00:10:15.339 Fused Compare & Write: 
Supported 00:10:15.339 Scatter-Gather List 00:10:15.339 SGL Command Set: Supported (Dword aligned) 00:10:15.339 SGL Keyed: Not Supported 00:10:15.339 SGL Bit Bucket Descriptor: Not Supported 00:10:15.339 SGL Metadata Pointer: Not Supported 00:10:15.339 Oversized SGL: Not Supported 00:10:15.339 SGL Metadata Address: Not Supported 00:10:15.339 SGL Offset: Not Supported 00:10:15.339 Transport SGL Data Block: Not Supported 00:10:15.339 Replay Protected Memory Block: Not Supported 00:10:15.339 00:10:15.339 Firmware Slot Information 00:10:15.339 ========================= 00:10:15.339 Active slot: 1 00:10:15.339 Slot 1 Firmware Revision: 24.09 00:10:15.339 00:10:15.339 00:10:15.339 Commands Supported and Effects 00:10:15.339 ============================== 00:10:15.339 Admin Commands 00:10:15.339 -------------- 00:10:15.339 Get Log Page (02h): Supported 00:10:15.339 Identify (06h): Supported 00:10:15.339 Abort (08h): Supported 00:10:15.339 Set Features (09h): Supported 00:10:15.339 Get Features (0Ah): Supported 00:10:15.339 Asynchronous Event Request (0Ch): Supported 00:10:15.339 Keep Alive (18h): Supported 00:10:15.339 I/O Commands 00:10:15.339 ------------ 00:10:15.339 Flush (00h): Supported LBA-Change 00:10:15.339 Write (01h): Supported LBA-Change 00:10:15.339 Read (02h): Supported 00:10:15.339 Compare (05h): Supported 00:10:15.339 Write Zeroes (08h): Supported LBA-Change 00:10:15.339 Dataset Management (09h): Supported LBA-Change 00:10:15.339 Copy (19h): Supported LBA-Change 00:10:15.339 00:10:15.339 Error Log 00:10:15.339 ========= 00:10:15.339 00:10:15.339 Arbitration 00:10:15.339 =========== 00:10:15.339 Arbitration Burst: 1 00:10:15.339 00:10:15.339 Power Management 00:10:15.339 ================ 00:10:15.339 Number of Power States: 1 00:10:15.339 Current Power State: Power State #0 00:10:15.339 Power State #0: 00:10:15.339 Max Power: 0.00 W 00:10:15.339 Non-Operational State: Operational 00:10:15.339 Entry Latency: Not Reported 00:10:15.339 Exit Latency: Not Reported 00:10:15.339 Relative Read Throughput: 0 00:10:15.339 Relative Read Latency: 0 00:10:15.339 Relative Write Throughput: 0 00:10:15.339 Relative Write Latency: 0 00:10:15.339 Idle Power: Not Reported 00:10:15.339 Active Power: Not Reported 00:10:15.339 Non-Operational Permissive Mode: Not Supported 00:10:15.339 00:10:15.339 Health Information 00:10:15.339 ================== 00:10:15.339 Critical Warnings: 00:10:15.339 Available Spare Space: OK 00:10:15.339 Temperature: OK 00:10:15.339 Device Reliability: OK 00:10:15.339 Read Only: No 00:10:15.339 Volatile Memory Backup: OK 00:10:15.339 Current Temperature: 0 Kelvin (-273 Celsius) 00:10:15.339 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:10:15.339 Available Spare: 0% 00:10:15.339 Available Sp[2024-07-12 16:59:14.851929] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:10:15.339 [2024-07-12 16:59:14.859752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:10:15.339 [2024-07-12 16:59:14.859810] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:10:15.339 [2024-07-12 16:59:14.859828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.339 [2024-07-12 16:59:14.859839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.339 [2024-07-12 16:59:14.859849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.339 [2024-07-12 16:59:14.859860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.339 [2024-07-12 16:59:14.859944] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:10:15.339 [2024-07-12 16:59:14.859966] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:10:15.339 [2024-07-12 16:59:14.860943] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:15.339 [2024-07-12 16:59:14.861029] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:10:15.340 [2024-07-12 16:59:14.861043] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:10:15.340 [2024-07-12 16:59:14.861954] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:10:15.340 [2024-07-12 16:59:14.861980] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:10:15.340 [2024-07-12 16:59:14.862034] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:10:15.340 [2024-07-12 16:59:14.863221] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:15.340 are Threshold: 0% 00:10:15.340 Life Percentage Used: 0% 00:10:15.340 Data Units Read: 0 00:10:15.340 Data Units Written: 0 00:10:15.340 Host Read Commands: 0 00:10:15.340 Host Write Commands: 0 00:10:15.340 Controller Busy Time: 0 minutes 00:10:15.340 Power Cycles: 0 00:10:15.340 Power On Hours: 0 hours 00:10:15.340 Unsafe Shutdowns: 0 00:10:15.340 Unrecoverable Media Errors: 0 00:10:15.340 Lifetime Error Log Entries: 0 00:10:15.340 Warning Temperature Time: 0 minutes 00:10:15.340 Critical Temperature Time: 0 minutes 00:10:15.340 00:10:15.340 Number of Queues 00:10:15.340 ================ 00:10:15.340 Number of I/O Submission Queues: 127 00:10:15.340 Number of I/O Completion Queues: 127 00:10:15.340 00:10:15.340 Active Namespaces 00:10:15.340 ================= 00:10:15.340 Namespace ID:1 00:10:15.340 Error Recovery Timeout: Unlimited 00:10:15.340 Command Set Identifier: NVM (00h) 00:10:15.340 Deallocate: Supported 00:10:15.340 Deallocated/Unwritten Error: Not Supported 00:10:15.340 Deallocated Read Value: Unknown 00:10:15.340 Deallocate in Write Zeroes: Not Supported 00:10:15.340 Deallocated Guard Field: 0xFFFF 00:10:15.340 Flush: Supported 00:10:15.340 Reservation: Supported 00:10:15.340 Namespace Sharing Capabilities: Multiple Controllers 00:10:15.340 Size (in LBAs): 131072 (0GiB) 00:10:15.340 Capacity (in LBAs): 131072 (0GiB) 00:10:15.340 Utilization (in LBAs): 131072 (0GiB) 00:10:15.340 NGUID: CED82A1D8D3741F5922DE05342929E0B 00:10:15.340 UUID: ced82a1d-8d37-41f5-922d-e05342929e0b 00:10:15.340 Thin Provisioning: Not Supported 00:10:15.340 Per-NS Atomic Units: Yes 00:10:15.340 Atomic Boundary Size (Normal): 0 00:10:15.340 Atomic Boundary Size 
(PFail): 0 00:10:15.340 Atomic Boundary Offset: 0 00:10:15.340 Maximum Single Source Range Length: 65535 00:10:15.340 Maximum Copy Length: 65535 00:10:15.340 Maximum Source Range Count: 1 00:10:15.340 NGUID/EUI64 Never Reused: No 00:10:15.340 Namespace Write Protected: No 00:10:15.340 Number of LBA Formats: 1 00:10:15.340 Current LBA Format: LBA Format #00 00:10:15.340 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:15.340 00:10:15.340 16:59:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:10:15.340 EAL: No free 2048 kB hugepages reported on node 1 00:10:15.598 [2024-07-12 16:59:15.091382] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:20.866 Initializing NVMe Controllers 00:10:20.866 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:20.866 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:20.866 Initialization complete. Launching workers. 00:10:20.866 ======================================================== 00:10:20.866 Latency(us) 00:10:20.866 Device Information : IOPS MiB/s Average min max 00:10:20.866 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34635.54 135.30 3695.10 1175.23 7612.08 00:10:20.866 ======================================================== 00:10:20.866 Total : 34635.54 135.30 3695.10 1175.23 7612.08 00:10:20.866 00:10:20.866 [2024-07-12 16:59:20.197131] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:20.866 16:59:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:10:20.867 EAL: No free 2048 kB hugepages reported on node 1 00:10:20.867 [2024-07-12 16:59:20.443765] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:26.143 Initializing NVMe Controllers 00:10:26.143 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:26.143 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:26.143 Initialization complete. Launching workers. 
00:10:26.143 ======================================================== 00:10:26.143 Latency(us) 00:10:26.143 Device Information : IOPS MiB/s Average min max 00:10:26.143 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 32529.73 127.07 3934.17 1190.82 7475.77 00:10:26.143 ======================================================== 00:10:26.143 Total : 32529.73 127.07 3934.17 1190.82 7475.77 00:10:26.143 00:10:26.143 [2024-07-12 16:59:25.467126] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:26.143 16:59:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:10:26.144 EAL: No free 2048 kB hugepages reported on node 1 00:10:26.144 [2024-07-12 16:59:25.677149] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:31.418 [2024-07-12 16:59:30.799891] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:31.418 Initializing NVMe Controllers 00:10:31.418 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:31.418 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:31.418 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:10:31.418 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:10:31.418 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:10:31.418 Initialization complete. Launching workers. 00:10:31.418 Starting thread on core 2 00:10:31.418 Starting thread on core 3 00:10:31.418 Starting thread on core 1 00:10:31.418 16:59:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:10:31.418 EAL: No free 2048 kB hugepages reported on node 1 00:10:31.676 [2024-07-12 16:59:31.113235] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:34.965 [2024-07-12 16:59:34.331710] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:34.965 Initializing NVMe Controllers 00:10:34.965 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:34.965 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:34.965 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:10:34.965 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:10:34.965 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:10:34.965 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:10:34.965 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:10:34.965 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:10:34.965 Initialization complete. Launching workers. 
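A quick consistency check on the two spdk_nvme_perf summaries above, assuming the flags shown in their command lines (-q 128, -o 4096); the values below are derived from the reported IOPS, not taken from any additional measurement:

    read:  34635.54 IOPS x 4096 B  ~= 135.30 MiB/s;  avg latency ~= 128 / 34635.54 s ~= 3695 us
    write: 32529.73 IOPS x 4096 B  ~= 127.07 MiB/s;  avg latency ~= 128 / 32529.73 s ~= 3935 us

so the MiB/s and average-latency columns follow directly from IOPS via the I/O size and queue depth (Little's law), and the summary tables are internally consistent.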
00:10:34.965 Starting thread on core 1 with urgent priority queue 00:10:34.965 Starting thread on core 2 with urgent priority queue 00:10:34.965 Starting thread on core 3 with urgent priority queue 00:10:34.965 Starting thread on core 0 with urgent priority queue 00:10:34.965 SPDK bdev Controller (SPDK2 ) core 0: 3361.00 IO/s 29.75 secs/100000 ios 00:10:34.965 SPDK bdev Controller (SPDK2 ) core 1: 3274.33 IO/s 30.54 secs/100000 ios 00:10:34.965 SPDK bdev Controller (SPDK2 ) core 2: 3441.33 IO/s 29.06 secs/100000 ios 00:10:34.965 SPDK bdev Controller (SPDK2 ) core 3: 2888.33 IO/s 34.62 secs/100000 ios 00:10:34.965 ======================================================== 00:10:34.965 00:10:34.965 16:59:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:34.965 EAL: No free 2048 kB hugepages reported on node 1 00:10:34.965 [2024-07-12 16:59:34.623245] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:34.965 Initializing NVMe Controllers 00:10:34.965 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:34.965 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:34.965 Namespace ID: 1 size: 0GB 00:10:34.965 Initialization complete. 00:10:34.965 INFO: using host memory buffer for IO 00:10:34.965 Hello world! 00:10:34.965 [2024-07-12 16:59:34.632308] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:35.224 16:59:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:35.224 EAL: No free 2048 kB hugepages reported on node 1 00:10:35.482 [2024-07-12 16:59:34.927580] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:36.497 Initializing NVMe Controllers 00:10:36.497 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:36.497 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:36.497 Initialization complete. Launching workers. 
00:10:36.497 submit (in ns) avg, min, max = 7832.0, 3521.1, 4018231.1 00:10:36.497 complete (in ns) avg, min, max = 27203.0, 2078.9, 4015675.6 00:10:36.497 00:10:36.497 Submit histogram 00:10:36.497 ================ 00:10:36.497 Range in us Cumulative Count 00:10:36.497 3.508 - 3.532: 0.0226% ( 3) 00:10:36.497 3.532 - 3.556: 0.3238% ( 40) 00:10:36.497 3.556 - 3.579: 1.3631% ( 138) 00:10:36.497 3.579 - 3.603: 3.9160% ( 339) 00:10:36.497 3.603 - 3.627: 8.0202% ( 545) 00:10:36.497 3.627 - 3.650: 16.0554% ( 1067) 00:10:36.497 3.650 - 3.674: 24.6931% ( 1147) 00:10:36.497 3.674 - 3.698: 34.3249% ( 1279) 00:10:36.497 3.698 - 3.721: 42.3451% ( 1065) 00:10:36.497 3.721 - 3.745: 49.2959% ( 923) 00:10:36.497 3.745 - 3.769: 54.8385% ( 736) 00:10:36.497 3.769 - 3.793: 59.3945% ( 605) 00:10:36.497 3.793 - 3.816: 63.3707% ( 528) 00:10:36.497 3.816 - 3.840: 66.9252% ( 472) 00:10:36.497 3.840 - 3.864: 70.3818% ( 459) 00:10:36.497 3.864 - 3.887: 74.1321% ( 498) 00:10:36.497 3.887 - 3.911: 77.7920% ( 486) 00:10:36.497 3.911 - 3.935: 81.7230% ( 522) 00:10:36.497 3.935 - 3.959: 84.9085% ( 423) 00:10:36.497 3.959 - 3.982: 87.0623% ( 286) 00:10:36.497 3.982 - 4.006: 88.8621% ( 239) 00:10:36.497 4.006 - 4.030: 90.4887% ( 216) 00:10:36.497 4.030 - 4.053: 91.7765% ( 171) 00:10:36.497 4.053 - 4.077: 92.8308% ( 140) 00:10:36.497 4.077 - 4.101: 93.7194% ( 118) 00:10:36.497 4.101 - 4.124: 94.6005% ( 117) 00:10:36.497 4.124 - 4.148: 95.2030% ( 80) 00:10:36.497 4.148 - 4.172: 95.7150% ( 68) 00:10:36.497 4.172 - 4.196: 96.1217% ( 54) 00:10:36.497 4.196 - 4.219: 96.4305% ( 41) 00:10:36.497 4.219 - 4.243: 96.6488% ( 29) 00:10:36.497 4.243 - 4.267: 96.7317% ( 11) 00:10:36.497 4.267 - 4.290: 96.8070% ( 10) 00:10:36.497 4.290 - 4.314: 96.9425% ( 18) 00:10:36.497 4.314 - 4.338: 97.0555% ( 15) 00:10:36.497 4.338 - 4.361: 97.1233% ( 9) 00:10:36.498 4.361 - 4.385: 97.1534% ( 4) 00:10:36.498 4.385 - 4.409: 97.2061% ( 7) 00:10:36.498 4.409 - 4.433: 97.2287% ( 3) 00:10:36.498 4.433 - 4.456: 97.2362% ( 1) 00:10:36.498 4.456 - 4.480: 97.3040% ( 9) 00:10:36.498 4.480 - 4.504: 97.3567% ( 7) 00:10:36.498 4.504 - 4.527: 97.3718% ( 2) 00:10:36.498 4.527 - 4.551: 97.3793% ( 1) 00:10:36.498 4.551 - 4.575: 97.3869% ( 1) 00:10:36.498 4.575 - 4.599: 97.3944% ( 1) 00:10:36.498 4.622 - 4.646: 97.4170% ( 3) 00:10:36.498 4.646 - 4.670: 97.4320% ( 2) 00:10:36.498 4.670 - 4.693: 97.4471% ( 2) 00:10:36.498 4.717 - 4.741: 97.4697% ( 3) 00:10:36.498 4.741 - 4.764: 97.4848% ( 2) 00:10:36.498 4.764 - 4.788: 97.5149% ( 4) 00:10:36.498 4.788 - 4.812: 97.5450% ( 4) 00:10:36.498 4.812 - 4.836: 97.5751% ( 4) 00:10:36.498 4.836 - 4.859: 97.6203% ( 6) 00:10:36.498 4.859 - 4.883: 97.6504% ( 4) 00:10:36.498 4.883 - 4.907: 97.6730% ( 3) 00:10:36.498 4.907 - 4.930: 97.7182% ( 6) 00:10:36.498 4.930 - 4.954: 97.7559% ( 5) 00:10:36.498 4.954 - 4.978: 97.8387% ( 11) 00:10:36.498 4.978 - 5.001: 97.8688% ( 4) 00:10:36.498 5.001 - 5.025: 97.9065% ( 5) 00:10:36.498 5.025 - 5.049: 97.9517% ( 6) 00:10:36.498 5.049 - 5.073: 97.9818% ( 4) 00:10:36.498 5.073 - 5.096: 98.0044% ( 3) 00:10:36.498 5.096 - 5.120: 98.0119% ( 1) 00:10:36.498 5.120 - 5.144: 98.0721% ( 8) 00:10:36.498 5.144 - 5.167: 98.1023% ( 4) 00:10:36.498 5.167 - 5.191: 98.1173% ( 2) 00:10:36.498 5.191 - 5.215: 98.1475% ( 4) 00:10:36.498 5.215 - 5.239: 98.1776% ( 4) 00:10:36.498 5.286 - 5.310: 98.2152% ( 5) 00:10:36.498 5.310 - 5.333: 98.2228% ( 1) 00:10:36.498 5.357 - 5.381: 98.2303% ( 1) 00:10:36.498 5.381 - 5.404: 98.2378% ( 1) 00:10:36.498 5.404 - 5.428: 98.2529% ( 2) 00:10:36.498 5.452 - 5.476: 98.2604% ( 1) 
00:10:36.498 5.570 - 5.594: 98.2679% ( 1) 00:10:36.498 5.618 - 5.641: 98.2755% ( 1) 00:10:36.498 5.689 - 5.713: 98.2830% ( 1) 00:10:36.498 6.021 - 6.044: 98.2905% ( 1) 00:10:36.498 6.044 - 6.068: 98.2981% ( 1) 00:10:36.498 6.258 - 6.305: 98.3056% ( 1) 00:10:36.498 6.542 - 6.590: 98.3131% ( 1) 00:10:36.498 6.732 - 6.779: 98.3207% ( 1) 00:10:36.498 6.874 - 6.921: 98.3282% ( 1) 00:10:36.498 6.921 - 6.969: 98.3432% ( 2) 00:10:36.498 7.016 - 7.064: 98.3508% ( 1) 00:10:36.498 7.159 - 7.206: 98.3583% ( 1) 00:10:36.498 7.206 - 7.253: 98.3658% ( 1) 00:10:36.498 7.396 - 7.443: 98.3884% ( 3) 00:10:36.498 7.538 - 7.585: 98.4186% ( 4) 00:10:36.498 7.633 - 7.680: 98.4261% ( 1) 00:10:36.498 7.727 - 7.775: 98.4336% ( 1) 00:10:36.498 7.822 - 7.870: 98.4487% ( 2) 00:10:36.498 8.059 - 8.107: 98.4562% ( 1) 00:10:36.498 8.201 - 8.249: 98.4637% ( 1) 00:10:36.498 8.249 - 8.296: 98.4863% ( 3) 00:10:36.498 8.296 - 8.344: 98.4939% ( 1) 00:10:36.498 8.344 - 8.391: 98.5014% ( 1) 00:10:36.498 8.391 - 8.439: 98.5240% ( 3) 00:10:36.498 8.439 - 8.486: 98.5390% ( 2) 00:10:36.498 8.533 - 8.581: 98.5541% ( 2) 00:10:36.498 8.628 - 8.676: 98.5692% ( 2) 00:10:36.498 8.676 - 8.723: 98.5842% ( 2) 00:10:36.498 8.723 - 8.770: 98.5918% ( 1) 00:10:36.498 8.818 - 8.865: 98.5993% ( 1) 00:10:36.498 8.865 - 8.913: 98.6144% ( 2) 00:10:36.498 9.007 - 9.055: 98.6219% ( 1) 00:10:36.498 9.197 - 9.244: 98.6369% ( 2) 00:10:36.498 9.244 - 9.292: 98.6445% ( 1) 00:10:36.498 9.339 - 9.387: 98.6671% ( 3) 00:10:36.498 9.434 - 9.481: 98.6746% ( 1) 00:10:36.498 9.481 - 9.529: 98.6897% ( 2) 00:10:36.498 9.529 - 9.576: 98.6972% ( 1) 00:10:36.498 9.576 - 9.624: 98.7123% ( 2) 00:10:36.498 9.671 - 9.719: 98.7198% ( 1) 00:10:36.498 9.766 - 9.813: 98.7424% ( 3) 00:10:36.498 9.956 - 10.003: 98.7725% ( 4) 00:10:36.498 10.193 - 10.240: 98.7800% ( 1) 00:10:36.498 10.287 - 10.335: 98.7876% ( 1) 00:10:36.498 10.477 - 10.524: 98.7951% ( 1) 00:10:36.498 10.572 - 10.619: 98.8026% ( 1) 00:10:36.498 10.619 - 10.667: 98.8102% ( 1) 00:10:36.498 10.761 - 10.809: 98.8177% ( 1) 00:10:36.498 10.856 - 10.904: 98.8252% ( 1) 00:10:36.498 11.046 - 11.093: 98.8327% ( 1) 00:10:36.498 11.093 - 11.141: 98.8403% ( 1) 00:10:36.498 11.330 - 11.378: 98.8478% ( 1) 00:10:36.498 11.473 - 11.520: 98.8553% ( 1) 00:10:36.498 11.567 - 11.615: 98.8629% ( 1) 00:10:36.498 11.757 - 11.804: 98.8704% ( 1) 00:10:36.498 11.899 - 11.947: 98.8779% ( 1) 00:10:36.498 12.421 - 12.516: 98.8855% ( 1) 00:10:36.498 12.610 - 12.705: 98.8930% ( 1) 00:10:36.498 12.800 - 12.895: 98.9005% ( 1) 00:10:36.498 13.179 - 13.274: 98.9081% ( 1) 00:10:36.498 13.274 - 13.369: 98.9156% ( 1) 00:10:36.498 13.559 - 13.653: 98.9306% ( 2) 00:10:36.498 13.653 - 13.748: 98.9382% ( 1) 00:10:36.498 13.748 - 13.843: 98.9532% ( 2) 00:10:36.498 13.938 - 14.033: 98.9608% ( 1) 00:10:36.498 14.222 - 14.317: 98.9683% ( 1) 00:10:36.498 14.317 - 14.412: 98.9758% ( 1) 00:10:36.498 14.412 - 14.507: 98.9834% ( 1) 00:10:36.498 14.507 - 14.601: 98.9984% ( 2) 00:10:36.498 14.601 - 14.696: 99.0059% ( 1) 00:10:36.498 15.265 - 15.360: 99.0135% ( 1) 00:10:36.498 15.929 - 16.024: 99.0210% ( 1) 00:10:36.498 16.972 - 17.067: 99.0285% ( 1) 00:10:36.498 17.067 - 17.161: 99.0361% ( 1) 00:10:36.498 17.161 - 17.256: 99.0436% ( 1) 00:10:36.498 17.351 - 17.446: 99.0587% ( 2) 00:10:36.498 17.446 - 17.541: 99.0813% ( 3) 00:10:36.498 17.541 - 17.636: 99.1038% ( 3) 00:10:36.498 17.636 - 17.730: 99.1716% ( 9) 00:10:36.498 17.730 - 17.825: 99.1792% ( 1) 00:10:36.498 17.825 - 17.920: 99.2319% ( 7) 00:10:36.498 17.920 - 18.015: 99.2996% ( 9) 00:10:36.498 18.015 - 18.110: 
99.3147% ( 2) 00:10:36.498 18.110 - 18.204: 99.3674% ( 7) 00:10:36.498 18.204 - 18.299: 99.4653% ( 13) 00:10:36.498 18.299 - 18.394: 99.5030% ( 5) 00:10:36.498 18.394 - 18.489: 99.6159% ( 15) 00:10:36.498 18.489 - 18.584: 99.6837% ( 9) 00:10:36.498 18.584 - 18.679: 99.7440% ( 8) 00:10:36.498 18.679 - 18.773: 99.7741% ( 4) 00:10:36.498 18.773 - 18.868: 99.8042% ( 4) 00:10:36.498 18.868 - 18.963: 99.8268% ( 3) 00:10:36.498 18.963 - 19.058: 99.8419% ( 2) 00:10:36.498 19.153 - 19.247: 99.8569% ( 2) 00:10:36.498 19.247 - 19.342: 99.8644% ( 1) 00:10:36.498 19.342 - 19.437: 99.8720% ( 1) 00:10:36.498 19.437 - 19.532: 99.8795% ( 1) 00:10:36.498 20.196 - 20.290: 99.8870% ( 1) 00:10:36.498 25.221 - 25.410: 99.8946% ( 1) 00:10:36.498 26.927 - 27.117: 99.9021% ( 1) 00:10:36.498 3252.527 - 3276.800: 99.9096% ( 1) 00:10:36.498 3980.705 - 4004.978: 99.9774% ( 9) 00:10:36.498 4004.978 - 4029.250: 100.0000% ( 3) 00:10:36.498 00:10:36.498 Complete histogram 00:10:36.498 ================== 00:10:36.498 Range in us Cumulative Count 00:10:36.498 2.074 - 2.086: 2.2969% ( 305) 00:10:36.498 2.086 - 2.098: 23.8572% ( 2863) 00:10:36.498 2.098 - 2.110: 37.1037% ( 1759) 00:10:36.498 2.110 - 2.121: 45.0185% ( 1051) 00:10:36.498 2.121 - 2.133: 57.5646% ( 1666) 00:10:36.498 2.133 - 2.145: 60.6823% ( 414) 00:10:36.498 2.145 - 2.157: 64.0560% ( 448) 00:10:36.498 2.157 - 2.169: 73.7405% ( 1286) 00:10:36.498 2.169 - 2.181: 77.2874% ( 471) 00:10:36.498 2.181 - 2.193: 81.0076% ( 494) 00:10:36.498 2.193 - 2.204: 84.9687% ( 526) 00:10:36.498 2.204 - 2.216: 86.0833% ( 148) 00:10:36.498 2.216 - 2.228: 87.0924% ( 134) 00:10:36.498 2.228 - 2.240: 89.3591% ( 301) 00:10:36.498 2.240 - 2.252: 91.7539% ( 318) 00:10:36.498 2.252 - 2.264: 93.2073% ( 193) 00:10:36.498 2.264 - 2.276: 94.4122% ( 160) 00:10:36.498 2.276 - 2.287: 94.7812% ( 49) 00:10:36.498 2.287 - 2.299: 94.9695% ( 25) 00:10:36.498 2.299 - 2.311: 95.1804% ( 28) 00:10:36.498 2.311 - 2.323: 95.6171% ( 58) 00:10:36.498 2.323 - 2.335: 95.8732% ( 34) 00:10:36.498 2.335 - 2.347: 95.9861% ( 15) 00:10:36.498 2.347 - 2.359: 96.0389% ( 7) 00:10:36.498 2.359 - 2.370: 96.1292% ( 12) 00:10:36.498 2.370 - 2.382: 96.3100% ( 24) 00:10:36.498 2.382 - 2.394: 96.5811% ( 36) 00:10:36.498 2.394 - 2.406: 96.9953% ( 55) 00:10:36.498 2.406 - 2.418: 97.2965% ( 40) 00:10:36.498 2.418 - 2.430: 97.6128% ( 42) 00:10:36.498 2.430 - 2.441: 97.8161% ( 27) 00:10:36.498 2.441 - 2.453: 97.9592% ( 19) 00:10:36.498 2.453 - 2.465: 98.0721% ( 15) 00:10:36.498 2.465 - 2.477: 98.2002% ( 17) 00:10:36.498 2.477 - 2.489: 98.2529% ( 7) 00:10:36.498 2.489 - 2.501: 98.2981% ( 6) 00:10:36.498 2.501 - 2.513: 98.3432% ( 6) 00:10:36.498 2.513 - 2.524: 98.3960% ( 7) 00:10:36.498 2.524 - 2.536: 98.4110% ( 2) 00:10:36.498 2.536 - 2.548: 98.4336% ( 3) 00:10:36.498 2.548 - 2.560: 98.4487% ( 2) 00:10:36.498 2.560 - 2.572: 98.4562% ( 1) 00:10:36.498 2.572 - 2.584: 98.4713% ( 2) 00:10:36.498 2.619 - 2.631: 98.4788% ( 1) 00:10:36.498 2.643 - 2.655: 98.4863% ( 1) 00:10:36.498 2.655 - 2.667: 98.4939% ( 1) 00:10:36.498 2.690 - 2.702: 98.5014% ( 1) 00:10:36.498 2.726 - 2.738: 98.5089% ( 1) 00:10:36.499 2.738 - 2.750: 98.5240% ( 2) 00:10:36.499 2.750 - 2.761: 98.5315% ( 1) 00:10:36.499 3.461 - 3.484: 98.5390% ( 1) 00:10:36.499 3.508 - 3.532: 98.5466% ( 1) 00:10:36.499 3.532 - 3.556: 98.5541% ( 1) 00:10:36.499 3.556 - 3.579: 98.5616% ( 1) 00:10:36.499 3.579 - 3.603: 98.5842% ( 3) 00:10:36.499 3.603 - 3.627: 98.5993% ( 2) 00:10:36.499 3.627 - 3.650: 98.6219% ( 3) 00:10:36.499 3.650 - 3.674: 98.6294% ( 1) 00:10:36.499 3.674 - 3.698: 98.6369% 
( 1) 00:10:36.499 3.698 - 3.721: 98.6445% ( 1) 00:10:36.499 3.769 - 3.793: 98.6671% ( 3) 00:10:36.499 3.793 - 3.816: 98.6821% ( 2) 00:10:36.499 3.816 - 3.840: 98.7047% ( 3) 00:10:36.499 3.840 - 3.864: 98.7123% ( 1) 00:10:36.499 3.887 - 3.911: 98.7198% ( 1) 00:10:36.499 3.982 - 4.006: 98.7273% ( 1) 00:10:36.499 4.006 - 4.030: 98.7348% ( 1) 00:10:36.499 4.030 - 4.053: 98.7424% ( 1) 00:10:36.499 4.219 - 4.243: 98.7574% ( 2) 00:10:36.499 6.305 - 6.353: 98.7725% ( 2) 00:10:36.499 6.495 - 6.542: 98.7800% ( 1) 00:10:36.499 7.206 - 7.253: 9[2024-07-12 16:59:36.029543] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:36.499 8.7876% ( 1) 00:10:36.499 7.870 - 7.917: 98.7951% ( 1) 00:10:36.499 8.107 - 8.154: 98.8102% ( 2) 00:10:36.499 8.154 - 8.201: 98.8177% ( 1) 00:10:36.499 8.344 - 8.391: 98.8252% ( 1) 00:10:36.499 8.676 - 8.723: 98.8327% ( 1) 00:10:36.499 9.197 - 9.244: 98.8403% ( 1) 00:10:36.499 9.387 - 9.434: 98.8478% ( 1) 00:10:36.499 9.529 - 9.576: 98.8553% ( 1) 00:10:36.499 11.188 - 11.236: 98.8629% ( 1) 00:10:36.499 12.326 - 12.421: 98.8704% ( 1) 00:10:36.499 15.455 - 15.550: 98.8779% ( 1) 00:10:36.499 15.739 - 15.834: 98.8855% ( 1) 00:10:36.499 15.834 - 15.929: 98.8930% ( 1) 00:10:36.499 15.929 - 16.024: 98.9306% ( 5) 00:10:36.499 16.024 - 16.119: 98.9457% ( 2) 00:10:36.499 16.119 - 16.213: 98.9834% ( 5) 00:10:36.499 16.213 - 16.308: 98.9984% ( 2) 00:10:36.499 16.308 - 16.403: 99.0436% ( 6) 00:10:36.499 16.403 - 16.498: 99.0737% ( 4) 00:10:36.499 16.498 - 16.593: 99.1189% ( 6) 00:10:36.499 16.593 - 16.687: 99.1415% ( 3) 00:10:36.499 16.687 - 16.782: 99.1792% ( 5) 00:10:36.499 16.782 - 16.877: 99.2319% ( 7) 00:10:36.499 16.877 - 16.972: 99.2695% ( 5) 00:10:36.499 16.972 - 17.067: 99.2846% ( 2) 00:10:36.499 17.067 - 17.161: 99.3072% ( 3) 00:10:36.499 17.161 - 17.256: 99.3147% ( 1) 00:10:36.499 17.256 - 17.351: 99.3222% ( 1) 00:10:36.499 17.351 - 17.446: 99.3298% ( 1) 00:10:36.499 17.636 - 17.730: 99.3448% ( 2) 00:10:36.499 17.920 - 18.015: 99.3524% ( 1) 00:10:36.499 18.489 - 18.584: 99.3599% ( 1) 00:10:36.499 21.144 - 21.239: 99.3674% ( 1) 00:10:36.499 22.566 - 22.661: 99.3750% ( 1) 00:10:36.499 3568.071 - 3592.344: 99.3825% ( 1) 00:10:36.499 3980.705 - 4004.978: 99.9172% ( 71) 00:10:36.499 4004.978 - 4029.250: 100.0000% ( 11) 00:10:36.499 00:10:36.499 16:59:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:10:36.499 16:59:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:10:36.499 16:59:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:10:36.499 16:59:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:10:36.499 16:59:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:36.756 [ 00:10:36.756 { 00:10:36.756 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:36.756 "subtype": "Discovery", 00:10:36.756 "listen_addresses": [], 00:10:36.756 "allow_any_host": true, 00:10:36.756 "hosts": [] 00:10:36.756 }, 00:10:36.756 { 00:10:36.756 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:36.756 "subtype": "NVMe", 00:10:36.756 "listen_addresses": [ 00:10:36.756 { 00:10:36.756 "trtype": "VFIOUSER", 00:10:36.756 "adrfam": "IPv4", 00:10:36.756 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 
00:10:36.756 "trsvcid": "0" 00:10:36.756 } 00:10:36.756 ], 00:10:36.756 "allow_any_host": true, 00:10:36.756 "hosts": [], 00:10:36.756 "serial_number": "SPDK1", 00:10:36.756 "model_number": "SPDK bdev Controller", 00:10:36.756 "max_namespaces": 32, 00:10:36.756 "min_cntlid": 1, 00:10:36.756 "max_cntlid": 65519, 00:10:36.756 "namespaces": [ 00:10:36.756 { 00:10:36.756 "nsid": 1, 00:10:36.756 "bdev_name": "Malloc1", 00:10:36.756 "name": "Malloc1", 00:10:36.756 "nguid": "CC816DDCDB224336A4691A75559F2CDC", 00:10:36.756 "uuid": "cc816ddc-db22-4336-a469-1a75559f2cdc" 00:10:36.756 }, 00:10:36.756 { 00:10:36.756 "nsid": 2, 00:10:36.756 "bdev_name": "Malloc3", 00:10:36.756 "name": "Malloc3", 00:10:36.756 "nguid": "41D6706E986841899BC3720589457718", 00:10:36.756 "uuid": "41d6706e-9868-4189-9bc3-720589457718" 00:10:36.756 } 00:10:36.756 ] 00:10:36.756 }, 00:10:36.756 { 00:10:36.756 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:36.756 "subtype": "NVMe", 00:10:36.756 "listen_addresses": [ 00:10:36.756 { 00:10:36.756 "trtype": "VFIOUSER", 00:10:36.756 "adrfam": "IPv4", 00:10:36.756 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:36.756 "trsvcid": "0" 00:10:36.756 } 00:10:36.756 ], 00:10:36.756 "allow_any_host": true, 00:10:36.756 "hosts": [], 00:10:36.756 "serial_number": "SPDK2", 00:10:36.756 "model_number": "SPDK bdev Controller", 00:10:36.756 "max_namespaces": 32, 00:10:36.756 "min_cntlid": 1, 00:10:36.756 "max_cntlid": 65519, 00:10:36.756 "namespaces": [ 00:10:36.756 { 00:10:36.756 "nsid": 1, 00:10:36.756 "bdev_name": "Malloc2", 00:10:36.756 "name": "Malloc2", 00:10:36.756 "nguid": "CED82A1D8D3741F5922DE05342929E0B", 00:10:36.756 "uuid": "ced82a1d-8d37-41f5-922d-e05342929e0b" 00:10:36.756 } 00:10:36.756 ] 00:10:36.756 } 00:10:36.756 ] 00:10:36.756 16:59:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:10:36.757 16:59:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1071709 00:10:36.757 16:59:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:10:36.757 16:59:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:10:36.757 16:59:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:10:36.757 16:59:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:36.757 16:59:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:10:36.757 16:59:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:10:36.757 16:59:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:10:36.757 16:59:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:10:36.757 EAL: No free 2048 kB hugepages reported on node 1 00:10:37.014 [2024-07-12 16:59:36.536212] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:37.014 Malloc4 00:10:37.014 16:59:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:10:37.272 [2024-07-12 16:59:36.920022] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:37.272 16:59:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:37.530 Asynchronous Event Request test 00:10:37.530 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:37.530 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:37.530 Registering asynchronous event callbacks... 00:10:37.530 Starting namespace attribute notice tests for all controllers... 00:10:37.530 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:10:37.530 aer_cb - Changed Namespace 00:10:37.530 Cleaning up... 00:10:37.530 [ 00:10:37.530 { 00:10:37.530 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:37.530 "subtype": "Discovery", 00:10:37.530 "listen_addresses": [], 00:10:37.530 "allow_any_host": true, 00:10:37.530 "hosts": [] 00:10:37.530 }, 00:10:37.530 { 00:10:37.530 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:37.530 "subtype": "NVMe", 00:10:37.530 "listen_addresses": [ 00:10:37.530 { 00:10:37.530 "trtype": "VFIOUSER", 00:10:37.530 "adrfam": "IPv4", 00:10:37.530 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:37.530 "trsvcid": "0" 00:10:37.530 } 00:10:37.530 ], 00:10:37.530 "allow_any_host": true, 00:10:37.530 "hosts": [], 00:10:37.530 "serial_number": "SPDK1", 00:10:37.530 "model_number": "SPDK bdev Controller", 00:10:37.530 "max_namespaces": 32, 00:10:37.530 "min_cntlid": 1, 00:10:37.530 "max_cntlid": 65519, 00:10:37.530 "namespaces": [ 00:10:37.530 { 00:10:37.530 "nsid": 1, 00:10:37.530 "bdev_name": "Malloc1", 00:10:37.530 "name": "Malloc1", 00:10:37.530 "nguid": "CC816DDCDB224336A4691A75559F2CDC", 00:10:37.530 "uuid": "cc816ddc-db22-4336-a469-1a75559f2cdc" 00:10:37.530 }, 00:10:37.530 { 00:10:37.530 "nsid": 2, 00:10:37.530 "bdev_name": "Malloc3", 00:10:37.530 "name": "Malloc3", 00:10:37.530 "nguid": "41D6706E986841899BC3720589457718", 00:10:37.530 "uuid": "41d6706e-9868-4189-9bc3-720589457718" 00:10:37.530 } 00:10:37.530 ] 00:10:37.530 }, 00:10:37.530 { 00:10:37.530 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:37.530 "subtype": "NVMe", 00:10:37.530 "listen_addresses": [ 00:10:37.530 { 00:10:37.530 "trtype": "VFIOUSER", 00:10:37.530 "adrfam": "IPv4", 00:10:37.530 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:37.530 "trsvcid": "0" 00:10:37.530 } 00:10:37.530 ], 00:10:37.530 "allow_any_host": true, 00:10:37.530 "hosts": [], 00:10:37.530 "serial_number": "SPDK2", 00:10:37.530 "model_number": "SPDK bdev Controller", 00:10:37.530 
"max_namespaces": 32, 00:10:37.530 "min_cntlid": 1, 00:10:37.530 "max_cntlid": 65519, 00:10:37.530 "namespaces": [ 00:10:37.530 { 00:10:37.530 "nsid": 1, 00:10:37.530 "bdev_name": "Malloc2", 00:10:37.530 "name": "Malloc2", 00:10:37.530 "nguid": "CED82A1D8D3741F5922DE05342929E0B", 00:10:37.530 "uuid": "ced82a1d-8d37-41f5-922d-e05342929e0b" 00:10:37.530 }, 00:10:37.530 { 00:10:37.530 "nsid": 2, 00:10:37.530 "bdev_name": "Malloc4", 00:10:37.530 "name": "Malloc4", 00:10:37.530 "nguid": "CD749FCC62E24E778F1075C853796797", 00:10:37.530 "uuid": "cd749fcc-62e2-4e77-8f10-75c853796797" 00:10:37.530 } 00:10:37.530 ] 00:10:37.530 } 00:10:37.530 ] 00:10:37.530 16:59:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1071709 00:10:37.530 16:59:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:10:37.530 16:59:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1066093 00:10:37.530 16:59:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 1066093 ']' 00:10:37.530 16:59:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1066093 00:10:37.530 16:59:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:10:37.530 16:59:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:37.530 16:59:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1066093 00:10:37.789 16:59:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:37.789 16:59:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:37.789 16:59:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1066093' 00:10:37.789 killing process with pid 1066093 00:10:37.789 16:59:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1066093 00:10:37.789 16:59:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1066093 00:10:38.047 16:59:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:38.047 16:59:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:38.047 16:59:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:10:38.047 16:59:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:10:38.047 16:59:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:10:38.047 16:59:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1071851 00:10:38.047 16:59:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:10:38.047 16:59:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1071851' 00:10:38.047 Process pid: 1071851 00:10:38.047 16:59:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:38.047 16:59:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1071851 00:10:38.047 16:59:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1071851 ']' 00:10:38.047 16:59:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.047 16:59:37 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:38.047 16:59:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.047 16:59:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:38.047 16:59:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:38.047 [2024-07-12 16:59:37.650534] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:10:38.047 [2024-07-12 16:59:37.651553] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:10:38.047 [2024-07-12 16:59:37.651620] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:38.047 EAL: No free 2048 kB hugepages reported on node 1 00:10:38.047 [2024-07-12 16:59:37.709433] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:38.307 [2024-07-12 16:59:37.813842] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:38.307 [2024-07-12 16:59:37.813900] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:38.307 [2024-07-12 16:59:37.813929] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:38.307 [2024-07-12 16:59:37.813940] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:38.307 [2024-07-12 16:59:37.813950] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:38.307 [2024-07-12 16:59:37.814009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:38.307 [2024-07-12 16:59:37.814071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:38.307 [2024-07-12 16:59:37.814107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:38.307 [2024-07-12 16:59:37.814110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.307 [2024-07-12 16:59:37.915693] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:10:38.307 [2024-07-12 16:59:37.916235] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:10:38.307 [2024-07-12 16:59:37.916879] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:10:38.307 [2024-07-12 16:59:37.916994] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:10:38.307 [2024-07-12 16:59:37.917119] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
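For readability: the target setup that the trace below performs one RPC call at a time (transport, malloc bdevs, subsystems, namespaces, and listeners for vfio-user1/1 and vfio-user2/2) condenses to roughly the sketch that follows. The NQNs, paths, and flags are the ones visible in the trace; the loop and shortened rpc.py path are an illustrative consolidation, not a verbatim excerpt of nvmf_vfio_user.sh.

    # interrupt-mode VFIO-user transport, as passed via "-M -I" in the trace
    scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
    # one malloc bdev + subsystem + vfio-user listener per emulated controller
    for i in 1 2; do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
        scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
            -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done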
00:10:38.307 16:59:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:38.307 16:59:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:10:38.307 16:59:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:10:39.686 16:59:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:10:39.686 16:59:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:10:39.686 16:59:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:10:39.686 16:59:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:39.686 16:59:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:10:39.686 16:59:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:39.943 Malloc1 00:10:39.943 16:59:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:10:40.202 16:59:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:10:40.460 16:59:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:10:40.718 16:59:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:40.718 16:59:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:10:40.718 16:59:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:40.976 Malloc2 00:10:40.976 16:59:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:10:41.234 16:59:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:10:41.491 16:59:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:10:41.750 16:59:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:10:41.750 16:59:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1071851 00:10:41.750 16:59:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 1071851 ']' 00:10:41.750 16:59:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1071851 00:10:41.750 16:59:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:10:41.750 16:59:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:41.750 16:59:41 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1071851 00:10:41.750 16:59:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:41.750 16:59:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:41.750 16:59:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1071851' 00:10:41.750 killing process with pid 1071851 00:10:41.750 16:59:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1071851 00:10:41.750 16:59:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1071851 00:10:42.009 16:59:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:42.009 16:59:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:42.009 00:10:42.009 real 0m53.172s 00:10:42.009 user 3m30.018s 00:10:42.009 sys 0m4.328s 00:10:42.009 16:59:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:42.009 16:59:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:42.009 ************************************ 00:10:42.009 END TEST nvmf_vfio_user 00:10:42.009 ************************************ 00:10:42.009 16:59:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:42.009 16:59:41 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:42.009 16:59:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:42.009 16:59:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:42.009 16:59:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:42.009 ************************************ 00:10:42.009 START TEST nvmf_vfio_user_nvme_compliance 00:10:42.009 ************************************ 00:10:42.009 16:59:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:42.268 * Looking for test storage... 
00:10:42.268 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:10:42.268 16:59:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:42.268 16:59:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:10:42.268 16:59:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:42.268 16:59:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:42.268 16:59:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:42.268 16:59:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:42.268 16:59:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:42.268 16:59:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:42.268 16:59:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:42.268 16:59:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:42.268 16:59:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:42.268 16:59:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:42.268 16:59:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:42.268 16:59:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:42.268 16:59:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:42.268 16:59:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:42.268 16:59:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:42.268 16:59:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:42.268 16:59:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:42.268 16:59:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:42.268 16:59:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:42.268 16:59:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:42.268 16:59:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.268 16:59:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.268 16:59:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.268 16:59:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:10:42.268 16:59:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.268 16:59:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:10:42.268 16:59:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:42.268 16:59:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:42.268 16:59:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:42.268 16:59:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:42.268 16:59:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:42.268 16:59:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:42.268 16:59:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:42.268 16:59:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:42.268 16:59:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:42.268 16:59:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:42.268 16:59:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:10:42.268 16:59:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:10:42.268 16:59:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:10:42.268 16:59:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=1072447 00:10:42.268 16:59:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:42.268 16:59:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1072447' 00:10:42.268 Process pid: 1072447 00:10:42.268 16:59:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:42.268 16:59:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1072447 00:10:42.268 16:59:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 1072447 ']' 00:10:42.268 16:59:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.268 16:59:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:42.268 16:59:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.268 16:59:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:42.268 16:59:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:42.268 [2024-07-12 16:59:41.777060] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:10:42.268 [2024-07-12 16:59:41.777155] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:42.268 EAL: No free 2048 kB hugepages reported on node 1 00:10:42.268 [2024-07-12 16:59:41.835298] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:42.268 [2024-07-12 16:59:41.942236] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:42.268 [2024-07-12 16:59:41.942300] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:42.268 [2024-07-12 16:59:41.942323] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:42.268 [2024-07-12 16:59:41.942340] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:42.268 [2024-07-12 16:59:41.942350] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
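The compliance run below stands up a single vfio-user controller before launching the nvme_compliance binary. Condensed from the rpc_cmd trace that follows (rpc_cmd being the test harness's RPC helper; NQN, paths, and flags are exactly those in the trace), the setup is roughly:

    rpc_cmd nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    rpc_cmd bdev_malloc_create 64 512 -b malloc0
    rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
    rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
    # the suite then attaches over vfio-user:
    test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'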
00:10:42.268 [2024-07-12 16:59:41.942428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:42.268 [2024-07-12 16:59:41.942540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:42.268 [2024-07-12 16:59:41.942544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.528 16:59:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:42.528 16:59:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:10:42.528 16:59:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:10:43.464 16:59:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:10:43.464 16:59:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:10:43.464 16:59:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:10:43.464 16:59:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.464 16:59:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:43.464 16:59:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.464 16:59:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:10:43.464 16:59:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:10:43.464 16:59:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.464 16:59:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:43.464 malloc0 00:10:43.464 16:59:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.464 16:59:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:10:43.464 16:59:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.464 16:59:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:43.464 16:59:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.464 16:59:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:10:43.464 16:59:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.464 16:59:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:43.464 16:59:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.464 16:59:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:10:43.464 16:59:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.464 16:59:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:43.464 16:59:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.464 
16:59:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:10:43.722 EAL: No free 2048 kB hugepages reported on node 1 00:10:43.722 00:10:43.722 00:10:43.722 CUnit - A unit testing framework for C - Version 2.1-3 00:10:43.722 http://cunit.sourceforge.net/ 00:10:43.722 00:10:43.722 00:10:43.722 Suite: nvme_compliance 00:10:43.722 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-12 16:59:43.302409] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:43.722 [2024-07-12 16:59:43.303917] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:10:43.722 [2024-07-12 16:59:43.303944] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:10:43.722 [2024-07-12 16:59:43.303961] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:10:43.722 [2024-07-12 16:59:43.305431] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:43.722 passed 00:10:43.722 Test: admin_identify_ctrlr_verify_fused ...[2024-07-12 16:59:43.393044] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:43.722 [2024-07-12 16:59:43.396083] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:43.981 passed 00:10:43.981 Test: admin_identify_ns ...[2024-07-12 16:59:43.482414] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:43.981 [2024-07-12 16:59:43.542757] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:10:43.981 [2024-07-12 16:59:43.550759] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:10:43.981 [2024-07-12 16:59:43.571883] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:43.981 passed 00:10:43.981 Test: admin_get_features_mandatory_features ...[2024-07-12 16:59:43.652538] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:43.981 [2024-07-12 16:59:43.657570] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:44.241 passed 00:10:44.241 Test: admin_get_features_optional_features ...[2024-07-12 16:59:43.742141] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:44.241 [2024-07-12 16:59:43.745169] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:44.241 passed 00:10:44.241 Test: admin_set_features_number_of_queues ...[2024-07-12 16:59:43.828308] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:44.241 [2024-07-12 16:59:43.932856] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:44.501 passed 00:10:44.501 Test: admin_get_log_page_mandatory_logs ...[2024-07-12 16:59:44.018426] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:44.501 [2024-07-12 16:59:44.021447] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:44.501 passed 00:10:44.501 Test: admin_get_log_page_with_lpo ...[2024-07-12 16:59:44.102560] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:44.501 [2024-07-12 16:59:44.170757] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:10:44.501 [2024-07-12 16:59:44.183830] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:44.760 passed 00:10:44.760 Test: fabric_property_get ...[2024-07-12 16:59:44.267703] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:44.760 [2024-07-12 16:59:44.269031] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:10:44.760 [2024-07-12 16:59:44.270747] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:44.760 passed 00:10:44.760 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-12 16:59:44.356323] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:44.760 [2024-07-12 16:59:44.357626] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:10:44.760 [2024-07-12 16:59:44.359342] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:44.760 passed 00:10:44.760 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-12 16:59:44.442531] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:45.019 [2024-07-12 16:59:44.526775] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:45.019 [2024-07-12 16:59:44.542746] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:45.019 [2024-07-12 16:59:44.547859] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:45.019 passed 00:10:45.019 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-12 16:59:44.631599] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:45.019 [2024-07-12 16:59:44.632909] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:10:45.019 [2024-07-12 16:59:44.634620] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:45.019 passed 00:10:45.277 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-12 16:59:44.714824] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:45.277 [2024-07-12 16:59:44.792762] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:10:45.278 [2024-07-12 16:59:44.816764] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:45.278 [2024-07-12 16:59:44.821845] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:45.278 passed 00:10:45.278 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-12 16:59:44.905632] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:45.278 [2024-07-12 16:59:44.906941] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:10:45.278 [2024-07-12 16:59:44.907000] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:10:45.278 [2024-07-12 16:59:44.908652] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:45.278 passed 00:10:45.537 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-12 16:59:44.992157] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:45.537 [2024-07-12 16:59:45.087747] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:10:45.537 [2024-07-12 16:59:45.095761] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:10:45.537 [2024-07-12 16:59:45.103752] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:10:45.537 [2024-07-12 16:59:45.111744] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:10:45.537 [2024-07-12 16:59:45.140865] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:45.537 passed 00:10:45.537 Test: admin_create_io_sq_verify_pc ...[2024-07-12 16:59:45.221473] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:45.797 [2024-07-12 16:59:45.236764] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:10:45.797 [2024-07-12 16:59:45.253894] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:45.797 passed 00:10:45.797 Test: admin_create_io_qp_max_qps ...[2024-07-12 16:59:45.341464] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:47.173 [2024-07-12 16:59:46.453757] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:10:47.173 [2024-07-12 16:59:46.839838] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:47.431 passed 00:10:47.431 Test: admin_create_io_sq_shared_cq ...[2024-07-12 16:59:46.923104] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:47.431 [2024-07-12 16:59:47.054748] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:10:47.431 [2024-07-12 16:59:47.091833] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:47.691 passed 00:10:47.691 00:10:47.691 Run Summary: Type Total Ran Passed Failed Inactive 00:10:47.691 suites 1 1 n/a 0 0 00:10:47.691 tests 18 18 18 0 0 00:10:47.691 asserts 360 360 360 0 n/a 00:10:47.691 00:10:47.691 Elapsed time = 1.570 seconds 00:10:47.691 16:59:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1072447 00:10:47.691 16:59:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 1072447 ']' 00:10:47.691 16:59:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 1072447 00:10:47.691 16:59:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:10:47.691 16:59:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:47.691 16:59:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1072447 00:10:47.691 16:59:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:47.691 16:59:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:47.691 16:59:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1072447' 00:10:47.691 killing process with pid 1072447 00:10:47.691 16:59:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 1072447 00:10:47.691 16:59:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 1072447 00:10:47.950 16:59:47 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:10:47.950 16:59:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:10:47.950 00:10:47.950 real 0m5.804s 00:10:47.950 user 0m16.255s 00:10:47.950 sys 0m0.555s 00:10:47.950 16:59:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:47.950 16:59:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:47.950 ************************************ 00:10:47.950 END TEST nvmf_vfio_user_nvme_compliance 00:10:47.950 ************************************ 00:10:47.950 16:59:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:47.950 16:59:47 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:10:47.950 16:59:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:47.950 16:59:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:47.950 16:59:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:47.950 ************************************ 00:10:47.950 START TEST nvmf_vfio_user_fuzz 00:10:47.950 ************************************ 00:10:47.950 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:10:47.950 * Looking for test storage... 00:10:47.950 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:47.950 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:47.950 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:10:47.950 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:47.950 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:47.950 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:47.950 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:47.950 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:47.950 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:47.950 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:47.950 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:47.950 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:47.950 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:47.950 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:47.950 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:47.950 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:47.950 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:47.950 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:10:47.950 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:47.950 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:47.950 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:47.950 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:47.950 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:47.951 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.951 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.951 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.951 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:10:47.951 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.951 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:10:47.951 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:47.951 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:47.951 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:47.951 16:59:47 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:47.951 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:47.951 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:47.951 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:47.951 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:47.951 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:10:47.951 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:47.951 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:10:47.951 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:10:47.951 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:10:47.951 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:10:47.951 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:10:47.951 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1073175 00:10:47.951 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:47.951 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1073175' 00:10:47.951 Process pid: 1073175 00:10:47.951 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:47.951 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1073175 00:10:47.951 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 1073175 ']' 00:10:47.951 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.951 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:47.951 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:47.951 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:47.951 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:48.520 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:48.520 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:10:48.520 16:59:47 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:10:49.456 16:59:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:10:49.456 16:59:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.456 16:59:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:49.456 16:59:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.456 16:59:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:10:49.456 16:59:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:10:49.456 16:59:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.456 16:59:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:49.456 malloc0 00:10:49.456 16:59:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.456 16:59:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:10:49.456 16:59:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.456 16:59:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:49.456 16:59:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.456 16:59:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:10:49.456 16:59:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.456 16:59:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:49.456 16:59:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.456 16:59:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:10:49.456 16:59:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.456 16:59:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:49.456 16:59:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.456 16:59:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:10:49.456 16:59:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:11:21.514 Fuzzing completed. 
Shutting down the fuzz application 00:11:21.514 00:11:21.514 Dumping successful admin opcodes: 00:11:21.514 8, 9, 10, 24, 00:11:21.514 Dumping successful io opcodes: 00:11:21.514 0, 00:11:21.514 NS: 0x200003a1ef00 I/O qp, Total commands completed: 629265, total successful commands: 2441, random_seed: 4112567424 00:11:21.514 NS: 0x200003a1ef00 admin qp, Total commands completed: 80058, total successful commands: 631, random_seed: 3954192832 00:11:21.514 17:00:19 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:11:21.514 17:00:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.514 17:00:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:21.514 17:00:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.514 17:00:19 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1073175 00:11:21.514 17:00:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 1073175 ']' 00:11:21.514 17:00:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 1073175 00:11:21.514 17:00:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:11:21.514 17:00:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:21.514 17:00:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1073175 00:11:21.514 17:00:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:21.514 17:00:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:21.514 17:00:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1073175' 00:11:21.514 killing process with pid 1073175 00:11:21.514 17:00:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 1073175 00:11:21.514 17:00:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 1073175 00:11:21.514 17:00:19 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:11:21.514 17:00:19 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:11:21.514 00:11:21.514 real 0m32.264s 00:11:21.514 user 0m30.318s 00:11:21.514 sys 0m28.561s 00:11:21.514 17:00:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:21.514 17:00:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:21.514 ************************************ 00:11:21.514 END TEST nvmf_vfio_user_fuzz 00:11:21.514 ************************************ 00:11:21.514 17:00:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:21.514 17:00:19 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:21.514 17:00:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:21.514 17:00:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:21.514 17:00:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:21.514 ************************************ 00:11:21.514 
START TEST nvmf_host_management 00:11:21.514 ************************************ 00:11:21.514 17:00:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:21.514 * Looking for test storage... 00:11:21.514 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:21.514 17:00:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:21.514 17:00:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:11:21.514 17:00:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:21.514 17:00:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:21.514 17:00:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:21.514 17:00:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:21.514 17:00:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:21.514 17:00:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:21.514 17:00:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:21.514 17:00:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:21.514 17:00:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:21.514 17:00:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:21.514 17:00:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:21.514 17:00:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:21.514 17:00:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:21.514 17:00:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:21.514 17:00:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:21.514 17:00:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:21.514 17:00:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:21.514 17:00:19 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:21.514 17:00:19 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:21.514 17:00:19 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:21.514 17:00:19 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.514 17:00:19 
nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.514 17:00:19 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.514 17:00:19 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:11:21.514 17:00:19 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.514 17:00:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:11:21.514 17:00:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:21.514 17:00:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:21.514 17:00:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:21.514 17:00:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:21.514 17:00:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:21.514 17:00:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:21.514 17:00:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:21.514 17:00:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:21.514 17:00:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:21.514 17:00:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:21.514 17:00:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:11:21.514 17:00:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:21.514 17:00:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:21.514 17:00:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:21.514 17:00:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:21.514 17:00:19 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:21.514 17:00:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.514 17:00:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:21.514 17:00:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.514 17:00:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:21.514 17:00:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:21.514 17:00:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:11:21.514 17:00:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:22.450 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:22.450 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:22.450 Found net devices under 0000:84:00.0: cvl_0_0 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:22.450 Found net devices under 0000:84:00.1: cvl_0_1 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:22.450 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:22.708 17:00:22 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:22.708 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:22.708 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:22.708 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:22.708 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:11:22.708 00:11:22.708 --- 10.0.0.2 ping statistics --- 00:11:22.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.708 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:11:22.708 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:22.708 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:22.708 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:11:22.708 00:11:22.708 --- 10.0.0.1 ping statistics --- 00:11:22.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.708 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:11:22.708 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:22.708 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:11:22.708 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:22.708 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:22.708 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:22.708 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:22.708 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:22.708 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:22.708 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:22.708 17:00:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:11:22.708 17:00:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:11:22.708 17:00:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:11:22.708 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:22.708 17:00:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:22.708 17:00:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:22.708 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1079262 00:11:22.708 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:11:22.708 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1079262 00:11:22.708 17:00:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1079262 ']' 00:11:22.708 17:00:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.708 17:00:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:22.708 17:00:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:11:22.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.708 17:00:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:22.708 17:00:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:22.708 [2024-07-12 17:00:22.243661] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:11:22.708 [2024-07-12 17:00:22.243728] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:22.708 EAL: No free 2048 kB hugepages reported on node 1 00:11:22.708 [2024-07-12 17:00:22.303403] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:22.967 [2024-07-12 17:00:22.407165] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:22.967 [2024-07-12 17:00:22.407219] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:22.967 [2024-07-12 17:00:22.407243] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:22.967 [2024-07-12 17:00:22.407254] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:22.967 [2024-07-12 17:00:22.407263] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:22.967 [2024-07-12 17:00:22.407343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:22.967 [2024-07-12 17:00:22.407404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:22.967 [2024-07-12 17:00:22.407473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:22.967 [2024-07-12 17:00:22.407476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:22.967 17:00:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:22.967 17:00:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:11:22.967 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:22.967 17:00:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:22.967 17:00:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:22.967 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:22.967 17:00:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:22.967 17:00:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.967 17:00:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:22.967 [2024-07-12 17:00:22.562356] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:22.967 17:00:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.967 17:00:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:11:22.967 17:00:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:22.967 17:00:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:22.967 17:00:22 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:22.967 17:00:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:11:22.967 17:00:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:11:22.967 17:00:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.967 17:00:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:22.967 Malloc0 00:11:22.967 [2024-07-12 17:00:22.623358] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:22.967 17:00:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.967 17:00:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:11:22.967 17:00:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:22.967 17:00:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:22.967 17:00:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1079308 00:11:22.967 17:00:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1079308 /var/tmp/bdevperf.sock 00:11:22.967 17:00:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1079308 ']' 00:11:22.967 17:00:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:22.967 17:00:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:11:22.967 17:00:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:11:22.967 17:00:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:22.967 17:00:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:22.967 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:11:22.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:11:22.967 17:00:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:22.967 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:11:22.967 17:00:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:22.967 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:22.967 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:22.967 { 00:11:22.967 "params": { 00:11:22.967 "name": "Nvme$subsystem", 00:11:22.967 "trtype": "$TEST_TRANSPORT", 00:11:22.967 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:22.967 "adrfam": "ipv4", 00:11:22.967 "trsvcid": "$NVMF_PORT", 00:11:22.967 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:22.967 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:22.967 "hdgst": ${hdgst:-false}, 00:11:22.967 "ddgst": ${ddgst:-false} 00:11:22.967 }, 00:11:22.967 "method": "bdev_nvme_attach_controller" 00:11:22.967 } 00:11:22.967 EOF 00:11:22.967 )") 00:11:23.225 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:11:23.225 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:11:23.225 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:11:23.225 17:00:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:23.225 "params": { 00:11:23.225 "name": "Nvme0", 00:11:23.225 "trtype": "tcp", 00:11:23.225 "traddr": "10.0.0.2", 00:11:23.225 "adrfam": "ipv4", 00:11:23.225 "trsvcid": "4420", 00:11:23.225 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:23.226 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:23.226 "hdgst": false, 00:11:23.226 "ddgst": false 00:11:23.226 }, 00:11:23.226 "method": "bdev_nvme_attach_controller" 00:11:23.226 }' 00:11:23.226 [2024-07-12 17:00:22.701839] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:11:23.226 [2024-07-12 17:00:22.701932] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1079308 ] 00:11:23.226 EAL: No free 2048 kB hugepages reported on node 1 00:11:23.226 [2024-07-12 17:00:22.763718] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.226 [2024-07-12 17:00:22.876035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.485 Running I/O for 10 seconds... 
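For reference, the config that gen_nvmf_target_json pipes to bdevperf over /dev/fd/63 above boils down to the single bdev_nvme_attach_controller entry printed in the trace. Below is a minimal standalone sketch of the same run; the target is assumed to still be listening on 10.0.0.2:4420, the bdevperf path and flags are the ones shown in the trace, and the outer "subsystems"/"bdev" wrapper is an assumption, since only the inner entry is printed in the log.

#!/usr/bin/env bash
# Sketch only: rerun the verify workload that host_management.sh drives above.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# The attach entry matches the one printed by the trace; the surrounding
# "subsystems"/"bdev" wrapper is assumed, not shown in the log. The temp file
# name is arbitrary.
cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# Queue depth 64, 64 KiB I/O, verify workload, 10 seconds, same flags as the trace.
"$SPDK_DIR/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
    --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 10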
00:11:24.050 17:00:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:24.050 17:00:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:11:24.050 17:00:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:11:24.050 17:00:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.050 17:00:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:24.050 17:00:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.050 17:00:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:24.050 17:00:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:11:24.050 17:00:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:11:24.050 17:00:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:11:24.050 17:00:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:11:24.050 17:00:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:11:24.050 17:00:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:11:24.050 17:00:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:24.050 17:00:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:24.050 17:00:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:24.050 17:00:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.050 17:00:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:24.050 17:00:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.050 17:00:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=963 00:11:24.050 17:00:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 963 -ge 100 ']' 00:11:24.050 17:00:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:11:24.050 17:00:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:11:24.050 17:00:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:11:24.050 17:00:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:24.050 17:00:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.050 17:00:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:24.050 [2024-07-12 17:00:23.739359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.050 [2024-07-12 17:00:23.739416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.050 [2024-07-12 17:00:23.739444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3712 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.050 [2024-07-12 17:00:23.739469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.050 [2024-07-12 17:00:23.739486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.739501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.739517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.739533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.739560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.739574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.739590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.739604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.739619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.739632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.739648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.739662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.739677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.739692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.739708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.739723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.739745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.739761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.739785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4992 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.739799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.739815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.739828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.739844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.739857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.739877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.739891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.739907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.739921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.739936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.739950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.739966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.739981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.739997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.740011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.740039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.740053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.740069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.740083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.740098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:11:24.051 [2024-07-12 17:00:23.740112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.740129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:6400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.740143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.740158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.740173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.740189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.740215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.740232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.740248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.740264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.740287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.740304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.740318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.740334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.740349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.740373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.740388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.740405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.740420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.740436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 
17:00:23.740451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.740467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.740482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.740498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.740513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.740529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.740544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.740560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.740575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.740591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:0 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.740607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.740623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.740638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.740654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.740669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.740690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.740707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.740723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.740744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.740763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.740786] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.740802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.740818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.740835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.740851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.740867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.740882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.740899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.740915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.740932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.740947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.740964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.740979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.740995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.741011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.741026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.741040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.741056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.741072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.741089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.741107] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.741125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.741141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.741157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.741173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.741190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:2304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.741205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.741221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.741236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.741253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.741268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.741285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.741300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.741317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.741332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.741348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.741363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.741380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.741395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.741412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:3200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.741427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.741443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.741457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.741473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.051 [2024-07-12 17:00:23.741494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.051 [2024-07-12 17:00:23.741601] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc62a10 was disconnected and freed. reset controller. 00:11:24.051 [2024-07-12 17:00:23.742781] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:11:24.310 17:00:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.310 17:00:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:24.310 17:00:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.310 17:00:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:24.310 task offset: 3584 on job bdev=Nvme0n1 fails 00:11:24.310 00:11:24.310 Latency(us) 00:11:24.310 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:24.310 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:24.310 Job: Nvme0n1 ended in about 0.68 seconds with error 00:11:24.310 Verification LBA range: start 0x0 length 0x400 00:11:24.310 Nvme0n1 : 0.68 1499.47 93.72 93.72 0.00 39387.59 2852.03 34175.81 00:11:24.310 =================================================================================================================== 00:11:24.310 Total : 1499.47 93.72 93.72 0.00 39387.59 2852.03 34175.81 00:11:24.310 [2024-07-12 17:00:23.744680] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:24.310 [2024-07-12 17:00:23.744710] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x831540 (9): Bad file descriptor 00:11:24.310 [2024-07-12 17:00:23.746607] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:11:24.310 [2024-07-12 17:00:23.746716] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:11:24.310 [2024-07-12 17:00:23.746764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.310 [2024-07-12 17:00:23.746788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:11:24.310 [2024-07-12 17:00:23.746804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:11:24.310 [2024-07-12 17:00:23.746817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:11:24.310 [2024-07-12 17:00:23.746830] 
nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x831540 00:11:24.310 [2024-07-12 17:00:23.746863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x831540 (9): Bad file descriptor 00:11:24.310 [2024-07-12 17:00:23.746888] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:11:24.310 [2024-07-12 17:00:23.746903] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:11:24.310 [2024-07-12 17:00:23.746920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:11:24.310 [2024-07-12 17:00:23.746940] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:11:24.310 17:00:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.310 17:00:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:11:25.316 17:00:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1079308 00:11:25.316 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1079308) - No such process 00:11:25.316 17:00:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:11:25.316 17:00:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:11:25.316 17:00:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:11:25.316 17:00:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:11:25.316 17:00:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:11:25.317 17:00:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:11:25.317 17:00:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:25.317 17:00:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:25.317 { 00:11:25.317 "params": { 00:11:25.317 "name": "Nvme$subsystem", 00:11:25.317 "trtype": "$TEST_TRANSPORT", 00:11:25.317 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:25.317 "adrfam": "ipv4", 00:11:25.317 "trsvcid": "$NVMF_PORT", 00:11:25.317 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:25.317 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:25.317 "hdgst": ${hdgst:-false}, 00:11:25.317 "ddgst": ${ddgst:-false} 00:11:25.317 }, 00:11:25.317 "method": "bdev_nvme_attach_controller" 00:11:25.317 } 00:11:25.317 EOF 00:11:25.317 )") 00:11:25.317 17:00:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:11:25.317 17:00:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
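The connect failure logged above is the intended outcome of this test step: removing the host NQN from the subsystem tears down its queue pairs, aborts the outstanding I/O seen in the dump, and rejects reconnect attempts until the host is added back. A condensed sketch of that RPC pair, using the NQNs and rpc.py path from the trace and assuming the target's default RPC socket /var/tmp/spdk.sock:

#!/usr/bin/env bash
# Sketch of the host-management step exercised above (names taken from the trace).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subsys=nqn.2016-06.io.spdk:cnode0
host=nqn.2016-06.io.spdk:host0

# Disallow the host: its qpairs are disconnected, in-flight commands complete
# with "ABORTED - SQ DELETION", and new CONNECTs get "does not allow host".
"$rpc" nvmf_subsystem_remove_host "$subsys" "$host"

# Allow the host again so the next bdevperf run can attach and verify I/O.
"$rpc" nvmf_subsystem_add_host "$subsys" "$host"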
00:11:25.317 17:00:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:11:25.317 17:00:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:25.317 "params": { 00:11:25.317 "name": "Nvme0", 00:11:25.317 "trtype": "tcp", 00:11:25.317 "traddr": "10.0.0.2", 00:11:25.317 "adrfam": "ipv4", 00:11:25.317 "trsvcid": "4420", 00:11:25.317 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:25.317 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:25.317 "hdgst": false, 00:11:25.317 "ddgst": false 00:11:25.317 }, 00:11:25.317 "method": "bdev_nvme_attach_controller" 00:11:25.317 }' 00:11:25.317 [2024-07-12 17:00:24.801942] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:11:25.317 [2024-07-12 17:00:24.802044] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1079586 ] 00:11:25.317 EAL: No free 2048 kB hugepages reported on node 1 00:11:25.317 [2024-07-12 17:00:24.864071] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.317 [2024-07-12 17:00:24.977761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.575 Running I/O for 1 seconds... 00:11:26.950 00:11:26.950 Latency(us) 00:11:26.950 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:26.950 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:26.950 Verification LBA range: start 0x0 length 0x400 00:11:26.950 Nvme0n1 : 1.03 1611.83 100.74 0.00 0.00 39076.39 7281.78 33981.63 00:11:26.950 =================================================================================================================== 00:11:26.950 Total : 1611.83 100.74 0.00 0.00 39076.39 7281.78 33981.63 00:11:26.950 17:00:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:11:26.950 17:00:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:11:26.950 17:00:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:11:26.950 17:00:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:26.950 17:00:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:11:26.950 17:00:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:26.950 17:00:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:11:26.950 17:00:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:26.950 17:00:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:11:26.950 17:00:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:26.950 17:00:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:26.950 rmmod nvme_tcp 00:11:26.950 rmmod nvme_fabrics 00:11:26.950 rmmod nvme_keyring 00:11:26.950 17:00:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:26.950 17:00:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:11:26.950 17:00:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:11:26.950 17:00:26 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@489 -- # '[' -n 1079262 ']' 00:11:26.950 17:00:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1079262 00:11:26.950 17:00:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 1079262 ']' 00:11:26.950 17:00:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 1079262 00:11:26.950 17:00:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:11:26.950 17:00:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:26.950 17:00:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1079262 00:11:26.950 17:00:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:26.950 17:00:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:26.950 17:00:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1079262' 00:11:26.950 killing process with pid 1079262 00:11:26.950 17:00:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 1079262 00:11:26.950 17:00:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 1079262 00:11:27.209 [2024-07-12 17:00:26.895289] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:11:27.467 17:00:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:27.467 17:00:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:27.467 17:00:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:27.467 17:00:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:27.467 17:00:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:27.467 17:00:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.467 17:00:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:27.467 17:00:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.373 17:00:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:29.373 17:00:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:11:29.373 00:11:29.373 real 0m9.138s 00:11:29.373 user 0m21.301s 00:11:29.373 sys 0m2.944s 00:11:29.373 17:00:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:29.373 17:00:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:29.373 ************************************ 00:11:29.373 END TEST nvmf_host_management 00:11:29.373 ************************************ 00:11:29.373 17:00:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:29.373 17:00:28 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:29.373 17:00:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:29.373 17:00:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:29.373 17:00:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:29.373 ************************************ 00:11:29.373 START TEST nvmf_lvol 00:11:29.373 
************************************ 00:11:29.373 17:00:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:29.631 * Looking for test storage... 00:11:29.631 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:29.631 17:00:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:29.631 17:00:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:11:29.631 17:00:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:29.631 17:00:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:29.631 17:00:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:29.631 17:00:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:29.631 17:00:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:29.631 17:00:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:29.631 17:00:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:29.631 17:00:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:29.631 17:00:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:29.631 17:00:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:29.631 17:00:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:29.631 17:00:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:29.631 17:00:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:29.631 17:00:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:29.631 17:00:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:29.631 17:00:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:29.631 17:00:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:29.631 17:00:29 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:29.631 17:00:29 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:29.631 17:00:29 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:29.631 17:00:29 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.631 17:00:29 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.631 17:00:29 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.631 17:00:29 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:11:29.631 17:00:29 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.631 17:00:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:11:29.631 17:00:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:29.631 17:00:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:29.631 17:00:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:29.631 17:00:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:29.631 17:00:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:29.631 17:00:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:29.631 17:00:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:29.631 17:00:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:29.631 17:00:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:29.631 17:00:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:29.631 17:00:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:11:29.631 17:00:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:11:29.631 17:00:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:29.631 17:00:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:11:29.631 17:00:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:29.631 17:00:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:29.631 17:00:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:29.631 17:00:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # 
local -g is_hw=no 00:11:29.631 17:00:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:29.631 17:00:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.631 17:00:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:29.631 17:00:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.631 17:00:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:29.631 17:00:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:29.631 17:00:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:11:29.632 17:00:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:31.532 17:00:31 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:31.532 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:31.532 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:31.532 Found net devices under 0000:84:00.0: cvl_0_0 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:31.532 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:31.533 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:11:31.533 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:31.533 Found net devices under 0000:84:00.1: cvl_0_1 00:11:31.533 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:31.533 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:31.533 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:11:31.533 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:31.533 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:31.533 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:31.533 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:31.533 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:31.533 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:31.533 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:31.533 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:31.533 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:31.533 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:31.533 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:31.533 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:31.533 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:31.533 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:31.533 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:31.533 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:31.792 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:31.793 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:31.793 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:31.793 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:31.793 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:31.793 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:31.793 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:31.793 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:31.793 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:11:31.793 00:11:31.793 --- 10.0.0.2 ping statistics --- 00:11:31.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.793 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:11:31.793 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:31.793 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:31.793 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:11:31.793 00:11:31.793 --- 10.0.0.1 ping statistics --- 00:11:31.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.793 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:11:31.793 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:31.793 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:11:31.793 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:31.793 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:31.793 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:31.793 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:31.793 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:31.793 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:31.793 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:31.793 17:00:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:11:31.793 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:31.793 17:00:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:31.793 17:00:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:31.793 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1081805 00:11:31.793 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:31.793 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1081805 00:11:31.793 17:00:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 1081805 ']' 00:11:31.793 17:00:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:31.793 17:00:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:31.793 17:00:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:31.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:31.793 17:00:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:31.793 17:00:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:31.793 [2024-07-12 17:00:31.409849] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:11:31.793 [2024-07-12 17:00:31.409922] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:31.793 EAL: No free 2048 kB hugepages reported on node 1 00:11:31.793 [2024-07-12 17:00:31.473328] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:32.051 [2024-07-12 17:00:31.585060] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:32.051 [2024-07-12 17:00:31.585126] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:32.051 [2024-07-12 17:00:31.585154] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:32.051 [2024-07-12 17:00:31.585166] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:32.051 [2024-07-12 17:00:31.585176] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:32.051 [2024-07-12 17:00:31.585266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:32.051 [2024-07-12 17:00:31.585332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:32.051 [2024-07-12 17:00:31.585336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.051 17:00:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:32.051 17:00:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:11:32.051 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:32.051 17:00:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:32.051 17:00:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:32.051 17:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:32.051 17:00:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:32.308 [2024-07-12 17:00:31.950645] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:32.308 17:00:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:32.877 17:00:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:11:32.877 17:00:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:33.134 17:00:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:11:33.134 17:00:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:33.390 17:00:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:11:33.646 17:00:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=c9c47eed-f088-44d3-867d-cf74e21b8168 00:11:33.646 17:00:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c9c47eed-f088-44d3-867d-cf74e21b8168 lvol 20 00:11:33.903 17:00:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=69386f6e-746f-40de-9e4b-22da6a4b7115 00:11:33.903 17:00:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:34.159 17:00:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 69386f6e-746f-40de-9e4b-22da6a4b7115 00:11:34.424 17:00:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:11:34.680 [2024-07-12 17:00:34.217937] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:34.680 17:00:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:34.938 17:00:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1082231 00:11:34.938 17:00:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:11:34.938 17:00:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:11:34.938 EAL: No free 2048 kB hugepages reported on node 1 00:11:35.872 17:00:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 69386f6e-746f-40de-9e4b-22da6a4b7115 MY_SNAPSHOT 00:11:36.441 17:00:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=b5cee51e-d6c7-4778-a90c-9a2be935b622 00:11:36.441 17:00:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 69386f6e-746f-40de-9e4b-22da6a4b7115 30 00:11:36.698 17:00:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone b5cee51e-d6c7-4778-a90c-9a2be935b622 MY_CLONE 00:11:36.955 17:00:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=e35a590a-0a38-4fc8-847e-8d5d47018389 00:11:36.955 17:00:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate e35a590a-0a38-4fc8-847e-8d5d47018389 00:11:37.522 17:00:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1082231 00:11:45.642 Initializing NVMe Controllers 00:11:45.642 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:45.642 Controller IO queue size 128, less than required. 00:11:45.642 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:45.642 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:11:45.642 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:11:45.642 Initialization complete. Launching workers. 
00:11:45.642 ======================================================== 00:11:45.642 Latency(us) 00:11:45.642 Device Information : IOPS MiB/s Average min max 00:11:45.642 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10472.00 40.91 12229.50 287.08 90257.60 00:11:45.642 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10442.80 40.79 12262.92 2259.67 67787.72 00:11:45.642 ======================================================== 00:11:45.642 Total : 20914.80 81.70 12246.19 287.08 90257.60 00:11:45.642 00:11:45.642 17:00:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:45.642 17:00:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 69386f6e-746f-40de-9e4b-22da6a4b7115 00:11:45.900 17:00:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c9c47eed-f088-44d3-867d-cf74e21b8168 00:11:46.159 17:00:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:11:46.159 17:00:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:11:46.159 17:00:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:11:46.159 17:00:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:46.159 17:00:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:11:46.159 17:00:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:46.159 17:00:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:11:46.159 17:00:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:46.159 17:00:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:46.159 rmmod nvme_tcp 00:11:46.159 rmmod nvme_fabrics 00:11:46.418 rmmod nvme_keyring 00:11:46.418 17:00:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:46.418 17:00:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:11:46.418 17:00:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:11:46.418 17:00:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1081805 ']' 00:11:46.418 17:00:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1081805 00:11:46.418 17:00:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 1081805 ']' 00:11:46.418 17:00:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 1081805 00:11:46.418 17:00:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:11:46.418 17:00:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:46.418 17:00:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1081805 00:11:46.418 17:00:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:46.418 17:00:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:46.418 17:00:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1081805' 00:11:46.418 killing process with pid 1081805 00:11:46.418 17:00:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 1081805 00:11:46.418 17:00:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 1081805 00:11:46.676 17:00:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:46.676 
17:00:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:46.676 17:00:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:46.676 17:00:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:46.676 17:00:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:46.676 17:00:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.676 17:00:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:46.676 17:00:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:49.205 17:00:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:49.205 00:11:49.205 real 0m19.260s 00:11:49.205 user 1m5.380s 00:11:49.205 sys 0m6.016s 00:11:49.205 17:00:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:49.205 17:00:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:49.205 ************************************ 00:11:49.205 END TEST nvmf_lvol 00:11:49.205 ************************************ 00:11:49.205 17:00:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:49.205 17:00:48 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:49.205 17:00:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:49.205 17:00:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:49.205 17:00:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:49.205 ************************************ 00:11:49.205 START TEST nvmf_lvs_grow 00:11:49.205 ************************************ 00:11:49.205 17:00:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:49.205 * Looking for test storage... 
00:11:49.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:49.205 17:00:48 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:49.205 17:00:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:11:49.205 17:00:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:49.205 17:00:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:49.205 17:00:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:49.205 17:00:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:49.205 17:00:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:49.205 17:00:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:49.205 17:00:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:49.205 17:00:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:49.205 17:00:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:49.205 17:00:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:49.205 17:00:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:49.205 17:00:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:49.205 17:00:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:49.205 17:00:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:49.205 17:00:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:49.205 17:00:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:49.205 17:00:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:49.205 17:00:48 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:49.205 17:00:48 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:49.205 17:00:48 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:49.205 17:00:48 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.205 17:00:48 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.205 17:00:48 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.205 17:00:48 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:11:49.205 17:00:48 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.205 17:00:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:11:49.205 17:00:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:49.205 17:00:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:49.205 17:00:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:49.205 17:00:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:49.205 17:00:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:49.205 17:00:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:49.205 17:00:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:49.205 17:00:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:49.205 17:00:48 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:49.205 17:00:48 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:49.205 17:00:48 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:11:49.205 17:00:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:49.205 17:00:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:49.205 17:00:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:49.205 17:00:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:49.205 17:00:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:49.205 17:00:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.205 17:00:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:49.205 17:00:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:49.205 17:00:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:49.206 17:00:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:49.206 17:00:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:11:49.206 17:00:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:51.110 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:51.110 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:11:51.110 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:51.110 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:51.110 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:51.110 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:51.111 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:51.111 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:51.111 Found net devices under 0000:84:00.0: cvl_0_0 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:51.111 Found net devices under 0000:84:00.1: cvl_0_1 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:51.111 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:51.111 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:11:51.111 00:11:51.111 --- 10.0.0.2 ping statistics --- 00:11:51.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.111 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:51.111 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:51.111 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:11:51.111 00:11:51.111 --- 10.0.0.1 ping statistics --- 00:11:51.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.111 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1085511 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1085511 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 1085511 ']' 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:51.111 17:00:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:51.111 [2024-07-12 17:00:50.711129] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:11:51.111 [2024-07-12 17:00:50.711208] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:51.111 EAL: No free 2048 kB hugepages reported on node 1 00:11:51.111 [2024-07-12 17:00:50.777700] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.369 [2024-07-12 17:00:50.889373] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:51.370 [2024-07-12 17:00:50.889433] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:51.370 [2024-07-12 17:00:50.889447] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:51.370 [2024-07-12 17:00:50.889459] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:51.370 [2024-07-12 17:00:50.889468] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:51.370 [2024-07-12 17:00:50.889494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.370 17:00:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:51.370 17:00:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:11:51.370 17:00:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:51.370 17:00:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:51.370 17:00:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:51.370 17:00:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:51.370 17:00:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:51.627 [2024-07-12 17:00:51.297133] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:51.627 17:00:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:11:51.627 17:00:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:51.627 17:00:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:51.627 17:00:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:51.886 ************************************ 00:11:51.886 START TEST lvs_grow_clean 00:11:51.886 ************************************ 00:11:51.887 17:00:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:11:51.887 17:00:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:51.887 17:00:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:51.887 17:00:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:51.887 17:00:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:51.887 17:00:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:51.887 17:00:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:51.887 17:00:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:51.887 17:00:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:51.887 17:00:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:52.147 17:00:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:11:52.147 17:00:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:52.408 17:00:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=03c6bf5f-dae4-4947-8abb-617d01f77de7 00:11:52.408 17:00:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 03c6bf5f-dae4-4947-8abb-617d01f77de7 00:11:52.408 17:00:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:52.679 17:00:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:52.679 17:00:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:52.679 17:00:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 03c6bf5f-dae4-4947-8abb-617d01f77de7 lvol 150 00:11:52.679 17:00:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=39dfcf8f-5481-4900-ab0e-8ec7ea59f33c 00:11:52.679 17:00:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:52.679 17:00:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:52.936 [2024-07-12 17:00:52.589885] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:52.936 [2024-07-12 17:00:52.589974] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:52.936 true 00:11:52.936 17:00:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 03c6bf5f-dae4-4947-8abb-617d01f77de7 00:11:52.936 17:00:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:53.203 17:00:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:53.203 17:00:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:53.513 17:00:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 39dfcf8f-5481-4900-ab0e-8ec7ea59f33c 00:11:53.787 17:00:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:54.043 [2024-07-12 17:00:53.580886] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:54.043 17:00:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:54.299 17:00:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1085953 00:11:54.299 17:00:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:54.299 17:00:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1085953 /var/tmp/bdevperf.sock 00:11:54.299 17:00:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:54.299 17:00:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 1085953 ']' 00:11:54.299 17:00:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:54.299 17:00:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:54.299 17:00:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:54.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:54.299 17:00:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:54.299 17:00:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:54.299 [2024-07-12 17:00:53.894177] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
00:11:54.299 [2024-07-12 17:00:53.894260] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1085953 ] 00:11:54.299 EAL: No free 2048 kB hugepages reported on node 1 00:11:54.299 [2024-07-12 17:00:53.953295] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.556 [2024-07-12 17:00:54.063208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:54.556 17:00:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:54.556 17:00:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:11:54.556 17:00:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:55.121 Nvme0n1 00:11:55.121 17:00:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:55.121 [ 00:11:55.121 { 00:11:55.121 "name": "Nvme0n1", 00:11:55.121 "aliases": [ 00:11:55.121 "39dfcf8f-5481-4900-ab0e-8ec7ea59f33c" 00:11:55.121 ], 00:11:55.121 "product_name": "NVMe disk", 00:11:55.121 "block_size": 4096, 00:11:55.121 "num_blocks": 38912, 00:11:55.121 "uuid": "39dfcf8f-5481-4900-ab0e-8ec7ea59f33c", 00:11:55.121 "assigned_rate_limits": { 00:11:55.121 "rw_ios_per_sec": 0, 00:11:55.121 "rw_mbytes_per_sec": 0, 00:11:55.121 "r_mbytes_per_sec": 0, 00:11:55.121 "w_mbytes_per_sec": 0 00:11:55.121 }, 00:11:55.121 "claimed": false, 00:11:55.121 "zoned": false, 00:11:55.121 "supported_io_types": { 00:11:55.121 "read": true, 00:11:55.121 "write": true, 00:11:55.121 "unmap": true, 00:11:55.121 "flush": true, 00:11:55.121 "reset": true, 00:11:55.121 "nvme_admin": true, 00:11:55.121 "nvme_io": true, 00:11:55.121 "nvme_io_md": false, 00:11:55.121 "write_zeroes": true, 00:11:55.121 "zcopy": false, 00:11:55.121 "get_zone_info": false, 00:11:55.121 "zone_management": false, 00:11:55.121 "zone_append": false, 00:11:55.121 "compare": true, 00:11:55.121 "compare_and_write": true, 00:11:55.121 "abort": true, 00:11:55.121 "seek_hole": false, 00:11:55.121 "seek_data": false, 00:11:55.121 "copy": true, 00:11:55.121 "nvme_iov_md": false 00:11:55.121 }, 00:11:55.121 "memory_domains": [ 00:11:55.121 { 00:11:55.121 "dma_device_id": "system", 00:11:55.121 "dma_device_type": 1 00:11:55.121 } 00:11:55.121 ], 00:11:55.121 "driver_specific": { 00:11:55.121 "nvme": [ 00:11:55.121 { 00:11:55.121 "trid": { 00:11:55.121 "trtype": "TCP", 00:11:55.121 "adrfam": "IPv4", 00:11:55.121 "traddr": "10.0.0.2", 00:11:55.121 "trsvcid": "4420", 00:11:55.121 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:55.121 }, 00:11:55.121 "ctrlr_data": { 00:11:55.121 "cntlid": 1, 00:11:55.121 "vendor_id": "0x8086", 00:11:55.121 "model_number": "SPDK bdev Controller", 00:11:55.121 "serial_number": "SPDK0", 00:11:55.121 "firmware_revision": "24.09", 00:11:55.121 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:55.121 "oacs": { 00:11:55.121 "security": 0, 00:11:55.121 "format": 0, 00:11:55.121 "firmware": 0, 00:11:55.121 "ns_manage": 0 00:11:55.121 }, 00:11:55.121 "multi_ctrlr": true, 00:11:55.121 "ana_reporting": false 00:11:55.121 }, 
00:11:55.121 "vs": { 00:11:55.121 "nvme_version": "1.3" 00:11:55.121 }, 00:11:55.121 "ns_data": { 00:11:55.121 "id": 1, 00:11:55.121 "can_share": true 00:11:55.121 } 00:11:55.121 } 00:11:55.121 ], 00:11:55.121 "mp_policy": "active_passive" 00:11:55.121 } 00:11:55.121 } 00:11:55.121 ] 00:11:55.121 17:00:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1086087 00:11:55.121 17:00:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:55.121 17:00:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:55.379 Running I/O for 10 seconds... 00:11:56.310 Latency(us) 00:11:56.310 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:56.310 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:56.310 Nvme0n1 : 1.00 16575.00 64.75 0.00 0.00 0.00 0.00 0.00 00:11:56.310 =================================================================================================================== 00:11:56.310 Total : 16575.00 64.75 0.00 0.00 0.00 0.00 0.00 00:11:56.310 00:11:57.241 17:00:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 03c6bf5f-dae4-4947-8abb-617d01f77de7 00:11:57.242 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:57.242 Nvme0n1 : 2.00 16815.00 65.68 0.00 0.00 0.00 0.00 0.00 00:11:57.242 =================================================================================================================== 00:11:57.242 Total : 16815.00 65.68 0.00 0.00 0.00 0.00 0.00 00:11:57.242 00:11:57.499 true 00:11:57.499 17:00:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 03c6bf5f-dae4-4947-8abb-617d01f77de7 00:11:57.499 17:00:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:57.755 17:00:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:57.755 17:00:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:57.755 17:00:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1086087 00:11:58.318 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:58.318 Nvme0n1 : 3.00 16969.33 66.29 0.00 0.00 0.00 0.00 0.00 00:11:58.318 =================================================================================================================== 00:11:58.318 Total : 16969.33 66.29 0.00 0.00 0.00 0.00 0.00 00:11:58.318 00:11:59.257 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:59.257 Nvme0n1 : 4.00 17124.50 66.89 0.00 0.00 0.00 0.00 0.00 00:11:59.257 =================================================================================================================== 00:11:59.257 Total : 17124.50 66.89 0.00 0.00 0.00 0.00 0.00 00:11:59.257 00:12:00.637 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:00.637 Nvme0n1 : 5.00 17184.40 67.13 0.00 0.00 0.00 0.00 0.00 00:12:00.637 =================================================================================================================== 00:12:00.637 
Total : 17184.40 67.13 0.00 0.00 0.00 0.00 0.00 00:12:00.637 00:12:01.569 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:01.569 Nvme0n1 : 6.00 17252.50 67.39 0.00 0.00 0.00 0.00 0.00 00:12:01.569 =================================================================================================================== 00:12:01.569 Total : 17252.50 67.39 0.00 0.00 0.00 0.00 0.00 00:12:01.569 00:12:02.502 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:02.502 Nvme0n1 : 7.00 17301.00 67.58 0.00 0.00 0.00 0.00 0.00 00:12:02.502 =================================================================================================================== 00:12:02.502 Total : 17301.00 67.58 0.00 0.00 0.00 0.00 0.00 00:12:02.502 00:12:03.431 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:03.431 Nvme0n1 : 8.00 17377.25 67.88 0.00 0.00 0.00 0.00 0.00 00:12:03.431 =================================================================================================================== 00:12:03.431 Total : 17377.25 67.88 0.00 0.00 0.00 0.00 0.00 00:12:03.431 00:12:04.359 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:04.359 Nvme0n1 : 9.00 17433.33 68.10 0.00 0.00 0.00 0.00 0.00 00:12:04.359 =================================================================================================================== 00:12:04.359 Total : 17433.33 68.10 0.00 0.00 0.00 0.00 0.00 00:12:04.359 00:12:05.289 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:05.289 Nvme0n1 : 10.00 17468.10 68.23 0.00 0.00 0.00 0.00 0.00 00:12:05.289 =================================================================================================================== 00:12:05.289 Total : 17468.10 68.23 0.00 0.00 0.00 0.00 0.00 00:12:05.289 00:12:05.289 00:12:05.289 Latency(us) 00:12:05.289 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:05.289 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:05.289 Nvme0n1 : 10.01 17470.00 68.24 0.00 0.00 7323.21 3907.89 14466.47 00:12:05.289 =================================================================================================================== 00:12:05.289 Total : 17470.00 68.24 0.00 0.00 7323.21 3907.89 14466.47 00:12:05.289 0 00:12:05.289 17:01:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1085953 00:12:05.289 17:01:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 1085953 ']' 00:12:05.289 17:01:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 1085953 00:12:05.289 17:01:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:12:05.289 17:01:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:05.289 17:01:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1085953 00:12:05.289 17:01:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:05.289 17:01:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:05.289 17:01:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1085953' 00:12:05.289 killing process with pid 1085953 00:12:05.289 17:01:04 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 1085953 00:12:05.289 Received shutdown signal, test time was about 10.000000 seconds 00:12:05.289 00:12:05.289 Latency(us) 00:12:05.289 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:05.289 =================================================================================================================== 00:12:05.289 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:05.289 17:01:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 1085953 00:12:05.547 17:01:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:06.112 17:01:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:06.369 17:01:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 03c6bf5f-dae4-4947-8abb-617d01f77de7 00:12:06.369 17:01:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:06.626 17:01:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:06.626 17:01:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:12:06.626 17:01:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:06.883 [2024-07-12 17:01:06.323476] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:06.883 17:01:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 03c6bf5f-dae4-4947-8abb-617d01f77de7 00:12:06.883 17:01:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:12:06.883 17:01:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 03c6bf5f-dae4-4947-8abb-617d01f77de7 00:12:06.883 17:01:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:06.883 17:01:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:06.883 17:01:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:06.883 17:01:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:06.883 17:01:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:06.883 17:01:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:06.883 17:01:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:06.883 17:01:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:06.883 17:01:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 03c6bf5f-dae4-4947-8abb-617d01f77de7 00:12:07.140 request: 00:12:07.140 { 00:12:07.140 "uuid": "03c6bf5f-dae4-4947-8abb-617d01f77de7", 00:12:07.140 "method": "bdev_lvol_get_lvstores", 00:12:07.140 "req_id": 1 00:12:07.140 } 00:12:07.140 Got JSON-RPC error response 00:12:07.140 response: 00:12:07.140 { 00:12:07.140 "code": -19, 00:12:07.140 "message": "No such device" 00:12:07.140 } 00:12:07.140 17:01:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:12:07.141 17:01:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:07.141 17:01:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:07.141 17:01:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:07.141 17:01:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:07.398 aio_bdev 00:12:07.398 17:01:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 39dfcf8f-5481-4900-ab0e-8ec7ea59f33c 00:12:07.398 17:01:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=39dfcf8f-5481-4900-ab0e-8ec7ea59f33c 00:12:07.398 17:01:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:07.398 17:01:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:12:07.398 17:01:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:07.398 17:01:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:07.398 17:01:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:07.656 17:01:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 39dfcf8f-5481-4900-ab0e-8ec7ea59f33c -t 2000 00:12:07.914 [ 00:12:07.914 { 00:12:07.914 "name": "39dfcf8f-5481-4900-ab0e-8ec7ea59f33c", 00:12:07.914 "aliases": [ 00:12:07.914 "lvs/lvol" 00:12:07.914 ], 00:12:07.914 "product_name": "Logical Volume", 00:12:07.914 "block_size": 4096, 00:12:07.914 "num_blocks": 38912, 00:12:07.914 "uuid": "39dfcf8f-5481-4900-ab0e-8ec7ea59f33c", 00:12:07.914 "assigned_rate_limits": { 00:12:07.914 "rw_ios_per_sec": 0, 00:12:07.914 "rw_mbytes_per_sec": 0, 00:12:07.914 "r_mbytes_per_sec": 0, 00:12:07.914 "w_mbytes_per_sec": 0 00:12:07.914 }, 00:12:07.914 "claimed": false, 00:12:07.914 "zoned": false, 00:12:07.914 "supported_io_types": { 00:12:07.914 "read": true, 00:12:07.914 "write": true, 00:12:07.914 "unmap": true, 00:12:07.914 "flush": false, 00:12:07.914 "reset": true, 00:12:07.914 "nvme_admin": false, 00:12:07.914 "nvme_io": false, 00:12:07.914 
"nvme_io_md": false, 00:12:07.914 "write_zeroes": true, 00:12:07.914 "zcopy": false, 00:12:07.914 "get_zone_info": false, 00:12:07.914 "zone_management": false, 00:12:07.914 "zone_append": false, 00:12:07.914 "compare": false, 00:12:07.914 "compare_and_write": false, 00:12:07.914 "abort": false, 00:12:07.914 "seek_hole": true, 00:12:07.914 "seek_data": true, 00:12:07.914 "copy": false, 00:12:07.914 "nvme_iov_md": false 00:12:07.914 }, 00:12:07.914 "driver_specific": { 00:12:07.914 "lvol": { 00:12:07.914 "lvol_store_uuid": "03c6bf5f-dae4-4947-8abb-617d01f77de7", 00:12:07.914 "base_bdev": "aio_bdev", 00:12:07.914 "thin_provision": false, 00:12:07.914 "num_allocated_clusters": 38, 00:12:07.914 "snapshot": false, 00:12:07.914 "clone": false, 00:12:07.914 "esnap_clone": false 00:12:07.914 } 00:12:07.914 } 00:12:07.914 } 00:12:07.914 ] 00:12:07.914 17:01:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:12:07.914 17:01:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 03c6bf5f-dae4-4947-8abb-617d01f77de7 00:12:07.914 17:01:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:08.173 17:01:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:08.173 17:01:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 03c6bf5f-dae4-4947-8abb-617d01f77de7 00:12:08.173 17:01:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:08.430 17:01:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:08.430 17:01:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 39dfcf8f-5481-4900-ab0e-8ec7ea59f33c 00:12:08.694 17:01:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 03c6bf5f-dae4-4947-8abb-617d01f77de7 00:12:08.956 17:01:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:09.213 17:01:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:09.213 00:12:09.213 real 0m17.367s 00:12:09.213 user 0m16.778s 00:12:09.213 sys 0m1.960s 00:12:09.213 17:01:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:09.213 17:01:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:09.213 ************************************ 00:12:09.213 END TEST lvs_grow_clean 00:12:09.213 ************************************ 00:12:09.213 17:01:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:12:09.213 17:01:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:12:09.213 17:01:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:09.213 17:01:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:12:09.213 17:01:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:09.213 ************************************ 00:12:09.213 START TEST lvs_grow_dirty 00:12:09.213 ************************************ 00:12:09.213 17:01:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:12:09.213 17:01:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:09.213 17:01:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:09.213 17:01:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:09.213 17:01:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:09.213 17:01:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:09.213 17:01:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:09.213 17:01:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:09.213 17:01:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:09.213 17:01:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:09.471 17:01:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:09.471 17:01:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:09.729 17:01:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=2434bdb3-cd82-4a72-840c-627196420e1e 00:12:09.729 17:01:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2434bdb3-cd82-4a72-840c-627196420e1e 00:12:09.729 17:01:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:09.987 17:01:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:09.987 17:01:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:09.987 17:01:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2434bdb3-cd82-4a72-840c-627196420e1e lvol 150 00:12:10.244 17:01:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=a5151605-78b5-452b-b8de-ebb9801b6f99 00:12:10.244 17:01:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:10.244 17:01:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:10.502 
[2024-07-12 17:01:10.041861] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:10.502 [2024-07-12 17:01:10.041966] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:10.502 true 00:12:10.502 17:01:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2434bdb3-cd82-4a72-840c-627196420e1e 00:12:10.502 17:01:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:10.774 17:01:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:10.774 17:01:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:11.032 17:01:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a5151605-78b5-452b-b8de-ebb9801b6f99 00:12:11.290 17:01:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:11.548 [2024-07-12 17:01:11.004793] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:11.548 17:01:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:11.806 17:01:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1088004 00:12:11.806 17:01:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:11.806 17:01:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:11.806 17:01:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1088004 /var/tmp/bdevperf.sock 00:12:11.806 17:01:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1088004 ']' 00:12:11.806 17:01:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:11.806 17:01:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:11.806 17:01:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:11.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:12:11.806 17:01:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:11.806 17:01:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:11.806 [2024-07-12 17:01:11.298537] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:12:11.806 [2024-07-12 17:01:11.298625] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1088004 ] 00:12:11.806 EAL: No free 2048 kB hugepages reported on node 1 00:12:11.806 [2024-07-12 17:01:11.356761] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.806 [2024-07-12 17:01:11.463400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:12.063 17:01:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:12.063 17:01:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:12:12.063 17:01:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:12.321 Nvme0n1 00:12:12.321 17:01:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:12.579 [ 00:12:12.579 { 00:12:12.579 "name": "Nvme0n1", 00:12:12.579 "aliases": [ 00:12:12.579 "a5151605-78b5-452b-b8de-ebb9801b6f99" 00:12:12.579 ], 00:12:12.579 "product_name": "NVMe disk", 00:12:12.579 "block_size": 4096, 00:12:12.579 "num_blocks": 38912, 00:12:12.579 "uuid": "a5151605-78b5-452b-b8de-ebb9801b6f99", 00:12:12.579 "assigned_rate_limits": { 00:12:12.579 "rw_ios_per_sec": 0, 00:12:12.579 "rw_mbytes_per_sec": 0, 00:12:12.579 "r_mbytes_per_sec": 0, 00:12:12.579 "w_mbytes_per_sec": 0 00:12:12.579 }, 00:12:12.579 "claimed": false, 00:12:12.579 "zoned": false, 00:12:12.579 "supported_io_types": { 00:12:12.579 "read": true, 00:12:12.579 "write": true, 00:12:12.579 "unmap": true, 00:12:12.579 "flush": true, 00:12:12.579 "reset": true, 00:12:12.579 "nvme_admin": true, 00:12:12.579 "nvme_io": true, 00:12:12.579 "nvme_io_md": false, 00:12:12.579 "write_zeroes": true, 00:12:12.579 "zcopy": false, 00:12:12.579 "get_zone_info": false, 00:12:12.579 "zone_management": false, 00:12:12.579 "zone_append": false, 00:12:12.579 "compare": true, 00:12:12.579 "compare_and_write": true, 00:12:12.579 "abort": true, 00:12:12.579 "seek_hole": false, 00:12:12.579 "seek_data": false, 00:12:12.579 "copy": true, 00:12:12.579 "nvme_iov_md": false 00:12:12.579 }, 00:12:12.579 "memory_domains": [ 00:12:12.579 { 00:12:12.579 "dma_device_id": "system", 00:12:12.579 "dma_device_type": 1 00:12:12.579 } 00:12:12.579 ], 00:12:12.579 "driver_specific": { 00:12:12.579 "nvme": [ 00:12:12.579 { 00:12:12.579 "trid": { 00:12:12.579 "trtype": "TCP", 00:12:12.579 "adrfam": "IPv4", 00:12:12.579 "traddr": "10.0.0.2", 00:12:12.579 "trsvcid": "4420", 00:12:12.579 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:12.579 }, 00:12:12.579 "ctrlr_data": { 00:12:12.579 "cntlid": 1, 00:12:12.579 "vendor_id": "0x8086", 00:12:12.579 "model_number": "SPDK bdev Controller", 00:12:12.579 "serial_number": "SPDK0", 
00:12:12.579 "firmware_revision": "24.09", 00:12:12.579 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:12.579 "oacs": { 00:12:12.579 "security": 0, 00:12:12.579 "format": 0, 00:12:12.579 "firmware": 0, 00:12:12.579 "ns_manage": 0 00:12:12.579 }, 00:12:12.579 "multi_ctrlr": true, 00:12:12.579 "ana_reporting": false 00:12:12.579 }, 00:12:12.579 "vs": { 00:12:12.579 "nvme_version": "1.3" 00:12:12.579 }, 00:12:12.579 "ns_data": { 00:12:12.579 "id": 1, 00:12:12.579 "can_share": true 00:12:12.579 } 00:12:12.579 } 00:12:12.579 ], 00:12:12.579 "mp_policy": "active_passive" 00:12:12.579 } 00:12:12.579 } 00:12:12.579 ] 00:12:12.579 17:01:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1088141 00:12:12.579 17:01:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:12.579 17:01:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:12.837 Running I/O for 10 seconds... 00:12:13.769 Latency(us) 00:12:13.769 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:13.769 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:13.769 Nvme0n1 : 1.00 16839.00 65.78 0.00 0.00 0.00 0.00 0.00 00:12:13.769 =================================================================================================================== 00:12:13.769 Total : 16839.00 65.78 0.00 0.00 0.00 0.00 0.00 00:12:13.769 00:12:14.701 17:01:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2434bdb3-cd82-4a72-840c-627196420e1e 00:12:14.701 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:14.701 Nvme0n1 : 2.00 16960.00 66.25 0.00 0.00 0.00 0.00 0.00 00:12:14.701 =================================================================================================================== 00:12:14.701 Total : 16960.00 66.25 0.00 0.00 0.00 0.00 0.00 00:12:14.701 00:12:14.961 true 00:12:14.961 17:01:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2434bdb3-cd82-4a72-840c-627196420e1e 00:12:14.961 17:01:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:15.256 17:01:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:15.256 17:01:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:15.256 17:01:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1088141 00:12:15.837 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:15.837 Nvme0n1 : 3.00 17069.67 66.68 0.00 0.00 0.00 0.00 0.00 00:12:15.837 =================================================================================================================== 00:12:15.837 Total : 17069.67 66.68 0.00 0.00 0.00 0.00 0.00 00:12:15.837 00:12:16.776 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:16.776 Nvme0n1 : 4.00 17168.50 67.06 0.00 0.00 0.00 0.00 0.00 00:12:16.776 =================================================================================================================== 00:12:16.776 Total : 17168.50 67.06 0.00 
0.00 0.00 0.00 0.00 00:12:16.776 00:12:17.713 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:17.713 Nvme0n1 : 5.00 17261.00 67.43 0.00 0.00 0.00 0.00 0.00 00:12:17.713 =================================================================================================================== 00:12:17.713 Total : 17261.00 67.43 0.00 0.00 0.00 0.00 0.00 00:12:17.713 00:12:18.647 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:18.647 Nvme0n1 : 6.00 17294.83 67.56 0.00 0.00 0.00 0.00 0.00 00:12:18.647 =================================================================================================================== 00:12:18.647 Total : 17294.83 67.56 0.00 0.00 0.00 0.00 0.00 00:12:18.647 00:12:20.020 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:20.020 Nvme0n1 : 7.00 17346.43 67.76 0.00 0.00 0.00 0.00 0.00 00:12:20.020 =================================================================================================================== 00:12:20.020 Total : 17346.43 67.76 0.00 0.00 0.00 0.00 0.00 00:12:20.020 00:12:20.952 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:20.952 Nvme0n1 : 8.00 17392.88 67.94 0.00 0.00 0.00 0.00 0.00 00:12:20.952 =================================================================================================================== 00:12:20.952 Total : 17392.88 67.94 0.00 0.00 0.00 0.00 0.00 00:12:20.952 00:12:21.887 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:21.887 Nvme0n1 : 9.00 17402.67 67.98 0.00 0.00 0.00 0.00 0.00 00:12:21.887 =================================================================================================================== 00:12:21.887 Total : 17402.67 67.98 0.00 0.00 0.00 0.00 0.00 00:12:21.887 00:12:22.820 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:22.820 Nvme0n1 : 10.00 17442.50 68.13 0.00 0.00 0.00 0.00 0.00 00:12:22.820 =================================================================================================================== 00:12:22.820 Total : 17442.50 68.13 0.00 0.00 0.00 0.00 0.00 00:12:22.820 00:12:22.820 00:12:22.820 Latency(us) 00:12:22.820 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:22.820 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:22.820 Nvme0n1 : 10.01 17441.23 68.13 0.00 0.00 7334.86 2135.99 14563.56 00:12:22.820 =================================================================================================================== 00:12:22.820 Total : 17441.23 68.13 0.00 0.00 7334.86 2135.99 14563.56 00:12:22.820 0 00:12:22.820 17:01:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1088004 00:12:22.820 17:01:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 1088004 ']' 00:12:22.820 17:01:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 1088004 00:12:22.820 17:01:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:12:22.820 17:01:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:22.820 17:01:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1088004 00:12:22.820 17:01:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:22.820 17:01:22 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:22.820 17:01:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1088004' 00:12:22.820 killing process with pid 1088004 00:12:22.820 17:01:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 1088004 00:12:22.820 Received shutdown signal, test time was about 10.000000 seconds 00:12:22.820 00:12:22.820 Latency(us) 00:12:22.820 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:22.820 =================================================================================================================== 00:12:22.820 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:22.820 17:01:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 1088004 00:12:23.077 17:01:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:23.335 17:01:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:23.592 17:01:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2434bdb3-cd82-4a72-840c-627196420e1e 00:12:23.592 17:01:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:23.850 17:01:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:23.850 17:01:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:12:23.850 17:01:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1085511 00:12:23.850 17:01:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1085511 00:12:24.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1085511 Killed "${NVMF_APP[@]}" "$@" 00:12:24.115 17:01:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:12:24.115 17:01:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:12:24.115 17:01:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:24.115 17:01:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:24.115 17:01:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:24.115 17:01:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1089476 00:12:24.115 17:01:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:24.115 17:01:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1089476 00:12:24.115 17:01:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1089476 ']' 00:12:24.115 17:01:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.115 17:01:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:12:24.115 17:01:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:24.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:24.115 17:01:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:24.115 17:01:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:24.115 [2024-07-12 17:01:23.602007] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:12:24.115 [2024-07-12 17:01:23.602104] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:24.115 EAL: No free 2048 kB hugepages reported on node 1 00:12:24.115 [2024-07-12 17:01:23.667622] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.115 [2024-07-12 17:01:23.777054] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:24.115 [2024-07-12 17:01:23.777118] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:24.115 [2024-07-12 17:01:23.777147] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:24.115 [2024-07-12 17:01:23.777158] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:24.115 [2024-07-12 17:01:23.777168] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:24.115 [2024-07-12 17:01:23.777195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.375 17:01:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:24.375 17:01:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:12:24.375 17:01:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:24.375 17:01:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:24.375 17:01:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:24.375 17:01:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:24.375 17:01:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:24.632 [2024-07-12 17:01:24.139017] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:12:24.632 [2024-07-12 17:01:24.139168] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:12:24.632 [2024-07-12 17:01:24.139223] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:12:24.632 17:01:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:12:24.632 17:01:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev a5151605-78b5-452b-b8de-ebb9801b6f99 00:12:24.632 17:01:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=a5151605-78b5-452b-b8de-ebb9801b6f99 00:12:24.632 17:01:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:24.632 17:01:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:12:24.632 17:01:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:24.632 17:01:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:24.632 17:01:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:24.888 17:01:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a5151605-78b5-452b-b8de-ebb9801b6f99 -t 2000 00:12:25.144 [ 00:12:25.144 { 00:12:25.144 "name": "a5151605-78b5-452b-b8de-ebb9801b6f99", 00:12:25.144 "aliases": [ 00:12:25.144 "lvs/lvol" 00:12:25.144 ], 00:12:25.144 "product_name": "Logical Volume", 00:12:25.144 "block_size": 4096, 00:12:25.144 "num_blocks": 38912, 00:12:25.144 "uuid": "a5151605-78b5-452b-b8de-ebb9801b6f99", 00:12:25.144 "assigned_rate_limits": { 00:12:25.144 "rw_ios_per_sec": 0, 00:12:25.144 "rw_mbytes_per_sec": 0, 00:12:25.144 "r_mbytes_per_sec": 0, 00:12:25.144 "w_mbytes_per_sec": 0 00:12:25.144 }, 00:12:25.144 "claimed": false, 00:12:25.144 "zoned": false, 00:12:25.144 "supported_io_types": { 00:12:25.144 "read": true, 00:12:25.144 "write": true, 00:12:25.144 "unmap": true, 00:12:25.144 "flush": false, 00:12:25.144 "reset": true, 00:12:25.144 "nvme_admin": false, 00:12:25.144 "nvme_io": false, 00:12:25.144 "nvme_io_md": 
false, 00:12:25.144 "write_zeroes": true, 00:12:25.144 "zcopy": false, 00:12:25.144 "get_zone_info": false, 00:12:25.144 "zone_management": false, 00:12:25.144 "zone_append": false, 00:12:25.144 "compare": false, 00:12:25.144 "compare_and_write": false, 00:12:25.144 "abort": false, 00:12:25.144 "seek_hole": true, 00:12:25.144 "seek_data": true, 00:12:25.144 "copy": false, 00:12:25.144 "nvme_iov_md": false 00:12:25.144 }, 00:12:25.144 "driver_specific": { 00:12:25.144 "lvol": { 00:12:25.144 "lvol_store_uuid": "2434bdb3-cd82-4a72-840c-627196420e1e", 00:12:25.144 "base_bdev": "aio_bdev", 00:12:25.144 "thin_provision": false, 00:12:25.144 "num_allocated_clusters": 38, 00:12:25.144 "snapshot": false, 00:12:25.144 "clone": false, 00:12:25.144 "esnap_clone": false 00:12:25.144 } 00:12:25.144 } 00:12:25.144 } 00:12:25.144 ] 00:12:25.144 17:01:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:12:25.144 17:01:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2434bdb3-cd82-4a72-840c-627196420e1e 00:12:25.144 17:01:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:12:25.400 17:01:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:12:25.400 17:01:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2434bdb3-cd82-4a72-840c-627196420e1e 00:12:25.400 17:01:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:12:25.657 17:01:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:12:25.657 17:01:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:25.913 [2024-07-12 17:01:25.364025] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:25.913 17:01:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2434bdb3-cd82-4a72-840c-627196420e1e 00:12:25.913 17:01:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:12:25.913 17:01:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2434bdb3-cd82-4a72-840c-627196420e1e 00:12:25.913 17:01:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:25.913 17:01:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:25.913 17:01:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:25.913 17:01:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:25.913 17:01:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:12:25.913 17:01:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:25.913 17:01:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:25.913 17:01:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:25.913 17:01:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2434bdb3-cd82-4a72-840c-627196420e1e 00:12:26.170 request: 00:12:26.170 { 00:12:26.170 "uuid": "2434bdb3-cd82-4a72-840c-627196420e1e", 00:12:26.170 "method": "bdev_lvol_get_lvstores", 00:12:26.170 "req_id": 1 00:12:26.170 } 00:12:26.170 Got JSON-RPC error response 00:12:26.170 response: 00:12:26.170 { 00:12:26.170 "code": -19, 00:12:26.170 "message": "No such device" 00:12:26.170 } 00:12:26.170 17:01:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:12:26.170 17:01:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:26.170 17:01:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:26.171 17:01:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:26.171 17:01:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:26.428 aio_bdev 00:12:26.428 17:01:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a5151605-78b5-452b-b8de-ebb9801b6f99 00:12:26.428 17:01:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=a5151605-78b5-452b-b8de-ebb9801b6f99 00:12:26.428 17:01:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:26.428 17:01:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:12:26.428 17:01:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:26.428 17:01:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:26.428 17:01:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:26.687 17:01:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a5151605-78b5-452b-b8de-ebb9801b6f99 -t 2000 00:12:26.943 [ 00:12:26.943 { 00:12:26.943 "name": "a5151605-78b5-452b-b8de-ebb9801b6f99", 00:12:26.943 "aliases": [ 00:12:26.943 "lvs/lvol" 00:12:26.943 ], 00:12:26.943 "product_name": "Logical Volume", 00:12:26.943 "block_size": 4096, 00:12:26.943 "num_blocks": 38912, 00:12:26.943 "uuid": "a5151605-78b5-452b-b8de-ebb9801b6f99", 00:12:26.943 "assigned_rate_limits": { 00:12:26.943 "rw_ios_per_sec": 0, 00:12:26.943 "rw_mbytes_per_sec": 0, 00:12:26.943 "r_mbytes_per_sec": 0, 00:12:26.943 "w_mbytes_per_sec": 0 00:12:26.943 }, 00:12:26.943 "claimed": false, 00:12:26.943 "zoned": false, 00:12:26.943 "supported_io_types": { 
00:12:26.943 "read": true, 00:12:26.943 "write": true, 00:12:26.943 "unmap": true, 00:12:26.943 "flush": false, 00:12:26.943 "reset": true, 00:12:26.943 "nvme_admin": false, 00:12:26.943 "nvme_io": false, 00:12:26.943 "nvme_io_md": false, 00:12:26.943 "write_zeroes": true, 00:12:26.943 "zcopy": false, 00:12:26.943 "get_zone_info": false, 00:12:26.943 "zone_management": false, 00:12:26.943 "zone_append": false, 00:12:26.943 "compare": false, 00:12:26.943 "compare_and_write": false, 00:12:26.943 "abort": false, 00:12:26.943 "seek_hole": true, 00:12:26.943 "seek_data": true, 00:12:26.943 "copy": false, 00:12:26.943 "nvme_iov_md": false 00:12:26.943 }, 00:12:26.943 "driver_specific": { 00:12:26.943 "lvol": { 00:12:26.943 "lvol_store_uuid": "2434bdb3-cd82-4a72-840c-627196420e1e", 00:12:26.943 "base_bdev": "aio_bdev", 00:12:26.943 "thin_provision": false, 00:12:26.943 "num_allocated_clusters": 38, 00:12:26.943 "snapshot": false, 00:12:26.943 "clone": false, 00:12:26.943 "esnap_clone": false 00:12:26.943 } 00:12:26.943 } 00:12:26.943 } 00:12:26.943 ] 00:12:26.943 17:01:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:12:26.943 17:01:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2434bdb3-cd82-4a72-840c-627196420e1e 00:12:26.943 17:01:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:27.200 17:01:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:27.200 17:01:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2434bdb3-cd82-4a72-840c-627196420e1e 00:12:27.200 17:01:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:27.468 17:01:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:27.468 17:01:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a5151605-78b5-452b-b8de-ebb9801b6f99 00:12:27.728 17:01:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2434bdb3-cd82-4a72-840c-627196420e1e 00:12:27.985 17:01:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:28.242 17:01:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:28.242 00:12:28.242 real 0m18.998s 00:12:28.242 user 0m47.840s 00:12:28.242 sys 0m5.001s 00:12:28.242 17:01:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:28.243 17:01:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:28.243 ************************************ 00:12:28.243 END TEST lvs_grow_dirty 00:12:28.243 ************************************ 00:12:28.243 17:01:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:12:28.243 17:01:27 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:12:28.243 17:01:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:12:28.243 17:01:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:12:28.243 17:01:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:12:28.243 17:01:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:28.243 17:01:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:12:28.243 17:01:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:12:28.243 17:01:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:12:28.243 17:01:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:28.243 nvmf_trace.0 00:12:28.243 17:01:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:12:28.243 17:01:27 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:12:28.243 17:01:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:28.243 17:01:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:12:28.243 17:01:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:28.243 17:01:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:12:28.243 17:01:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:28.243 17:01:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:28.243 rmmod nvme_tcp 00:12:28.243 rmmod nvme_fabrics 00:12:28.243 rmmod nvme_keyring 00:12:28.243 17:01:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:28.243 17:01:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:12:28.243 17:01:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:12:28.243 17:01:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1089476 ']' 00:12:28.243 17:01:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1089476 00:12:28.243 17:01:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 1089476 ']' 00:12:28.243 17:01:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 1089476 00:12:28.243 17:01:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:12:28.243 17:01:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:28.243 17:01:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1089476 00:12:28.243 17:01:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:28.243 17:01:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:28.243 17:01:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1089476' 00:12:28.243 killing process with pid 1089476 00:12:28.243 17:01:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 1089476 00:12:28.243 17:01:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 1089476 00:12:28.501 17:01:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:28.501 17:01:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:28.501 17:01:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:28.501 
17:01:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:28.501 17:01:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:28.501 17:01:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:28.501 17:01:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:28.501 17:01:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.032 17:01:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:31.032 00:12:31.032 real 0m41.872s 00:12:31.032 user 1m10.375s 00:12:31.032 sys 0m8.892s 00:12:31.032 17:01:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:31.033 17:01:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:31.033 ************************************ 00:12:31.033 END TEST nvmf_lvs_grow 00:12:31.033 ************************************ 00:12:31.033 17:01:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:31.033 17:01:30 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:31.033 17:01:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:31.033 17:01:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:31.033 17:01:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:31.033 ************************************ 00:12:31.033 START TEST nvmf_bdev_io_wait 00:12:31.033 ************************************ 00:12:31.033 17:01:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:31.033 * Looking for test storage... 
00:12:31.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:31.033 17:01:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:31.033 17:01:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:12:31.033 17:01:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:31.033 17:01:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:31.033 17:01:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:31.033 17:01:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:31.033 17:01:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:31.033 17:01:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:31.033 17:01:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:31.033 17:01:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:31.033 17:01:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:31.033 17:01:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:31.033 17:01:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:31.033 17:01:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:31.033 17:01:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:31.033 17:01:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:31.033 17:01:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:31.033 17:01:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:31.033 17:01:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:31.033 17:01:30 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:31.033 17:01:30 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:31.033 17:01:30 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:31.033 17:01:30 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.033 17:01:30 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.033 17:01:30 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.033 17:01:30 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:12:31.033 17:01:30 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.033 17:01:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:12:31.033 17:01:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:31.033 17:01:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:31.033 17:01:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:31.033 17:01:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:31.033 17:01:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:31.033 17:01:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:31.033 17:01:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:31.033 17:01:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:31.033 17:01:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:31.033 17:01:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:31.033 17:01:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:12:31.033 17:01:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:31.033 17:01:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:31.033 17:01:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:31.033 17:01:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:31.033 17:01:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:31.033 17:01:30 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.033 17:01:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:31.033 17:01:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.033 17:01:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:31.033 17:01:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:31.033 17:01:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:12:31.033 17:01:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:32.932 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:32.932 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:32.932 Found net devices under 0000:84:00.0: cvl_0_0 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:32.932 Found net devices under 0000:84:00.1: cvl_0_1 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:32.932 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:32.933 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:32.933 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:32.933 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:32.933 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:32.933 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:32.933 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:32.933 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:32.933 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:32.933 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:32.933 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:32.933 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:32.933 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:12:32.933 00:12:32.933 --- 10.0.0.2 ping statistics --- 00:12:32.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.933 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:12:32.933 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:32.933 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:32.933 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:12:32.933 00:12:32.933 --- 10.0.0.1 ping statistics --- 00:12:32.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.933 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:12:32.933 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:32.933 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:12:32.933 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:32.933 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:32.933 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:32.933 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:32.933 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:32.933 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:32.933 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:32.933 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:12:32.933 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:32.933 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:32.933 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:32.933 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1092006 00:12:32.933 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:12:32.933 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1092006 00:12:32.933 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 1092006 ']' 00:12:32.933 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.933 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:32.933 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.933 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:32.933 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:33.190 [2024-07-12 17:01:32.644770] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
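For readability: the nvmf_tcp_init sequence traced above reduces to the following shell steps. The interface names cvl_0_0/cvl_0_1 are simply the two E810 ports this host detected, and the authoritative logic lives in test/nvmf/common.sh, so treat this as a condensed sketch rather than the exact helper code.

    # move one port into a namespace as the target, keep the other in the root namespace as the initiator
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                  # initiator -> target reachability check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator reachability check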
00:12:33.190 [2024-07-12 17:01:32.644844] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:33.190 EAL: No free 2048 kB hugepages reported on node 1 00:12:33.190 [2024-07-12 17:01:32.717058] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:33.190 [2024-07-12 17:01:32.835034] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:33.190 [2024-07-12 17:01:32.835089] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:33.190 [2024-07-12 17:01:32.835104] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:33.190 [2024-07-12 17:01:32.835116] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:33.190 [2024-07-12 17:01:32.835126] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:33.190 [2024-07-12 17:01:32.836761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:33.190 [2024-07-12 17:01:32.836822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:33.190 [2024-07-12 17:01:32.836800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:33.190 [2024-07-12 17:01:32.836825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.190 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:33.191 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:12:33.191 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:33.191 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:33.191 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:33.191 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:33.191 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:12:33.191 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.191 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:33.449 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.449 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:12:33.449 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.449 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:33.449 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.449 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:33.449 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.449 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:33.449 [2024-07-12 17:01:32.957623] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:33.449 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
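With nvmf_tgt running inside the namespace under --wait-for-rpc, the rest of the bdev_io_wait setup is plain JSON-RPC issued through the rpc_cmd test helper. Condensed (the first three calls are traced above, the bdev/subsystem/listener calls just below), the sequence is roughly:

    rpc_cmd bdev_set_options -p 5 -c 1               # tiny bdev_io pool/cache, which is what forces the io_wait path this test exercises
    rpc_cmd framework_start_init
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0     # 64 MiB ramdisk with 512 B blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420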
00:12:33.449 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:33.449 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.449 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:33.449 Malloc0 00:12:33.449 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.449 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:33.449 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.449 17:01:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:33.449 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.449 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:33.449 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.449 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:33.449 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.449 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:33.449 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.449 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:33.449 [2024-07-12 17:01:33.019053] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:33.449 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.449 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1092038 00:12:33.449 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1092040 00:12:33.449 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:12:33.449 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:12:33.449 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:33.449 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:33.449 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:33.449 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1092042 00:12:33.449 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:33.449 { 00:12:33.449 "params": { 00:12:33.449 "name": "Nvme$subsystem", 00:12:33.449 "trtype": "$TEST_TRANSPORT", 00:12:33.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:33.449 "adrfam": "ipv4", 00:12:33.449 "trsvcid": "$NVMF_PORT", 00:12:33.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:33.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:33.449 "hdgst": ${hdgst:-false}, 00:12:33.449 "ddgst": ${ddgst:-false} 00:12:33.449 }, 00:12:33.449 "method": "bdev_nvme_attach_controller" 00:12:33.449 } 00:12:33.449 EOF 00:12:33.449 )") 00:12:33.449 17:01:33 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:12:33.449 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:12:33.449 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:33.449 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:33.449 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:33.449 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:33.449 { 00:12:33.449 "params": { 00:12:33.449 "name": "Nvme$subsystem", 00:12:33.449 "trtype": "$TEST_TRANSPORT", 00:12:33.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:33.449 "adrfam": "ipv4", 00:12:33.449 "trsvcid": "$NVMF_PORT", 00:12:33.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:33.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:33.449 "hdgst": ${hdgst:-false}, 00:12:33.449 "ddgst": ${ddgst:-false} 00:12:33.449 }, 00:12:33.449 "method": "bdev_nvme_attach_controller" 00:12:33.449 } 00:12:33.449 EOF 00:12:33.449 )") 00:12:33.449 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:12:33.449 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:12:33.449 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1092044 00:12:33.449 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:12:33.449 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:33.449 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:33.449 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:33.449 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:33.449 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:33.449 { 00:12:33.449 "params": { 00:12:33.449 "name": "Nvme$subsystem", 00:12:33.449 "trtype": "$TEST_TRANSPORT", 00:12:33.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:33.449 "adrfam": "ipv4", 00:12:33.449 "trsvcid": "$NVMF_PORT", 00:12:33.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:33.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:33.449 "hdgst": ${hdgst:-false}, 00:12:33.449 "ddgst": ${ddgst:-false} 00:12:33.449 }, 00:12:33.449 "method": "bdev_nvme_attach_controller" 00:12:33.449 } 00:12:33.449 EOF 00:12:33.449 )") 00:12:33.449 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:12:33.449 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:12:33.449 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:33.449 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:33.449 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:33.449 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:33.449 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:33.449 { 00:12:33.449 "params": { 00:12:33.449 "name": "Nvme$subsystem", 00:12:33.449 "trtype": "$TEST_TRANSPORT", 00:12:33.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:33.449 "adrfam": "ipv4", 00:12:33.449 "trsvcid": "$NVMF_PORT", 00:12:33.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:33.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:33.449 "hdgst": ${hdgst:-false}, 00:12:33.449 "ddgst": ${ddgst:-false} 00:12:33.449 }, 00:12:33.449 "method": "bdev_nvme_attach_controller" 00:12:33.449 } 00:12:33.449 EOF 00:12:33.449 )") 00:12:33.449 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:33.449 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1092038 00:12:33.449 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:33.449 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:33.449 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:33.449 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:33.449 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:33.449 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:33.449 "params": { 00:12:33.449 "name": "Nvme1", 00:12:33.449 "trtype": "tcp", 00:12:33.449 "traddr": "10.0.0.2", 00:12:33.449 "adrfam": "ipv4", 00:12:33.449 "trsvcid": "4420", 00:12:33.449 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:33.449 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:33.449 "hdgst": false, 00:12:33.449 "ddgst": false 00:12:33.449 }, 00:12:33.449 "method": "bdev_nvme_attach_controller" 00:12:33.449 }' 00:12:33.449 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:12:33.449 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:33.449 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:33.449 "params": { 00:12:33.449 "name": "Nvme1", 00:12:33.449 "trtype": "tcp", 00:12:33.449 "traddr": "10.0.0.2", 00:12:33.449 "adrfam": "ipv4", 00:12:33.449 "trsvcid": "4420", 00:12:33.449 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:33.449 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:33.449 "hdgst": false, 00:12:33.449 "ddgst": false 00:12:33.449 }, 00:12:33.449 "method": "bdev_nvme_attach_controller" 00:12:33.449 }' 00:12:33.449 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:33.449 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:33.449 "params": { 00:12:33.449 "name": "Nvme1", 00:12:33.449 "trtype": "tcp", 00:12:33.449 "traddr": "10.0.0.2", 00:12:33.449 "adrfam": "ipv4", 00:12:33.449 "trsvcid": "4420", 00:12:33.449 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:33.450 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:33.450 "hdgst": false, 00:12:33.450 "ddgst": false 00:12:33.450 }, 00:12:33.450 "method": "bdev_nvme_attach_controller" 00:12:33.450 }' 00:12:33.450 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:33.450 17:01:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:33.450 "params": { 00:12:33.450 "name": "Nvme1", 00:12:33.450 "trtype": "tcp", 00:12:33.450 "traddr": "10.0.0.2", 00:12:33.450 "adrfam": "ipv4", 00:12:33.450 "trsvcid": "4420", 00:12:33.450 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:33.450 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:33.450 "hdgst": false, 00:12:33.450 "ddgst": false 00:12:33.450 }, 00:12:33.450 "method": "bdev_nvme_attach_controller" 00:12:33.450 }' 00:12:33.450 [2024-07-12 17:01:33.066602] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:12:33.450 [2024-07-12 17:01:33.066602] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:12:33.450 [2024-07-12 17:01:33.066602] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:12:33.450 [2024-07-12 17:01:33.066687] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-12 17:01:33.066687] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-12 17:01:33.066687] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:12:33.450 --proc-type=auto ] 00:12:33.450 --proc-type=auto ] 00:12:33.450 [2024-07-12 17:01:33.067833] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
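The four bdevperf command lines above differ only in core mask, instance id and workload (write, read, flush and unmap respectively); each receives the same generated attach-controller config on /dev/fd/63, which is almost certainly bash process substitution around the gen_nvmf_target_json helper. Spelled out for the write job, the launch looks roughly like:

    # write workload: core mask 0x10, instance 1, 256 MB DPDK memory, qd 128, 4 KiB I/O, 1 s run
    ./build/examples/bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
        --json <(gen_nvmf_target_json)
    # gen_nvmf_target_json emits the JSON printed above: one bdev_nvme_attach_controller
    # call for Nvme1 at 10.0.0.2:4420, subsystem nqn.2016-06.io.spdk:cnode1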
00:12:33.450 [2024-07-12 17:01:33.067903] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:12:33.450 EAL: No free 2048 kB hugepages reported on node 1 00:12:33.707 EAL: No free 2048 kB hugepages reported on node 1 00:12:33.707 [2024-07-12 17:01:33.249886] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.707 EAL: No free 2048 kB hugepages reported on node 1 00:12:33.707 [2024-07-12 17:01:33.349326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:12:33.707 [2024-07-12 17:01:33.355097] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.963 EAL: No free 2048 kB hugepages reported on node 1 00:12:33.963 [2024-07-12 17:01:33.455121] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.963 [2024-07-12 17:01:33.456441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:33.963 [2024-07-12 17:01:33.532428] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.963 [2024-07-12 17:01:33.558469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:12:33.963 [2024-07-12 17:01:33.628840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:12:34.220 Running I/O for 1 seconds... 00:12:34.220 Running I/O for 1 seconds... 00:12:34.220 Running I/O for 1 seconds... 00:12:34.220 Running I/O for 1 seconds... 00:12:35.150 00:12:35.150 Latency(us) 00:12:35.150 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:35.150 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:12:35.150 Nvme1n1 : 1.00 200128.76 781.75 0.00 0.00 637.23 256.38 867.75 00:12:35.150 =================================================================================================================== 00:12:35.150 Total : 200128.76 781.75 0.00 0.00 637.23 256.38 867.75 00:12:35.150 00:12:35.150 Latency(us) 00:12:35.150 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:35.150 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:12:35.150 Nvme1n1 : 1.03 6834.55 26.70 0.00 0.00 18474.32 8204.14 31651.46 00:12:35.150 =================================================================================================================== 00:12:35.150 Total : 6834.55 26.70 0.00 0.00 18474.32 8204.14 31651.46 00:12:35.467 00:12:35.467 Latency(us) 00:12:35.467 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:35.467 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:12:35.467 Nvme1n1 : 1.01 6559.93 25.62 0.00 0.00 19441.29 6505.05 37671.06 00:12:35.467 =================================================================================================================== 00:12:35.467 Total : 6559.93 25.62 0.00 0.00 19441.29 6505.05 37671.06 00:12:35.467 00:12:35.467 Latency(us) 00:12:35.467 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:35.467 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:12:35.467 Nvme1n1 : 1.01 9279.55 36.25 0.00 0.00 13726.41 8883.77 24855.13 00:12:35.467 =================================================================================================================== 00:12:35.467 Total : 9279.55 36.25 0.00 0.00 13726.41 8883.77 24855.13 00:12:35.724 17:01:35 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@38 -- # wait 1092040 00:12:35.724 17:01:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1092042 00:12:35.724 17:01:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1092044 00:12:35.724 17:01:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:35.724 17:01:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.724 17:01:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:35.724 17:01:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.724 17:01:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:12:35.724 17:01:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:12:35.724 17:01:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:35.724 17:01:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:12:35.724 17:01:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:35.724 17:01:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:12:35.724 17:01:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:35.724 17:01:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:35.724 rmmod nvme_tcp 00:12:35.724 rmmod nvme_fabrics 00:12:35.724 rmmod nvme_keyring 00:12:35.724 17:01:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:35.724 17:01:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:12:35.724 17:01:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:12:35.724 17:01:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1092006 ']' 00:12:35.724 17:01:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1092006 00:12:35.724 17:01:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 1092006 ']' 00:12:35.724 17:01:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 1092006 00:12:35.724 17:01:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:12:35.724 17:01:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:35.724 17:01:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1092006 00:12:35.724 17:01:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:35.724 17:01:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:35.724 17:01:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1092006' 00:12:35.724 killing process with pid 1092006 00:12:35.724 17:01:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 1092006 00:12:35.724 17:01:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 1092006 00:12:35.982 17:01:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:35.982 17:01:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:35.982 17:01:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:35.982 17:01:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:35.982 17:01:35 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:12:35.982 17:01:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.982 17:01:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:35.982 17:01:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:38.516 17:01:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:38.516 00:12:38.516 real 0m7.394s 00:12:38.516 user 0m17.613s 00:12:38.516 sys 0m3.411s 00:12:38.516 17:01:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:38.516 17:01:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:38.516 ************************************ 00:12:38.516 END TEST nvmf_bdev_io_wait 00:12:38.516 ************************************ 00:12:38.516 17:01:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:38.516 17:01:37 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:38.516 17:01:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:38.516 17:01:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:38.516 17:01:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:38.516 ************************************ 00:12:38.516 START TEST nvmf_queue_depth 00:12:38.516 ************************************ 00:12:38.517 17:01:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:38.517 * Looking for test storage... 
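Each suite in this log is dispatched through run_test, which is what prints the START TEST/END TEST banners and the real/user/sys timing shown above. The dispatch for the suite starting here is the single line from nvmf/nvmf.sh already visible in the trace ($rootdir below stands in for the spdk checkout; the trace shows the absolute Jenkins workspace path):

    # run_test times the child script and emits the START/END TEST banners seen in this log
    run_test nvmf_queue_depth "$rootdir/test/nvmf/target/queue_depth.sh" --transport=tcp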
00:12:38.517 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:38.517 17:01:37 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:38.517 17:01:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:12:38.517 17:01:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:38.517 17:01:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:38.517 17:01:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:38.517 17:01:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:38.517 17:01:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:38.517 17:01:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:38.517 17:01:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:38.517 17:01:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:38.517 17:01:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:38.517 17:01:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:38.517 17:01:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:38.517 17:01:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:38.517 17:01:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:38.517 17:01:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:38.517 17:01:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:38.517 17:01:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:38.517 17:01:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:38.517 17:01:37 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:38.517 17:01:37 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:38.517 17:01:37 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:38.517 17:01:37 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.517 17:01:37 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.517 17:01:37 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.517 17:01:37 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:12:38.517 17:01:37 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.517 17:01:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:12:38.517 17:01:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:38.517 17:01:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:38.517 17:01:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:38.517 17:01:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:38.517 17:01:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:38.517 17:01:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:38.517 17:01:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:38.517 17:01:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:38.517 17:01:37 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:12:38.517 17:01:37 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:12:38.517 17:01:37 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:38.517 17:01:37 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:12:38.517 17:01:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:38.517 17:01:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:38.517 17:01:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:38.517 17:01:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:38.517 17:01:37 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:12:38.517 17:01:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:38.517 17:01:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:38.517 17:01:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:38.517 17:01:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:38.517 17:01:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:38.517 17:01:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:12:38.517 17:01:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:40.462 
17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:40.462 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:40.462 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:40.462 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:40.463 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:40.463 Found net devices under 0000:84:00.0: cvl_0_0 00:12:40.463 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:40.463 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:40.463 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:40.463 17:01:39 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:40.463 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:40.463 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:40.463 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:40.463 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:40.463 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:40.463 Found net devices under 0000:84:00.1: cvl_0_1 00:12:40.463 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:40.463 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:40.463 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:12:40.463 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:40.463 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:40.463 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:40.463 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:40.463 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:40.463 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:40.463 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:40.463 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:40.463 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:40.463 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:40.463 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:40.463 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:40.463 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:40.463 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:40.463 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:40.463 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:40.463 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:40.463 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:40.463 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:40.463 17:01:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:40.463 17:01:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:40.463 17:01:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:40.463 17:01:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:40.463 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:40.463 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:12:40.463 00:12:40.463 --- 10.0.0.2 ping statistics --- 00:12:40.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.463 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:12:40.463 17:01:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:40.463 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:40.463 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:12:40.463 00:12:40.463 --- 10.0.0.1 ping statistics --- 00:12:40.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.463 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:12:40.463 17:01:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:40.463 17:01:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:12:40.463 17:01:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:40.463 17:01:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:40.463 17:01:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:40.463 17:01:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:40.463 17:01:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:40.463 17:01:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:40.463 17:01:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:40.463 17:01:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:12:40.463 17:01:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:40.463 17:01:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:40.463 17:01:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:40.463 17:01:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1094395 00:12:40.463 17:01:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:40.463 17:01:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1094395 00:12:40.463 17:01:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1094395 ']' 00:12:40.463 17:01:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.463 17:01:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:40.463 17:01:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:40.463 17:01:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:40.463 17:01:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:40.463 [2024-07-12 17:01:40.097814] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
00:12:40.463 [2024-07-12 17:01:40.097909] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:40.463 EAL: No free 2048 kB hugepages reported on node 1 00:12:40.721 [2024-07-12 17:01:40.163584] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.721 [2024-07-12 17:01:40.271863] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:40.721 [2024-07-12 17:01:40.271928] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:40.721 [2024-07-12 17:01:40.271957] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:40.721 [2024-07-12 17:01:40.271969] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:40.721 [2024-07-12 17:01:40.271979] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:40.721 [2024-07-12 17:01:40.272007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:40.721 17:01:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:40.721 17:01:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:12:40.721 17:01:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:40.721 17:01:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:40.721 17:01:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:40.978 17:01:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:40.978 17:01:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:40.978 17:01:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.978 17:01:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:40.978 [2024-07-12 17:01:40.419908] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:40.978 17:01:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.978 17:01:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:40.978 17:01:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.978 17:01:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:40.978 Malloc0 00:12:40.978 17:01:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.978 17:01:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:40.978 17:01:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.978 17:01:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:40.978 17:01:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.978 17:01:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:40.978 17:01:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.978 
17:01:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:40.978 17:01:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.978 17:01:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:40.978 17:01:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.978 17:01:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:40.978 [2024-07-12 17:01:40.479170] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.978 17:01:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.978 17:01:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1094418 00:12:40.978 17:01:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:12:40.978 17:01:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:40.978 17:01:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1094418 /var/tmp/bdevperf.sock 00:12:40.978 17:01:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1094418 ']' 00:12:40.978 17:01:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:40.978 17:01:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:40.978 17:01:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:40.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:40.978 17:01:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:40.978 17:01:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:40.978 [2024-07-12 17:01:40.522103] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
00:12:40.978 [2024-07-12 17:01:40.522178] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1094418 ] 00:12:40.978 EAL: No free 2048 kB hugepages reported on node 1 00:12:40.978 [2024-07-12 17:01:40.579522] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.236 [2024-07-12 17:01:40.687079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.236 17:01:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:41.236 17:01:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:12:41.236 17:01:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:41.236 17:01:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.236 17:01:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:41.494 NVMe0n1 00:12:41.494 17:01:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.494 17:01:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:41.494 Running I/O for 10 seconds... 00:12:53.683 00:12:53.683 Latency(us) 00:12:53.683 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:53.683 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:12:53.683 Verification LBA range: start 0x0 length 0x4000 00:12:53.683 NVMe0n1 : 10.07 9744.15 38.06 0.00 0.00 104707.23 21068.61 64468.01 00:12:53.684 =================================================================================================================== 00:12:53.684 Total : 9744.15 38.06 0.00 0.00 104707.23 21068.61 64468.01 00:12:53.684 0 00:12:53.684 17:01:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1094418 00:12:53.684 17:01:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1094418 ']' 00:12:53.684 17:01:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1094418 00:12:53.684 17:01:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:12:53.684 17:01:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:53.684 17:01:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1094418 00:12:53.684 17:01:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:53.684 17:01:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:53.684 17:01:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1094418' 00:12:53.684 killing process with pid 1094418 00:12:53.684 17:01:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1094418 00:12:53.684 Received shutdown signal, test time was about 10.000000 seconds 00:12:53.684 00:12:53.684 Latency(us) 00:12:53.684 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:53.684 
=================================================================================================================== 00:12:53.684 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:53.684 17:01:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1094418 00:12:53.684 17:01:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:53.684 17:01:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:12:53.684 17:01:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:53.684 17:01:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:12:53.684 17:01:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:53.684 17:01:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:12:53.684 17:01:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:53.684 17:01:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:53.684 rmmod nvme_tcp 00:12:53.684 rmmod nvme_fabrics 00:12:53.684 rmmod nvme_keyring 00:12:53.684 17:01:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:53.684 17:01:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:12:53.684 17:01:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:12:53.684 17:01:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1094395 ']' 00:12:53.684 17:01:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1094395 00:12:53.684 17:01:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1094395 ']' 00:12:53.684 17:01:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1094395 00:12:53.684 17:01:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:12:53.684 17:01:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:53.684 17:01:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1094395 00:12:53.684 17:01:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:53.684 17:01:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:53.684 17:01:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1094395' 00:12:53.684 killing process with pid 1094395 00:12:53.684 17:01:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1094395 00:12:53.684 17:01:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1094395 00:12:53.684 17:01:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:53.684 17:01:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:53.684 17:01:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:53.684 17:01:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:53.684 17:01:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:53.684 17:01:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.684 17:01:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:53.684 17:01:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.247 17:01:53 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:54.247 00:12:54.247 real 0m16.214s 00:12:54.247 user 0m22.469s 00:12:54.247 sys 0m3.399s 00:12:54.247 17:01:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:54.247 17:01:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:54.247 ************************************ 00:12:54.247 END TEST nvmf_queue_depth 00:12:54.247 ************************************ 00:12:54.247 17:01:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:54.247 17:01:53 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:54.247 17:01:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:54.247 17:01:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:54.247 17:01:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:54.506 ************************************ 00:12:54.506 START TEST nvmf_target_multipath 00:12:54.506 ************************************ 00:12:54.506 17:01:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:54.506 * Looking for test storage... 00:12:54.506 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:54.506 17:01:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:54.506 17:01:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:12:54.506 17:01:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:54.506 17:01:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:54.506 17:01:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:54.506 17:01:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:54.506 17:01:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:54.506 17:01:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:54.506 17:01:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:54.506 17:01:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:54.506 17:01:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:54.506 17:01:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:54.506 17:01:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:54.506 17:01:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:54.506 17:01:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:54.506 17:01:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:54.506 17:01:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:54.506 17:01:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:54.506 17:01:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:54.506 17:01:54 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:54.506 17:01:54 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:54.506 17:01:54 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:54.506 17:01:54 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.506 17:01:54 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.506 17:01:54 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.506 17:01:54 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:12:54.506 17:01:54 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.506 17:01:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:12:54.506 17:01:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:54.506 17:01:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:54.506 17:01:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:54.506 17:01:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:54.506 17:01:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:12:54.506 17:01:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:54.506 17:01:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:54.506 17:01:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:54.506 17:01:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:54.506 17:01:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:54.506 17:01:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:12:54.506 17:01:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:54.506 17:01:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:12:54.506 17:01:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:54.506 17:01:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:54.506 17:01:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:54.506 17:01:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:54.506 17:01:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:54.506 17:01:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:54.506 17:01:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:54.506 17:01:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.506 17:01:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:54.506 17:01:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:54.506 17:01:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:12:54.506 17:01:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:57.046 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:57.046 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:12:57.046 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:57.046 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:57.046 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:57.046 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:57.046 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:57.046 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:12:57.046 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:57.046 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:12:57.046 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:12:57.046 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:12:57.046 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:12:57.046 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:12:57.046 17:01:56 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:12:57.046 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:57.046 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:57.046 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:57.046 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:57.046 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:57.046 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:57.046 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:57.046 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:57.046 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:57.047 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:57.047 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:57.047 Found net devices under 0000:84:00.0: cvl_0_0 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:57.047 Found net devices under 0000:84:00.1: cvl_0_1 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:57.047 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:57.047 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:12:57.047 00:12:57.047 --- 10.0.0.2 ping statistics --- 00:12:57.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.047 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:57.047 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:57.047 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:12:57.047 00:12:57.047 --- 10.0.0.1 ping statistics --- 00:12:57.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.047 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:12:57.047 only one NIC for nvmf test 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:57.047 rmmod nvme_tcp 00:12:57.047 rmmod nvme_fabrics 00:12:57.047 rmmod nvme_keyring 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:57.047 17:01:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.005 17:01:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:12:59.005 17:01:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:12:59.005 17:01:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:12:59.005 17:01:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:59.005 17:01:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:12:59.005 17:01:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:59.005 17:01:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:12:59.005 17:01:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:59.005 17:01:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:59.005 17:01:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:59.005 17:01:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:12:59.005 17:01:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:12:59.005 17:01:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:59.005 17:01:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:59.005 17:01:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:59.005 17:01:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:59.005 17:01:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:59.005 17:01:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:59.005 17:01:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.005 17:01:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:59.005 17:01:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.005 17:01:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:59.005 00:12:59.005 real 0m4.436s 00:12:59.005 user 0m0.830s 00:12:59.005 sys 0m1.616s 00:12:59.005 17:01:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:59.005 17:01:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:59.005 ************************************ 00:12:59.005 END TEST nvmf_target_multipath 00:12:59.005 ************************************ 00:12:59.005 17:01:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:59.005 17:01:58 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:59.005 17:01:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:59.005 17:01:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:59.005 17:01:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:59.005 ************************************ 00:12:59.005 START TEST nvmf_zcopy 00:12:59.005 ************************************ 00:12:59.005 17:01:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:59.005 * Looking for test storage... 
00:12:59.005 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:59.005 17:01:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:59.005 17:01:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:12:59.005 17:01:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:59.005 17:01:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:59.005 17:01:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:59.005 17:01:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:59.005 17:01:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:59.005 17:01:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:59.005 17:01:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:59.005 17:01:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:59.005 17:01:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:59.005 17:01:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:59.005 17:01:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:59.005 17:01:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:59.005 17:01:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:59.005 17:01:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:59.005 17:01:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:59.005 17:01:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:59.005 17:01:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:59.005 17:01:58 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:59.005 17:01:58 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:59.005 17:01:58 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:59.005 17:01:58 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.005 17:01:58 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:12:59.005 17:01:58 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.005 17:01:58 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:12:59.005 17:01:58 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.005 17:01:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:12:59.005 17:01:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:59.005 17:01:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:59.005 17:01:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:59.005 17:01:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:59.005 17:01:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:59.005 17:01:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:59.005 17:01:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:59.005 17:01:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:59.005 17:01:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:12:59.005 17:01:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:59.005 17:01:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:59.005 17:01:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:59.005 17:01:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:59.005 17:01:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:59.005 17:01:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.005 17:01:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:59.005 17:01:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.005 17:01:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:59.005 17:01:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:59.005 17:01:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:12:59.005 17:01:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:01.531 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:01.531 
17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:01.531 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:01.531 Found net devices under 0000:84:00.0: cvl_0_0 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:01.531 Found net devices under 0000:84:00.1: cvl_0_1 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:01.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:01.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:13:01.531 00:13:01.531 --- 10.0.0.2 ping statistics --- 00:13:01.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:01.531 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:01.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:01.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:13:01.531 00:13:01.531 --- 10.0.0.1 ping statistics --- 00:13:01.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:01.531 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:01.531 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:01.532 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:01.532 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:01.532 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:01.532 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:01.532 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:01.532 17:02:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:13:01.532 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:01.532 17:02:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:01.532 17:02:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:01.532 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1099629 00:13:01.532 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:01.532 17:02:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1099629 00:13:01.532 17:02:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 1099629 ']' 00:13:01.532 17:02:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:01.532 17:02:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:01.532 17:02:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:01.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:01.532 17:02:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:01.532 17:02:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:01.532 [2024-07-12 17:02:00.802283] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:13:01.532 [2024-07-12 17:02:00.802359] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:01.532 EAL: No free 2048 kB hugepages reported on node 1 00:13:01.532 [2024-07-12 17:02:00.865249] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.532 [2024-07-12 17:02:00.967875] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:01.532 [2024-07-12 17:02:00.967925] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:01.532 [2024-07-12 17:02:00.967956] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:01.532 [2024-07-12 17:02:00.967969] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:01.532 [2024-07-12 17:02:00.967981] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:01.532 [2024-07-12 17:02:00.968008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:01.532 17:02:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:01.532 17:02:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:13:01.532 17:02:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:01.532 17:02:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:01.532 17:02:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:01.532 17:02:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:01.532 17:02:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:13:01.532 17:02:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:13:01.532 17:02:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.532 17:02:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:01.532 [2024-07-12 17:02:01.112045] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:01.532 17:02:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.532 17:02:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:01.532 17:02:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.532 17:02:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:01.532 17:02:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.532 17:02:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:01.532 17:02:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.532 17:02:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:01.532 [2024-07-12 17:02:01.128252] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:01.532 17:02:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.532 17:02:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:01.532 17:02:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.532 17:02:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:01.532 17:02:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.532 17:02:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:13:01.532 17:02:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.532 17:02:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:01.532 malloc0 00:13:01.532 17:02:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.532 
17:02:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:01.532 17:02:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.532 17:02:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:01.532 17:02:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.532 17:02:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:13:01.532 17:02:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:13:01.532 17:02:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:13:01.532 17:02:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:13:01.532 17:02:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:01.532 17:02:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:01.532 { 00:13:01.532 "params": { 00:13:01.532 "name": "Nvme$subsystem", 00:13:01.532 "trtype": "$TEST_TRANSPORT", 00:13:01.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:01.532 "adrfam": "ipv4", 00:13:01.532 "trsvcid": "$NVMF_PORT", 00:13:01.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:01.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:01.532 "hdgst": ${hdgst:-false}, 00:13:01.532 "ddgst": ${ddgst:-false} 00:13:01.532 }, 00:13:01.532 "method": "bdev_nvme_attach_controller" 00:13:01.532 } 00:13:01.532 EOF 00:13:01.532 )") 00:13:01.532 17:02:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:13:01.532 17:02:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:13:01.532 17:02:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:13:01.532 17:02:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:01.532 "params": { 00:13:01.532 "name": "Nvme1", 00:13:01.532 "trtype": "tcp", 00:13:01.532 "traddr": "10.0.0.2", 00:13:01.532 "adrfam": "ipv4", 00:13:01.532 "trsvcid": "4420", 00:13:01.532 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:01.532 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:01.532 "hdgst": false, 00:13:01.532 "ddgst": false 00:13:01.532 }, 00:13:01.532 "method": "bdev_nvme_attach_controller" 00:13:01.532 }' 00:13:01.532 [2024-07-12 17:02:01.212106] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:13:01.532 [2024-07-12 17:02:01.212185] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1099656 ] 00:13:01.788 EAL: No free 2048 kB hugepages reported on node 1 00:13:01.788 [2024-07-12 17:02:01.276655] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.788 [2024-07-12 17:02:01.387322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.044 Running I/O for 10 seconds... 
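The rpc_cmd calls traced above configure the whole zero-copy target: a TCP transport with zero-copy enabled, subsystem nqn.2016-06.io.spdk:cnode1, data and discovery listeners on 10.0.0.2:4420, and a 32 MiB malloc bdev attached as namespace 1. A minimal sketch of the same setup driven by hand with SPDK's scripts/rpc.py (assuming an nvmf_tgt is already running and answering on the default /var/tmp/spdk.sock, as in this run):

# sketch: reproduce the target configuration shown in the trace above
# (transport flags copied verbatim from the trace: TCP, in-capsule data size 0, zero-copy enabled)
./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0                            # 32 MiB bdev, 4096-byte blocks
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1    # attach as NSID 1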
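The first bdevperf pass then attaches to that subsystem over NVMe/TCP and runs a 10-second verify workload; the --json /dev/fd/62 argument in the traced command line is bash process substitution of gen_nvmf_target_json, the nvmf/common.sh helper whose printf output (a single bdev_nvme_attach_controller call against 10.0.0.2:4420) appears above. A rough hand-run equivalent, assuming that helper is sourced and the harness variables it reads (TEST_TRANSPORT, NVMF_FIRST_TARGET_IP, NVMF_PORT) are set:

# sketch: 10-second verify run, queue depth 128, 8 KiB I/O, against the exported namespace
./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192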
00:13:12.002 00:13:12.002 Latency(us) 00:13:12.002 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:12.002 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:13:12.002 Verification LBA range: start 0x0 length 0x1000 00:13:12.002 Nvme1n1 : 10.01 6458.43 50.46 0.00 0.00 19766.84 338.30 29321.29 00:13:12.002 =================================================================================================================== 00:13:12.002 Total : 6458.43 50.46 0.00 0.00 19766.84 338.30 29321.29 00:13:12.260 17:02:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1100910 00:13:12.260 17:02:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:13:12.260 17:02:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:12.260 17:02:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:13:12.260 17:02:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:13:12.260 17:02:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:13:12.260 17:02:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:13:12.260 17:02:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:12.260 17:02:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:12.260 { 00:13:12.260 "params": { 00:13:12.260 "name": "Nvme$subsystem", 00:13:12.260 "trtype": "$TEST_TRANSPORT", 00:13:12.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:12.260 "adrfam": "ipv4", 00:13:12.260 "trsvcid": "$NVMF_PORT", 00:13:12.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:12.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:12.260 "hdgst": ${hdgst:-false}, 00:13:12.260 "ddgst": ${ddgst:-false} 00:13:12.260 }, 00:13:12.260 "method": "bdev_nvme_attach_controller" 00:13:12.260 } 00:13:12.260 EOF 00:13:12.260 )") 00:13:12.260 [2024-07-12 17:02:11.912416] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.260 [2024-07-12 17:02:11.912458] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.260 17:02:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:13:12.260 17:02:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:13:12.260 17:02:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:13:12.260 17:02:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:12.260 "params": { 00:13:12.260 "name": "Nvme1", 00:13:12.260 "trtype": "tcp", 00:13:12.260 "traddr": "10.0.0.2", 00:13:12.260 "adrfam": "ipv4", 00:13:12.260 "trsvcid": "4420", 00:13:12.260 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:12.260 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:12.260 "hdgst": false, 00:13:12.260 "ddgst": false 00:13:12.260 }, 00:13:12.260 "method": "bdev_nvme_attach_controller" 00:13:12.260 }' 00:13:12.260 [2024-07-12 17:02:11.920359] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.260 [2024-07-12 17:02:11.920384] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.260 [2024-07-12 17:02:11.928391] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.260 [2024-07-12 17:02:11.928413] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.260 [2024-07-12 17:02:11.936394] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.260 [2024-07-12 17:02:11.936415] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.260 [2024-07-12 17:02:11.944418] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.260 [2024-07-12 17:02:11.944438] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.260 [2024-07-12 17:02:11.950338] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:13:12.260 [2024-07-12 17:02:11.950415] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1100910 ] 00:13:12.260 [2024-07-12 17:02:11.952450] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.260 [2024-07-12 17:02:11.952475] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.519 [2024-07-12 17:02:11.960463] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.519 [2024-07-12 17:02:11.960488] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.519 [2024-07-12 17:02:11.968480] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.519 [2024-07-12 17:02:11.968501] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.519 [2024-07-12 17:02:11.976501] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.519 [2024-07-12 17:02:11.976521] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.519 EAL: No free 2048 kB hugepages reported on node 1 00:13:12.519 [2024-07-12 17:02:11.984522] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.519 [2024-07-12 17:02:11.984542] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.519 [2024-07-12 17:02:11.992543] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.519 [2024-07-12 17:02:11.992562] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.519 [2024-07-12 17:02:12.000565] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.519 [2024-07-12 17:02:12.000585] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.519 [2024-07-12 17:02:12.008585] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.519 [2024-07-12 17:02:12.008611] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.519 [2024-07-12 17:02:12.010964] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.519 [2024-07-12 17:02:12.016631] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.519 [2024-07-12 17:02:12.016659] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.519 [2024-07-12 17:02:12.024678] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.519 [2024-07-12 17:02:12.024718] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.519 [2024-07-12 17:02:12.032654] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.519 [2024-07-12 17:02:12.032674] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.519 [2024-07-12 17:02:12.040675] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.519 [2024-07-12 17:02:12.040695] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.519 [2024-07-12 17:02:12.048696] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.519 [2024-07-12 17:02:12.048730] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.519 [2024-07-12 17:02:12.056732] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.519 [2024-07-12 17:02:12.056761] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.519 [2024-07-12 17:02:12.064760] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.519 [2024-07-12 17:02:12.064788] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.519 [2024-07-12 17:02:12.072839] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.519 [2024-07-12 17:02:12.072873] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.519 [2024-07-12 17:02:12.080850] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.519 [2024-07-12 17:02:12.080894] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.519 [2024-07-12 17:02:12.088840] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.519 [2024-07-12 17:02:12.088862] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.519 [2024-07-12 17:02:12.096845] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.519 [2024-07-12 17:02:12.096867] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.519 [2024-07-12 17:02:12.104876] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.519 [2024-07-12 17:02:12.104897] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.519 [2024-07-12 17:02:12.112901] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:13:12.519 [2024-07-12 17:02:12.112921] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.519 [2024-07-12 17:02:12.120920] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.519 [2024-07-12 17:02:12.120941] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.519 [2024-07-12 17:02:12.123458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.519 [2024-07-12 17:02:12.128942] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.519 [2024-07-12 17:02:12.128962] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.519 [2024-07-12 17:02:12.136969] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.519 [2024-07-12 17:02:12.136991] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.519 [2024-07-12 17:02:12.145034] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.519 [2024-07-12 17:02:12.145072] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.519 [2024-07-12 17:02:12.153059] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.519 [2024-07-12 17:02:12.153123] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.519 [2024-07-12 17:02:12.161082] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.519 [2024-07-12 17:02:12.161123] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.519 [2024-07-12 17:02:12.169117] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.519 [2024-07-12 17:02:12.169159] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.519 [2024-07-12 17:02:12.177140] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.519 [2024-07-12 17:02:12.177180] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.519 [2024-07-12 17:02:12.185139] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.519 [2024-07-12 17:02:12.185178] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.519 [2024-07-12 17:02:12.193126] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.519 [2024-07-12 17:02:12.193147] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.519 [2024-07-12 17:02:12.201169] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.519 [2024-07-12 17:02:12.201206] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.519 [2024-07-12 17:02:12.209224] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.519 [2024-07-12 17:02:12.209270] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.794 [2024-07-12 17:02:12.217227] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.794 [2024-07-12 17:02:12.217265] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.794 [2024-07-12 17:02:12.225214] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:13:12.794 [2024-07-12 17:02:12.225236] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.794 [2024-07-12 17:02:12.233233] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.794 [2024-07-12 17:02:12.233253] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.794 [2024-07-12 17:02:12.241255] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.794 [2024-07-12 17:02:12.241275] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.794 [2024-07-12 17:02:12.249285] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.794 [2024-07-12 17:02:12.249309] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.794 [2024-07-12 17:02:12.257307] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.794 [2024-07-12 17:02:12.257329] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.794 [2024-07-12 17:02:12.265330] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.794 [2024-07-12 17:02:12.265351] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.794 [2024-07-12 17:02:12.273353] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.794 [2024-07-12 17:02:12.273375] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.794 [2024-07-12 17:02:12.281375] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.794 [2024-07-12 17:02:12.281396] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.794 [2024-07-12 17:02:12.289397] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.794 [2024-07-12 17:02:12.289418] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.794 [2024-07-12 17:02:12.297416] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.794 [2024-07-12 17:02:12.297436] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.794 [2024-07-12 17:02:12.305443] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.794 [2024-07-12 17:02:12.305476] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.794 [2024-07-12 17:02:12.313461] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.794 [2024-07-12 17:02:12.313483] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.794 Running I/O for 5 seconds... 
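From here until the 5-second run finishes, the trace is dominated by repeating pairs of spdk_nvmf_subsystem_add_ns_ext "Requested NSID 1 already in use" and nvmf_rpc_ns_paused "Unable to add namespace" errors: while the second bdevperf job (perfpid 1100910; -t 5 -q 128 -w randrw -M 50 -o 8192) keeps I/O in flight, the harness keeps re-issuing nvmf_subsystem_add_ns for a namespace that is already attached, so each attempt briefly pauses the subsystem (hence the nvmf_rpc_ns_paused callback in the message) and is rejected, and the run simply continues. A hypothetical loop that reproduces exactly this error pattern (an illustration, not the literal zcopy.sh code):

# sketch: re-adding an already-attached namespace while the perf job runs
# produces one error pair per iteration, as seen in the trace
while kill -0 "$perfpid" 2> /dev/null; do
	rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done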
00:13:12.794 [2024-07-12 17:02:12.321480] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.794 [2024-07-12 17:02:12.321500] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.795 [2024-07-12 17:02:12.335710] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.795 [2024-07-12 17:02:12.335758] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.795 [2024-07-12 17:02:12.346115] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.795 [2024-07-12 17:02:12.346140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.795 [2024-07-12 17:02:12.356238] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.795 [2024-07-12 17:02:12.356262] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.795 [2024-07-12 17:02:12.366356] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.795 [2024-07-12 17:02:12.366380] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.795 [2024-07-12 17:02:12.376656] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.795 [2024-07-12 17:02:12.376680] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.795 [2024-07-12 17:02:12.387028] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.795 [2024-07-12 17:02:12.387053] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.795 [2024-07-12 17:02:12.398602] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.795 [2024-07-12 17:02:12.398625] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.795 [2024-07-12 17:02:12.407606] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.795 [2024-07-12 17:02:12.407630] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.795 [2024-07-12 17:02:12.417800] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.795 [2024-07-12 17:02:12.417825] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.795 [2024-07-12 17:02:12.427953] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.795 [2024-07-12 17:02:12.427979] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.795 [2024-07-12 17:02:12.437769] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.795 [2024-07-12 17:02:12.437795] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.795 [2024-07-12 17:02:12.447715] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.795 [2024-07-12 17:02:12.447765] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.795 [2024-07-12 17:02:12.457618] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.795 [2024-07-12 17:02:12.457642] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.795 [2024-07-12 17:02:12.467353] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.795 
[2024-07-12 17:02:12.467377] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.795 [2024-07-12 17:02:12.477908] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.795 [2024-07-12 17:02:12.477934] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.057 [2024-07-12 17:02:12.490440] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.057 [2024-07-12 17:02:12.490471] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.057 [2024-07-12 17:02:12.499369] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.057 [2024-07-12 17:02:12.499401] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.057 [2024-07-12 17:02:12.511424] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.057 [2024-07-12 17:02:12.511448] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.057 [2024-07-12 17:02:12.520906] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.057 [2024-07-12 17:02:12.520932] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.057 [2024-07-12 17:02:12.530792] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.057 [2024-07-12 17:02:12.530818] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.057 [2024-07-12 17:02:12.540756] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.057 [2024-07-12 17:02:12.540782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.057 [2024-07-12 17:02:12.550679] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.057 [2024-07-12 17:02:12.550703] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.057 [2024-07-12 17:02:12.560526] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.057 [2024-07-12 17:02:12.560550] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.057 [2024-07-12 17:02:12.570604] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.057 [2024-07-12 17:02:12.570628] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.057 [2024-07-12 17:02:12.580576] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.057 [2024-07-12 17:02:12.580600] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.057 [2024-07-12 17:02:12.590279] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.057 [2024-07-12 17:02:12.590303] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.057 [2024-07-12 17:02:12.600036] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.057 [2024-07-12 17:02:12.600060] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.057 [2024-07-12 17:02:12.609964] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.057 [2024-07-12 17:02:12.609990] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.057 [2024-07-12 17:02:12.619514] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.057 [2024-07-12 17:02:12.619537] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.057 [2024-07-12 17:02:12.629427] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.057 [2024-07-12 17:02:12.629451] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.057 [2024-07-12 17:02:12.639067] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.057 [2024-07-12 17:02:12.639105] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.057 [2024-07-12 17:02:12.648795] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.057 [2024-07-12 17:02:12.648821] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.057 [2024-07-12 17:02:12.658618] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.057 [2024-07-12 17:02:12.658642] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.057 [2024-07-12 17:02:12.668316] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.057 [2024-07-12 17:02:12.668339] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.057 [2024-07-12 17:02:12.678558] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.057 [2024-07-12 17:02:12.678582] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.057 [2024-07-12 17:02:12.688532] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.057 [2024-07-12 17:02:12.688557] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.057 [2024-07-12 17:02:12.698453] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.057 [2024-07-12 17:02:12.698476] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.057 [2024-07-12 17:02:12.708210] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.057 [2024-07-12 17:02:12.708234] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.057 [2024-07-12 17:02:12.718095] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.057 [2024-07-12 17:02:12.718119] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.057 [2024-07-12 17:02:12.728330] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.057 [2024-07-12 17:02:12.728354] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.057 [2024-07-12 17:02:12.740263] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.057 [2024-07-12 17:02:12.740287] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.057 [2024-07-12 17:02:12.749835] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.058 [2024-07-12 17:02:12.749861] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.315 [2024-07-12 17:02:12.761590] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.315 [2024-07-12 17:02:12.761615] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.315 [2024-07-12 17:02:12.772094] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.315 [2024-07-12 17:02:12.772118] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.315 [2024-07-12 17:02:12.782624] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.315 [2024-07-12 17:02:12.782648] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.316 [2024-07-12 17:02:12.795141] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.316 [2024-07-12 17:02:12.795165] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.316 [2024-07-12 17:02:12.806621] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.316 [2024-07-12 17:02:12.806645] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.316 [2024-07-12 17:02:12.815988] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.316 [2024-07-12 17:02:12.816013] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.316 [2024-07-12 17:02:12.826770] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.316 [2024-07-12 17:02:12.826810] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.316 [2024-07-12 17:02:12.838627] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.316 [2024-07-12 17:02:12.838651] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.316 [2024-07-12 17:02:12.847757] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.316 [2024-07-12 17:02:12.847782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.316 [2024-07-12 17:02:12.858972] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.316 [2024-07-12 17:02:12.858998] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.316 [2024-07-12 17:02:12.871121] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.316 [2024-07-12 17:02:12.871146] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.316 [2024-07-12 17:02:12.880632] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.316 [2024-07-12 17:02:12.880656] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.316 [2024-07-12 17:02:12.891124] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.316 [2024-07-12 17:02:12.891148] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.316 [2024-07-12 17:02:12.902771] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.316 [2024-07-12 17:02:12.902796] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.316 [2024-07-12 17:02:12.912282] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.316 [2024-07-12 17:02:12.912306] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.316 [2024-07-12 17:02:12.922491] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.316 [2024-07-12 17:02:12.922515] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.316 [2024-07-12 17:02:12.932877] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.316 [2024-07-12 17:02:12.932909] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.316 [2024-07-12 17:02:12.943506] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.316 [2024-07-12 17:02:12.943530] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.316 [2024-07-12 17:02:12.953788] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.316 [2024-07-12 17:02:12.953814] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.316 [2024-07-12 17:02:12.965745] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.316 [2024-07-12 17:02:12.965770] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.316 [2024-07-12 17:02:12.975191] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.316 [2024-07-12 17:02:12.975215] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.316 [2024-07-12 17:02:12.985293] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.316 [2024-07-12 17:02:12.985318] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.316 [2024-07-12 17:02:12.995631] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.316 [2024-07-12 17:02:12.995656] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.316 [2024-07-12 17:02:13.008458] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.316 [2024-07-12 17:02:13.008485] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.573 [2024-07-12 17:02:13.018588] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.573 [2024-07-12 17:02:13.018613] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.573 [2024-07-12 17:02:13.028855] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.573 [2024-07-12 17:02:13.028881] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.573 [2024-07-12 17:02:13.038964] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.573 [2024-07-12 17:02:13.038990] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.573 [2024-07-12 17:02:13.048847] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.573 [2024-07-12 17:02:13.048873] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.573 [2024-07-12 17:02:13.059358] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.573 [2024-07-12 17:02:13.059383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.573 [2024-07-12 17:02:13.071381] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.573 [2024-07-12 17:02:13.071405] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.573 [2024-07-12 17:02:13.081031] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.573 [2024-07-12 17:02:13.081057] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.573 [2024-07-12 17:02:13.091352] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.573 [2024-07-12 17:02:13.091377] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.573 [2024-07-12 17:02:13.102080] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.573 [2024-07-12 17:02:13.102106] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.573 [2024-07-12 17:02:13.114435] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.573 [2024-07-12 17:02:13.114460] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.574 [2024-07-12 17:02:13.126086] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.574 [2024-07-12 17:02:13.126125] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.574 [2024-07-12 17:02:13.134924] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.574 [2024-07-12 17:02:13.134951] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.574 [2024-07-12 17:02:13.147149] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.574 [2024-07-12 17:02:13.147173] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.574 [2024-07-12 17:02:13.156765] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.574 [2024-07-12 17:02:13.156816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.574 [2024-07-12 17:02:13.166764] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.574 [2024-07-12 17:02:13.166804] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.574 [2024-07-12 17:02:13.176688] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.574 [2024-07-12 17:02:13.176712] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.574 [2024-07-12 17:02:13.187373] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.574 [2024-07-12 17:02:13.187398] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.574 [2024-07-12 17:02:13.199475] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.574 [2024-07-12 17:02:13.199500] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.574 [2024-07-12 17:02:13.211436] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.574 [2024-07-12 17:02:13.211469] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.574 [2024-07-12 17:02:13.220397] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.574 [2024-07-12 17:02:13.220421] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.574 [2024-07-12 17:02:13.231610] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.574 [2024-07-12 17:02:13.231635] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.574 [2024-07-12 17:02:13.243652] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.574 [2024-07-12 17:02:13.243676] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.574 [2024-07-12 17:02:13.254330] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.574 [2024-07-12 17:02:13.254353] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.574 [2024-07-12 17:02:13.263060] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.574 [2024-07-12 17:02:13.263085] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.831 [2024-07-12 17:02:13.274658] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.831 [2024-07-12 17:02:13.274683] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.831 [2024-07-12 17:02:13.286391] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.831 [2024-07-12 17:02:13.286415] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.831 [2024-07-12 17:02:13.295757] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.831 [2024-07-12 17:02:13.295796] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.831 [2024-07-12 17:02:13.305962] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.831 [2024-07-12 17:02:13.305987] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.831 [2024-07-12 17:02:13.316247] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.831 [2024-07-12 17:02:13.316271] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.831 [2024-07-12 17:02:13.326756] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.831 [2024-07-12 17:02:13.326796] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.831 [2024-07-12 17:02:13.337139] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.831 [2024-07-12 17:02:13.337164] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.831 [2024-07-12 17:02:13.349599] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.831 [2024-07-12 17:02:13.349623] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.831 [2024-07-12 17:02:13.358951] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.831 [2024-07-12 17:02:13.358977] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.831 [2024-07-12 17:02:13.370998] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.831 [2024-07-12 17:02:13.371039] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.831 [2024-07-12 17:02:13.380841] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.831 [2024-07-12 17:02:13.380866] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.831 [2024-07-12 17:02:13.391041] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.831 [2024-07-12 17:02:13.391065] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.831 [2024-07-12 17:02:13.401399] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.831 [2024-07-12 17:02:13.401422] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.831 [2024-07-12 17:02:13.413286] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.831 [2024-07-12 17:02:13.413310] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.831 [2024-07-12 17:02:13.423226] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.831 [2024-07-12 17:02:13.423251] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.832 [2024-07-12 17:02:13.433414] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.832 [2024-07-12 17:02:13.433438] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.832 [2024-07-12 17:02:13.443482] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.832 [2024-07-12 17:02:13.443506] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.832 [2024-07-12 17:02:13.454488] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.832 [2024-07-12 17:02:13.454512] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.832 [2024-07-12 17:02:13.465092] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.832 [2024-07-12 17:02:13.465117] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.832 [2024-07-12 17:02:13.475577] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.832 [2024-07-12 17:02:13.475601] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.832 [2024-07-12 17:02:13.487452] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.832 [2024-07-12 17:02:13.487482] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.832 [2024-07-12 17:02:13.497573] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.832 [2024-07-12 17:02:13.497597] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.832 [2024-07-12 17:02:13.507535] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.832 [2024-07-12 17:02:13.507559] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.832 [2024-07-12 17:02:13.517240] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.832 [2024-07-12 17:02:13.517264] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.090 [2024-07-12 17:02:13.528397] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.090 [2024-07-12 17:02:13.528422] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.090 [2024-07-12 17:02:13.540663] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.090 [2024-07-12 17:02:13.540688] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.090 [2024-07-12 17:02:13.550438] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.090 [2024-07-12 17:02:13.550461] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.090 [2024-07-12 17:02:13.561287] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.090 [2024-07-12 17:02:13.561311] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.090 [2024-07-12 17:02:13.573107] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.090 [2024-07-12 17:02:13.573131] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.090 [2024-07-12 17:02:13.582590] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.090 [2024-07-12 17:02:13.582614] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.090 [2024-07-12 17:02:13.593541] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.090 [2024-07-12 17:02:13.593566] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.090 [2024-07-12 17:02:13.605943] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.090 [2024-07-12 17:02:13.605968] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.090 [2024-07-12 17:02:13.617709] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.090 [2024-07-12 17:02:13.617756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.090 [2024-07-12 17:02:13.626777] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.090 [2024-07-12 17:02:13.626817] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.090 [2024-07-12 17:02:13.637353] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.090 [2024-07-12 17:02:13.637377] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.090 [2024-07-12 17:02:13.648159] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.090 [2024-07-12 17:02:13.648183] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.090 [2024-07-12 17:02:13.660555] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.090 [2024-07-12 17:02:13.660579] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.090 [2024-07-12 17:02:13.670184] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.090 [2024-07-12 17:02:13.670208] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.090 [2024-07-12 17:02:13.680816] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.090 [2024-07-12 17:02:13.680841] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.090 [2024-07-12 17:02:13.691132] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.090 [2024-07-12 17:02:13.691163] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.090 [2024-07-12 17:02:13.703384] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.090 [2024-07-12 17:02:13.703408] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.090 [2024-07-12 17:02:13.713389] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.090 [2024-07-12 17:02:13.713413] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.090 [2024-07-12 17:02:13.723808] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.090 [2024-07-12 17:02:13.723835] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.090 [2024-07-12 17:02:13.734241] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.090 [2024-07-12 17:02:13.734266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.090 [2024-07-12 17:02:13.746411] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.090 [2024-07-12 17:02:13.746435] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.090 [2024-07-12 17:02:13.755826] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.090 [2024-07-12 17:02:13.755852] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.090 [2024-07-12 17:02:13.766167] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.090 [2024-07-12 17:02:13.766193] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.090 [2024-07-12 17:02:13.776781] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.090 [2024-07-12 17:02:13.776822] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.348 [2024-07-12 17:02:13.788554] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.348 [2024-07-12 17:02:13.788580] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.348 [2024-07-12 17:02:13.799061] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.348 [2024-07-12 17:02:13.799103] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.348 [2024-07-12 17:02:13.811285] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.348 [2024-07-12 17:02:13.811310] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.348 [2024-07-12 17:02:13.821992] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.348 [2024-07-12 17:02:13.822033] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.349 [2024-07-12 17:02:13.833172] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.349 [2024-07-12 17:02:13.833197] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.349 [2024-07-12 17:02:13.844330] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.349 [2024-07-12 17:02:13.844354] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.349 [2024-07-12 17:02:13.855171] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.349 [2024-07-12 17:02:13.855196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.349 [2024-07-12 17:02:13.865915] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.349 [2024-07-12 17:02:13.865943] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.349 [2024-07-12 17:02:13.877795] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.349 [2024-07-12 17:02:13.877822] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.349 [2024-07-12 17:02:13.887247] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.349 [2024-07-12 17:02:13.887272] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.349 [2024-07-12 17:02:13.898275] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.349 [2024-07-12 17:02:13.898307] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.349 [2024-07-12 17:02:13.909295] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.349 [2024-07-12 17:02:13.909320] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.349 [2024-07-12 17:02:13.920147] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.349 [2024-07-12 17:02:13.920172] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.349 [2024-07-12 17:02:13.932328] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.349 [2024-07-12 17:02:13.932353] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.349 [2024-07-12 17:02:13.942768] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.349 [2024-07-12 17:02:13.942809] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.349 [2024-07-12 17:02:13.953553] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.349 [2024-07-12 17:02:13.953578] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.349 [2024-07-12 17:02:13.964249] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.349 [2024-07-12 17:02:13.964274] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.349 [2024-07-12 17:02:13.975119] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.349 [2024-07-12 17:02:13.975144] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.349 [2024-07-12 17:02:13.987715] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.349 [2024-07-12 17:02:13.987766] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.349 [2024-07-12 17:02:13.998220] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.349 [2024-07-12 17:02:13.998244] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.349 [2024-07-12 17:02:14.008575] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.349 [2024-07-12 17:02:14.008600] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.349 [2024-07-12 17:02:14.019420] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.349 [2024-07-12 17:02:14.019444] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.349 [2024-07-12 17:02:14.030402] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.349 [2024-07-12 17:02:14.030427] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.349 [2024-07-12 17:02:14.041491] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.349 [2024-07-12 17:02:14.041517] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.607 [2024-07-12 17:02:14.052803] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.607 [2024-07-12 17:02:14.052831] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.608 [2024-07-12 17:02:14.065694] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.608 [2024-07-12 17:02:14.065733] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.608 [2024-07-12 17:02:14.077801] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.608 [2024-07-12 17:02:14.077829] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.608 [2024-07-12 17:02:14.088043] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.608 [2024-07-12 17:02:14.088069] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.608 [2024-07-12 17:02:14.098904] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.608 [2024-07-12 17:02:14.098932] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.608 [2024-07-12 17:02:14.111533] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.608 [2024-07-12 17:02:14.111567] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.608 [2024-07-12 17:02:14.123253] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.608 [2024-07-12 17:02:14.123278] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.608 [2024-07-12 17:02:14.132927] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.608 [2024-07-12 17:02:14.132955] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.608 [2024-07-12 17:02:14.144106] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.608 [2024-07-12 17:02:14.144133] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.608 [2024-07-12 17:02:14.154669] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.608 [2024-07-12 17:02:14.154702] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.608 [2024-07-12 17:02:14.165369] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.608 [2024-07-12 17:02:14.165406] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.608 [2024-07-12 17:02:14.177279] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.608 [2024-07-12 17:02:14.177305] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.608 [2024-07-12 17:02:14.186803] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.608 [2024-07-12 17:02:14.186830] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.608 [2024-07-12 17:02:14.197462] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.608 [2024-07-12 17:02:14.197488] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.608 [2024-07-12 17:02:14.209457] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.608 [2024-07-12 17:02:14.209482] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.608 [2024-07-12 17:02:14.219054] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.608 [2024-07-12 17:02:14.219080] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.608 [2024-07-12 17:02:14.230158] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.608 [2024-07-12 17:02:14.230184] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.608 [2024-07-12 17:02:14.240707] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.608 [2024-07-12 17:02:14.240754] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.608 [2024-07-12 17:02:14.251594] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.608 [2024-07-12 17:02:14.251619] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.608 [2024-07-12 17:02:14.263419] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.608 [2024-07-12 17:02:14.263443] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.608 [2024-07-12 17:02:14.273244] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.608 [2024-07-12 17:02:14.273268] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.608 [2024-07-12 17:02:14.283931] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.608 [2024-07-12 17:02:14.283957] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.608 [2024-07-12 17:02:14.293483] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.608 [2024-07-12 17:02:14.293507] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.867 [2024-07-12 17:02:14.304173] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.867 [2024-07-12 17:02:14.304214] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.867 [2024-07-12 17:02:14.316415] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.867 [2024-07-12 17:02:14.316440] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.867 [2024-07-12 17:02:14.325974] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.867 [2024-07-12 17:02:14.326005] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.867 [2024-07-12 17:02:14.336333] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.867 [2024-07-12 17:02:14.336358] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.867 [2024-07-12 17:02:14.348368] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.867 [2024-07-12 17:02:14.348392] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.867 [2024-07-12 17:02:14.357792] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.867 [2024-07-12 17:02:14.357818] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.867 [2024-07-12 17:02:14.369244] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.867 [2024-07-12 17:02:14.369269] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.867 [2024-07-12 17:02:14.378712] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.867 [2024-07-12 17:02:14.378758] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.867 [2024-07-12 17:02:14.389149] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.867 [2024-07-12 17:02:14.389187] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.867 [2024-07-12 17:02:14.401696] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.867 [2024-07-12 17:02:14.401735] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.867 [2024-07-12 17:02:14.411596] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.867 [2024-07-12 17:02:14.411620] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.867 [2024-07-12 17:02:14.421998] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.867 [2024-07-12 17:02:14.422037] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.867 [2024-07-12 17:02:14.432148] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.867 [2024-07-12 17:02:14.432172] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.867 [2024-07-12 17:02:14.442187] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.867 [2024-07-12 17:02:14.442212] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.867 [2024-07-12 17:02:14.452561] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.867 [2024-07-12 17:02:14.452585] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.867 [2024-07-12 17:02:14.462753] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.867 [2024-07-12 17:02:14.462779] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.867 [2024-07-12 17:02:14.472978] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.867 [2024-07-12 17:02:14.473005] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.867 [2024-07-12 17:02:14.483324] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.867 [2024-07-12 17:02:14.483348] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.867 [2024-07-12 17:02:14.494220] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.867 [2024-07-12 17:02:14.494245] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.867 [2024-07-12 17:02:14.504456] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.867 [2024-07-12 17:02:14.504480] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.867 [2024-07-12 17:02:14.514834] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.867 [2024-07-12 17:02:14.514860] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.867 [2024-07-12 17:02:14.525261] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.867 [2024-07-12 17:02:14.525285] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.867 [2024-07-12 17:02:14.535907] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.867 [2024-07-12 17:02:14.535933] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.867 [2024-07-12 17:02:14.547573] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.867 [2024-07-12 17:02:14.547597] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.867 [2024-07-12 17:02:14.557446] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.867 [2024-07-12 17:02:14.557470] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.126 [2024-07-12 17:02:14.568886] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.126 [2024-07-12 17:02:14.568913] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.126 [2024-07-12 17:02:14.581934] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.126 [2024-07-12 17:02:14.581960] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.126 [2024-07-12 17:02:14.594030] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.126 [2024-07-12 17:02:14.594055] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.126 [2024-07-12 17:02:14.604122] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.126 [2024-07-12 17:02:14.604146] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.126 [2024-07-12 17:02:14.614242] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.126 [2024-07-12 17:02:14.614266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.126 [2024-07-12 17:02:14.625162] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.126 [2024-07-12 17:02:14.625186] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.126 [2024-07-12 17:02:14.636180] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.126 [2024-07-12 17:02:14.636204] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.126 [2024-07-12 17:02:14.647268] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.126 [2024-07-12 17:02:14.647292] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.126 [2024-07-12 17:02:14.657960] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.126 [2024-07-12 17:02:14.657986] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.126 [2024-07-12 17:02:14.668521] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.126 [2024-07-12 17:02:14.668545] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.126 [2024-07-12 17:02:14.680801] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.126 [2024-07-12 17:02:14.680826] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.126 [2024-07-12 17:02:14.690091] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.126 [2024-07-12 17:02:14.690115] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.126 [2024-07-12 17:02:14.700995] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.126 [2024-07-12 17:02:14.701034] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.126 [2024-07-12 17:02:14.712563] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.126 [2024-07-12 17:02:14.712588] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.126 [2024-07-12 17:02:14.722326] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.126 [2024-07-12 17:02:14.722350] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.126 [2024-07-12 17:02:14.733067] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.126 [2024-07-12 17:02:14.733107] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.126 [2024-07-12 17:02:14.743498] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.126 [2024-07-12 17:02:14.743522] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.126 [2024-07-12 17:02:14.756000] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.126 [2024-07-12 17:02:14.756041] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.126 [2024-07-12 17:02:14.766381] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.126 [2024-07-12 17:02:14.766406] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.126 [2024-07-12 17:02:14.776559] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.126 [2024-07-12 17:02:14.776583] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.126 [2024-07-12 17:02:14.786866] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.126 [2024-07-12 17:02:14.786891] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.126 [2024-07-12 17:02:14.797145] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.126 [2024-07-12 17:02:14.797169] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.126 [2024-07-12 17:02:14.807181] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.126 [2024-07-12 17:02:14.807205] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.126 [2024-07-12 17:02:14.817793] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.126 [2024-07-12 17:02:14.817820] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.384 [2024-07-12 17:02:14.830559] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.384 [2024-07-12 17:02:14.830584] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.384 [2024-07-12 17:02:14.840643] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.384 [2024-07-12 17:02:14.840667] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.384 [2024-07-12 17:02:14.850610] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.384 [2024-07-12 17:02:14.850634] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.384 [2024-07-12 17:02:14.860646] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.384 [2024-07-12 17:02:14.860670] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.384 [2024-07-12 17:02:14.871211] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.384 [2024-07-12 17:02:14.871235] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.384 [2024-07-12 17:02:14.884432] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.384 [2024-07-12 17:02:14.884456] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.384 [2024-07-12 17:02:14.894712] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.384 [2024-07-12 17:02:14.894759] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.384 [2024-07-12 17:02:14.905003] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.384 [2024-07-12 17:02:14.905042] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.384 [2024-07-12 17:02:14.915300] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.384 [2024-07-12 17:02:14.915324] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.384 [2024-07-12 17:02:14.925435] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.384 [2024-07-12 17:02:14.925459] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.384 [2024-07-12 17:02:14.935626] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.384 [2024-07-12 17:02:14.935650] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.384 [2024-07-12 17:02:14.946875] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.384 [2024-07-12 17:02:14.946902] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.384 [2024-07-12 17:02:14.957313] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.384 [2024-07-12 17:02:14.957337] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.384 [2024-07-12 17:02:14.969291] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.384 [2024-07-12 17:02:14.969316] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.384 [2024-07-12 17:02:14.980828] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.384 [2024-07-12 17:02:14.980853] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.384 [2024-07-12 17:02:14.990687] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.384 [2024-07-12 17:02:14.990711] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.384 [2024-07-12 17:02:15.002483] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.384 [2024-07-12 17:02:15.002507] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.384 [2024-07-12 17:02:15.013041] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.384 [2024-07-12 17:02:15.013066] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.384 [2024-07-12 17:02:15.025807] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.384 [2024-07-12 17:02:15.025833] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.384 [2024-07-12 17:02:15.035655] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.384 [2024-07-12 17:02:15.035679] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.384 [2024-07-12 17:02:15.046109] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.384 [2024-07-12 17:02:15.046133] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.384 [2024-07-12 17:02:15.056352] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.384 [2024-07-12 17:02:15.056376] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.384 [2024-07-12 17:02:15.066457] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.384 [2024-07-12 17:02:15.066480] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.384 [2024-07-12 17:02:15.077117] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.384 [2024-07-12 17:02:15.077143] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.642 [2024-07-12 17:02:15.087335] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.642 [2024-07-12 17:02:15.087359] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.642 [2024-07-12 17:02:15.097309] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.642 [2024-07-12 17:02:15.097333] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.642 [2024-07-12 17:02:15.107987] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.642 [2024-07-12 17:02:15.108013] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.642 [2024-07-12 17:02:15.118208] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.642 [2024-07-12 17:02:15.118239] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.642 [2024-07-12 17:02:15.130376] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.642 [2024-07-12 17:02:15.130400] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.642 [2024-07-12 17:02:15.139922] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.642 [2024-07-12 17:02:15.139948] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.642 [2024-07-12 17:02:15.150691] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.642 [2024-07-12 17:02:15.150715] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.642 [2024-07-12 17:02:15.161054] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.642 [2024-07-12 17:02:15.161092] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.642 [2024-07-12 17:02:15.171578] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.642 [2024-07-12 17:02:15.171602] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.642 [2024-07-12 17:02:15.185123] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.642 [2024-07-12 17:02:15.185147] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.642 [2024-07-12 17:02:15.196341] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.642 [2024-07-12 17:02:15.196365] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.642 [2024-07-12 17:02:15.205649] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.642 [2024-07-12 17:02:15.205672] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.642 [2024-07-12 17:02:15.216496] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.642 [2024-07-12 17:02:15.216520] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.642 [2024-07-12 17:02:15.226734] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.642 [2024-07-12 17:02:15.226769] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.642 [2024-07-12 17:02:15.237161] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.642 [2024-07-12 17:02:15.237185] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.642 [2024-07-12 17:02:15.249675] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.642 [2024-07-12 17:02:15.249700] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.642 [2024-07-12 17:02:15.259587] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.642 [2024-07-12 17:02:15.259611] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.642 [2024-07-12 17:02:15.269674] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.642 [2024-07-12 17:02:15.269699] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.642 [2024-07-12 17:02:15.282668] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.642 [2024-07-12 17:02:15.282694] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.642 [2024-07-12 17:02:15.292771] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.642 [2024-07-12 17:02:15.292812] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.642 [2024-07-12 17:02:15.302977] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.642 [2024-07-12 17:02:15.303002] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.642 [2024-07-12 17:02:15.313143] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.642 [2024-07-12 17:02:15.313167] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.642 [2024-07-12 17:02:15.323221] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.642 [2024-07-12 17:02:15.323254] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.642 [2024-07-12 17:02:15.333750] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.642 [2024-07-12 17:02:15.333776] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.899 [2024-07-12 17:02:15.344157] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.899 [2024-07-12 17:02:15.344181] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.899 [2024-07-12 17:02:15.354293] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.899 [2024-07-12 17:02:15.354317] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.899 [2024-07-12 17:02:15.367536] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.899 [2024-07-12 17:02:15.367560] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.899 [2024-07-12 17:02:15.378847] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.899 [2024-07-12 17:02:15.378872] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.899 [2024-07-12 17:02:15.388177] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.899 [2024-07-12 17:02:15.388200] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.899 [2024-07-12 17:02:15.397876] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.899 [2024-07-12 17:02:15.397901] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.899 [2024-07-12 17:02:15.407683] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.900 [2024-07-12 17:02:15.407706] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.900 [2024-07-12 17:02:15.417948] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.900 [2024-07-12 17:02:15.417974] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.900 [2024-07-12 17:02:15.428311] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.900 [2024-07-12 17:02:15.428336] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.900 [2024-07-12 17:02:15.438339] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.900 [2024-07-12 17:02:15.438363] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.900 [2024-07-12 17:02:15.448111] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.900 [2024-07-12 17:02:15.448135] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.900 [2024-07-12 17:02:15.457952] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.900 [2024-07-12 17:02:15.457977] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.900 [2024-07-12 17:02:15.468193] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.900 [2024-07-12 17:02:15.468217] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.900 [2024-07-12 17:02:15.478170] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.900 [2024-07-12 17:02:15.478196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.900 [2024-07-12 17:02:15.487987] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.900 [2024-07-12 17:02:15.488027] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.900 [2024-07-12 17:02:15.498184] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.900 [2024-07-12 17:02:15.498208] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.900 [2024-07-12 17:02:15.508278] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.900 [2024-07-12 17:02:15.508303] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.900 [2024-07-12 17:02:15.518466] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.900 [2024-07-12 17:02:15.518497] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.900 [2024-07-12 17:02:15.528520] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.900 [2024-07-12 17:02:15.528545] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.900 [2024-07-12 17:02:15.538486] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.900 [2024-07-12 17:02:15.538510] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.900 [2024-07-12 17:02:15.548359] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.900 [2024-07-12 17:02:15.548383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.900 [2024-07-12 17:02:15.558311] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.900 [2024-07-12 17:02:15.558335] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.900 [2024-07-12 17:02:15.568007] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.900 [2024-07-12 17:02:15.568046] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.900 [2024-07-12 17:02:15.579806] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.900 [2024-07-12 17:02:15.579832] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.900 [2024-07-12 17:02:15.589866] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.900 [2024-07-12 17:02:15.589891] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.157 [2024-07-12 17:02:15.600523] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.157 [2024-07-12 17:02:15.600547] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.157 [2024-07-12 17:02:15.612770] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.157 [2024-07-12 17:02:15.612809] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.157 [2024-07-12 17:02:15.621588] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.157 [2024-07-12 17:02:15.621612] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.157 [2024-07-12 17:02:15.635116] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.157 [2024-07-12 17:02:15.635140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.157 [2024-07-12 17:02:15.644869] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.157 [2024-07-12 17:02:15.644894] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.157 [2024-07-12 17:02:15.654537] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.157 [2024-07-12 17:02:15.654560] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.157 [2024-07-12 17:02:15.664862] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.157 [2024-07-12 17:02:15.664888] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.157 [2024-07-12 17:02:15.675389] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.157 [2024-07-12 17:02:15.675413] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.157 [2024-07-12 17:02:15.685439] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.157 [2024-07-12 17:02:15.685462] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.157 [2024-07-12 17:02:15.695734] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.157 [2024-07-12 17:02:15.695769] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.157 [2024-07-12 17:02:15.706036] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.157 [2024-07-12 17:02:15.706061] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.157 [2024-07-12 17:02:15.716443] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.157 [2024-07-12 17:02:15.716474] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.157 [2024-07-12 17:02:15.729421] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.157 [2024-07-12 17:02:15.729446] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.157 [2024-07-12 17:02:15.739408] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.157 [2024-07-12 17:02:15.739432] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.157 [2024-07-12 17:02:15.750000] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.157 [2024-07-12 17:02:15.750041] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.158 [2024-07-12 17:02:15.762697] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.158 [2024-07-12 17:02:15.762736] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.158 [2024-07-12 17:02:15.772298] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.158 [2024-07-12 17:02:15.772322] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.158 [2024-07-12 17:02:15.782935] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.158 [2024-07-12 17:02:15.782961] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.158 [2024-07-12 17:02:15.793213] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.158 [2024-07-12 17:02:15.793238] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.158 [2024-07-12 17:02:15.803361] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.158 [2024-07-12 17:02:15.803385] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.158 [2024-07-12 17:02:15.813476] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.158 [2024-07-12 17:02:15.813500] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.158 [2024-07-12 17:02:15.824281] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.158 [2024-07-12 17:02:15.824305] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.158 [2024-07-12 17:02:15.836460] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.158 [2024-07-12 17:02:15.836485] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.158 [2024-07-12 17:02:15.845849] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.158 [2024-07-12 17:02:15.845875] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.429 [2024-07-12 17:02:15.856577] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.429 [2024-07-12 17:02:15.856601] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.429 [2024-07-12 17:02:15.867199] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.429 [2024-07-12 17:02:15.867223] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.429 [2024-07-12 17:02:15.879382] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.429 [2024-07-12 17:02:15.879405] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.429 [2024-07-12 17:02:15.889376] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.429 [2024-07-12 17:02:15.889401] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.429 [2024-07-12 17:02:15.899995] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.429 [2024-07-12 17:02:15.900035] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.429 [2024-07-12 17:02:15.910384] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.429 [2024-07-12 17:02:15.910408] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.429 [2024-07-12 17:02:15.920252] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.429 [2024-07-12 17:02:15.920281] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.429 [2024-07-12 17:02:15.930736] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.429 [2024-07-12 17:02:15.930769] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.429 [2024-07-12 17:02:15.941189] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.429 [2024-07-12 17:02:15.941213] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.429 [2024-07-12 17:02:15.953339] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.429 [2024-07-12 17:02:15.953363] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.429 [2024-07-12 17:02:15.963263] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.429 [2024-07-12 17:02:15.963287] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.429 [2024-07-12 17:02:15.973453] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.429 [2024-07-12 17:02:15.973477] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.429 [2024-07-12 17:02:15.985274] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.429 [2024-07-12 17:02:15.985298] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.429 [2024-07-12 17:02:15.994381] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.429 [2024-07-12 17:02:15.994405] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.429 [2024-07-12 17:02:16.005183] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.429 [2024-07-12 17:02:16.005207] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.429 [2024-07-12 17:02:16.015916] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.429 [2024-07-12 17:02:16.015942] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.429 [2024-07-12 17:02:16.026267] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.429 [2024-07-12 17:02:16.026291] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.429 [2024-07-12 17:02:16.039575] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.429 [2024-07-12 17:02:16.039600] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.429 [2024-07-12 17:02:16.049392] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.429 [2024-07-12 17:02:16.049416] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.429 [2024-07-12 17:02:16.059625] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.429 [2024-07-12 17:02:16.059649] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.429 [2024-07-12 17:02:16.069936] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.429 [2024-07-12 17:02:16.069961] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.429 [2024-07-12 17:02:16.080283] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.429 [2024-07-12 17:02:16.080307] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.429 [2024-07-12 17:02:16.090568] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.429 [2024-07-12 17:02:16.090592] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.429 [2024-07-12 17:02:16.102420] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.429 [2024-07-12 17:02:16.102444] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.750 [2024-07-12 17:02:16.112415] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.750 [2024-07-12 17:02:16.112447] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.750 [2024-07-12 17:02:16.126550] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.750 [2024-07-12 17:02:16.126585] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.750 [2024-07-12 17:02:16.138535] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.750 [2024-07-12 17:02:16.138561] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.750 [2024-07-12 17:02:16.147943] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.750 [2024-07-12 17:02:16.147969] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.750 [2024-07-12 17:02:16.158260] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.750 [2024-07-12 17:02:16.158284] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.750 [2024-07-12 17:02:16.168487] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.750 [2024-07-12 17:02:16.168511] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.750 [2024-07-12 17:02:16.178512] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.750 [2024-07-12 17:02:16.178536] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.750 [2024-07-12 17:02:16.188623] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.750 [2024-07-12 17:02:16.188647] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.750 [2024-07-12 17:02:16.198444] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.750 [2024-07-12 17:02:16.198468] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.750 [2024-07-12 17:02:16.208772] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.750 [2024-07-12 17:02:16.208797] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.750 [2024-07-12 17:02:16.219105] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.750 [2024-07-12 17:02:16.219129] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.750 [2024-07-12 17:02:16.229495] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.750 [2024-07-12 17:02:16.229519] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.750 [2024-07-12 17:02:16.239872] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.750 [2024-07-12 17:02:16.239898] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.750 [2024-07-12 17:02:16.250242] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.750 [2024-07-12 17:02:16.250266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.750 [2024-07-12 17:02:16.260754] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.750 [2024-07-12 17:02:16.260780] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.750 [2024-07-12 17:02:16.273287] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.750 [2024-07-12 17:02:16.273311] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.750 [2024-07-12 17:02:16.282659] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.750 [2024-07-12 17:02:16.282682] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.750 [2024-07-12 17:02:16.292641] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.750 [2024-07-12 17:02:16.292666] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.750 [2024-07-12 17:02:16.302591] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.750 [2024-07-12 17:02:16.302615] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.750 [2024-07-12 17:02:16.312393] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.750 [2024-07-12 17:02:16.312416] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.750 [2024-07-12 17:02:16.322802] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.750 [2024-07-12 17:02:16.322827] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.750 [2024-07-12 17:02:16.334657] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.750 [2024-07-12 17:02:16.334681] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.750 [2024-07-12 17:02:16.343967] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.750 [2024-07-12 17:02:16.343992] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.750 [2024-07-12 17:02:16.354577] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.750 [2024-07-12 17:02:16.354601] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.750 [2024-07-12 17:02:16.366362] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.750 [2024-07-12 17:02:16.366386] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.750 [2024-07-12 17:02:16.375730] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.750 [2024-07-12 17:02:16.375765] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.750 [2024-07-12 17:02:16.385921] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.750 [2024-07-12 17:02:16.385947] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.750 [2024-07-12 17:02:16.396206] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.750 [2024-07-12 17:02:16.396231] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.750 [2024-07-12 17:02:16.406368] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.750 [2024-07-12 17:02:16.406394] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.750 [2024-07-12 17:02:16.416282] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.750 [2024-07-12 17:02:16.416306] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.030 [2024-07-12 17:02:16.427049] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.030 [2024-07-12 17:02:16.427076] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.030 [2024-07-12 17:02:16.439156] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.030 [2024-07-12 17:02:16.439180] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.030 [2024-07-12 17:02:16.449554] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.030 [2024-07-12 17:02:16.449581] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.030 [2024-07-12 17:02:16.459898] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.030 [2024-07-12 17:02:16.459926] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.030 [2024-07-12 17:02:16.469879] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.030 [2024-07-12 17:02:16.469906] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.030 [2024-07-12 17:02:16.480056] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.030 [2024-07-12 17:02:16.480096] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.030 [2024-07-12 17:02:16.490000] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.030 [2024-07-12 17:02:16.490040] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.030 [2024-07-12 17:02:16.500462] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.030 [2024-07-12 17:02:16.500487] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.030 [2024-07-12 17:02:16.513908] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.030 [2024-07-12 17:02:16.513935] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.030 [2024-07-12 17:02:16.525573] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.030 [2024-07-12 17:02:16.525597] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.030 [2024-07-12 17:02:16.534392] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.030 [2024-07-12 17:02:16.534416] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.030 [2024-07-12 17:02:16.545698] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.030 [2024-07-12 17:02:16.545745] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.030 [2024-07-12 17:02:16.558040] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.031 [2024-07-12 17:02:16.558065] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.031 [2024-07-12 17:02:16.569253] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.031 [2024-07-12 17:02:16.569277] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.031 [2024-07-12 17:02:16.586253] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.031 [2024-07-12 17:02:16.586278] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.031 [2024-07-12 17:02:16.596227] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.031 [2024-07-12 17:02:16.596251] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.031 [2024-07-12 17:02:16.606680] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.031 [2024-07-12 17:02:16.606705] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.031 [2024-07-12 17:02:16.619511] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.031 [2024-07-12 17:02:16.619534] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.031 [2024-07-12 17:02:16.629503] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.031 [2024-07-12 17:02:16.629527] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.031 [2024-07-12 17:02:16.639601] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.031 [2024-07-12 17:02:16.639624] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.031 [2024-07-12 17:02:16.649357] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.031 [2024-07-12 17:02:16.649381] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.031 [2024-07-12 17:02:16.661194] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.031 [2024-07-12 17:02:16.661218] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.031 [2024-07-12 17:02:16.671476] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.031 [2024-07-12 17:02:16.671500] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.031 [2024-07-12 17:02:16.682197] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.031 [2024-07-12 17:02:16.682236] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.031 [2024-07-12 17:02:16.695003] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.031 [2024-07-12 17:02:16.695043] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.031 [2024-07-12 17:02:16.704872] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.031 [2024-07-12 17:02:16.704897] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.031 [2024-07-12 17:02:16.715656] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.031 [2024-07-12 17:02:16.715680] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.289 [2024-07-12 17:02:16.728240] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.289 [2024-07-12 17:02:16.728271] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.289 [2024-07-12 17:02:16.737819] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.289 [2024-07-12 17:02:16.737845] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.289 [2024-07-12 17:02:16.748128] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.289 [2024-07-12 17:02:16.748152] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.289 [2024-07-12 17:02:16.758237] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.289 [2024-07-12 17:02:16.758260] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.289 [2024-07-12 17:02:16.768390] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.289 [2024-07-12 17:02:16.768413] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.289 [2024-07-12 17:02:16.778371] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.289 [2024-07-12 17:02:16.778395] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.289 [2024-07-12 17:02:16.788293] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.289 [2024-07-12 17:02:16.788317] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.289 [2024-07-12 17:02:16.798652] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.289 [2024-07-12 17:02:16.798676] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.289 [2024-07-12 17:02:16.810946] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.289 [2024-07-12 17:02:16.810971] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.289 [2024-07-12 17:02:16.820617] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.289 [2024-07-12 17:02:16.820641] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.289 [2024-07-12 17:02:16.830587] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.289 [2024-07-12 17:02:16.830611] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.289 [2024-07-12 17:02:16.840556] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.289 [2024-07-12 17:02:16.840580] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.289 [2024-07-12 17:02:16.850949] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.289 [2024-07-12 17:02:16.850974] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.289 [2024-07-12 17:02:16.860956] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.289 [2024-07-12 17:02:16.860981] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.289 [2024-07-12 17:02:16.871216] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.289 [2024-07-12 17:02:16.871240] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.289 [2024-07-12 17:02:16.881770] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.289 [2024-07-12 17:02:16.881809] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.289 [2024-07-12 17:02:16.894334] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.289 [2024-07-12 17:02:16.894358] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.289 [2024-07-12 17:02:16.905205] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.289 [2024-07-12 17:02:16.905229] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.289 [2024-07-12 17:02:16.914588] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.289 [2024-07-12 17:02:16.914611] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.289 [2024-07-12 17:02:16.925552] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.289 [2024-07-12 17:02:16.925583] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.289 [2024-07-12 17:02:16.937632] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.289 [2024-07-12 17:02:16.937656] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.289 [2024-07-12 17:02:16.949045] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.289 [2024-07-12 17:02:16.949070] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.289 [2024-07-12 17:02:16.958187] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.289 [2024-07-12 17:02:16.958211] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.289 [2024-07-12 17:02:16.968202] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.289 [2024-07-12 17:02:16.968227] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.289 [2024-07-12 17:02:16.978176] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.289 [2024-07-12 17:02:16.978200] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.548 [2024-07-12 17:02:16.989207] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.548 [2024-07-12 17:02:16.989232] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.548 [2024-07-12 17:02:17.001377] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.548 [2024-07-12 17:02:17.001401] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.548 [2024-07-12 17:02:17.011374] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.548 [2024-07-12 17:02:17.011398] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.548 [2024-07-12 17:02:17.021587] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.548 [2024-07-12 17:02:17.021610] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.548 [2024-07-12 17:02:17.031669] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.548 [2024-07-12 17:02:17.031693] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.548 [2024-07-12 17:02:17.041694] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.548 [2024-07-12 17:02:17.041734] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.548 [2024-07-12 17:02:17.052787] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.548 [2024-07-12 17:02:17.052812] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.548 [2024-07-12 17:02:17.062176] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.548 [2024-07-12 17:02:17.062200] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.548 [2024-07-12 17:02:17.071964] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.548 [2024-07-12 17:02:17.071990] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.548 [2024-07-12 17:02:17.081982] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.548 [2024-07-12 17:02:17.082008] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.548 [2024-07-12 17:02:17.091603] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.548 [2024-07-12 17:02:17.091627] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.548 [2024-07-12 17:02:17.101609] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.548 [2024-07-12 17:02:17.101633] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.548 [2024-07-12 17:02:17.111629] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.548 [2024-07-12 17:02:17.111653] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.548 [2024-07-12 17:02:17.121430] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.548 [2024-07-12 17:02:17.121459] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.548 [2024-07-12 17:02:17.131496] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.548 [2024-07-12 17:02:17.131520] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.548 [2024-07-12 17:02:17.141675] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.548 [2024-07-12 17:02:17.141699] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.548 [2024-07-12 17:02:17.154885] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.548 [2024-07-12 17:02:17.154911] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.548 [2024-07-12 17:02:17.164984] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.548 [2024-07-12 17:02:17.165012] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.548 [2024-07-12 17:02:17.174967] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.548 [2024-07-12 17:02:17.174994] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.548 [2024-07-12 17:02:17.185197] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.548 [2024-07-12 17:02:17.185220] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.548 [2024-07-12 17:02:17.195180] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.548 [2024-07-12 17:02:17.195204] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.548 [2024-07-12 17:02:17.205217] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.548 [2024-07-12 17:02:17.205241] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.548 [2024-07-12 17:02:17.215233] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.548 [2024-07-12 17:02:17.215257] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.548 [2024-07-12 17:02:17.225433] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.548 [2024-07-12 17:02:17.225457] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.548 [2024-07-12 17:02:17.235728] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.548 [2024-07-12 17:02:17.235762] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.806 [2024-07-12 17:02:17.247000] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.806 [2024-07-12 17:02:17.247041] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.806 [2024-07-12 17:02:17.259292] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.806 [2024-07-12 17:02:17.259317] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.806 [2024-07-12 17:02:17.269248] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.806 [2024-07-12 17:02:17.269272] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.806 [2024-07-12 17:02:17.279734] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.806 [2024-07-12 17:02:17.279769] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.806 [2024-07-12 17:02:17.290682] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.806 [2024-07-12 17:02:17.290707] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.806 [2024-07-12 17:02:17.301409] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.806 [2024-07-12 17:02:17.301433] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.806 [2024-07-12 17:02:17.313178] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.806 [2024-07-12 17:02:17.313201] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.806 [2024-07-12 17:02:17.322512] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.806 [2024-07-12 17:02:17.322543] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.806 [2024-07-12 17:02:17.333324] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.806 [2024-07-12 17:02:17.333348] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.806 [2024-07-12 17:02:17.340658] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.806 [2024-07-12 17:02:17.340681] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.806 00:13:17.806 Latency(us) 00:13:17.806 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:17.806 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:13:17.806 Nvme1n1 : 5.01 12325.57 96.29 0.00 0.00 10371.77 4369.07 23301.69 00:13:17.807 =================================================================================================================== 00:13:17.807 Total : 12325.57 96.29 0.00 0.00 10371.77 4369.07 23301.69 00:13:17.807 [2024-07-12 17:02:17.348681] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.807 [2024-07-12 17:02:17.348704] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.807 [2024-07-12 17:02:17.356696] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.807 [2024-07-12 17:02:17.356732] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.807 [2024-07-12 17:02:17.364768] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.807 [2024-07-12 17:02:17.364805] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.807 [2024-07-12 17:02:17.372829] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.807 [2024-07-12 17:02:17.372883] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.807 [2024-07-12 17:02:17.380855] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.807 [2024-07-12 17:02:17.380908] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.807 [2024-07-12 17:02:17.388858] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.807 [2024-07-12 17:02:17.388909] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.807 [2024-07-12 17:02:17.396876] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.807 [2024-07-12 17:02:17.396928] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.807 [2024-07-12 17:02:17.404911] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.807 [2024-07-12 17:02:17.404964] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.807 [2024-07-12 17:02:17.412925] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.807 [2024-07-12 17:02:17.412977] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.807 [2024-07-12 17:02:17.420948] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.807 [2024-07-12 17:02:17.421001] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.807 [2024-07-12 17:02:17.428967] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.807 [2024-07-12 17:02:17.429020] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.807 [2024-07-12 17:02:17.436993] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.807 [2024-07-12 17:02:17.437047] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.807 [2024-07-12 17:02:17.445015] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.807 [2024-07-12 17:02:17.445067] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.807 [2024-07-12 17:02:17.453032] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.807 [2024-07-12 17:02:17.453083] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.807 [2024-07-12 17:02:17.461056] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.807 [2024-07-12 17:02:17.461106] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.807 [2024-07-12 17:02:17.469072] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.807 [2024-07-12 17:02:17.469124] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.807 [2024-07-12 17:02:17.477095] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.807 [2024-07-12 17:02:17.477148] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.807 [2024-07-12 17:02:17.485073] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.807 [2024-07-12 17:02:17.485110] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.807 [2024-07-12 17:02:17.493103] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.807 [2024-07-12 17:02:17.493123] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.065 [2024-07-12 17:02:17.501101] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.065 [2024-07-12 17:02:17.501120] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.065 [2024-07-12 17:02:17.509121] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.065 [2024-07-12 17:02:17.509141] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.065 [2024-07-12 17:02:17.517171] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.065 [2024-07-12 17:02:17.517204] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.065 [2024-07-12 17:02:17.525227] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.065 [2024-07-12 17:02:17.525274] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.065 [2024-07-12 17:02:17.533251] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.065 [2024-07-12 17:02:17.533311] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.065 [2024-07-12 17:02:17.541209] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.065 [2024-07-12 17:02:17.541241] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.065 [2024-07-12 17:02:17.549242] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.065 [2024-07-12 17:02:17.549262] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.065 [2024-07-12 17:02:17.557262] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.065 [2024-07-12 17:02:17.557282] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.065 [2024-07-12 17:02:17.565286] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.065 [2024-07-12 17:02:17.565306] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.066 [2024-07-12 17:02:17.573371] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.066 [2024-07-12 17:02:17.573417] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.066 [2024-07-12 17:02:17.581401] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.066 [2024-07-12 17:02:17.581449] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.066 [2024-07-12 17:02:17.589361] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.066 [2024-07-12 17:02:17.589385] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.066 [2024-07-12 17:02:17.597371] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.066 [2024-07-12 17:02:17.597391] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.066 [2024-07-12 17:02:17.605393] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.066 [2024-07-12 17:02:17.605412] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.066 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1100910) - No such process 00:13:18.066 17:02:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1100910 00:13:18.066 17:02:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.066 17:02:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.066 17:02:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:18.066 17:02:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.066 17:02:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:18.066 17:02:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.066 17:02:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:18.066 delay0 00:13:18.066 17:02:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.066 17:02:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:13:18.066 17:02:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.066 17:02:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:18.066 17:02:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.066 17:02:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:13:18.066 EAL: No free 2048 kB hugepages reported on node 1 00:13:18.066 [2024-07-12 17:02:17.680425] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:26.190 [2024-07-12 17:02:24.543437] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91c150 is same with the state(5) to be set 00:13:26.190 Initializing NVMe Controllers 00:13:26.190 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:26.190 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:26.190 Initialization complete. Launching workers. 
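For reference, the abort stage starting here is the tail of the zcopy test: the original namespace is removed, re-added on top of an artificially slow delay bdev, and then driven by the bundled abort example. A standalone sketch of that sequence — assuming SPDK's scripts/rpc.py client in place of the rpc_cmd wrapper the log uses, with every argument copied from the invocations above — looks like:

  # run from the SPDK build tree against the target shown in the log (10.0.0.2:4420)
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # large artificial read/write latency so I/O stays outstanding
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The slow delay0 bdev keeps requests queued long enough for the abort example to submit cancellations, which is what the completion/abort counters below measure.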
00:13:26.190 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 266, failed: 15105 00:13:26.190 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 15282, failed to submit 89 00:13:26.190 success 15196, unsuccess 86, failed 0 00:13:26.190 17:02:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:13:26.190 17:02:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:13:26.190 17:02:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:26.190 17:02:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:13:26.190 17:02:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:26.190 17:02:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:13:26.190 17:02:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:26.190 17:02:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:26.190 rmmod nvme_tcp 00:13:26.190 rmmod nvme_fabrics 00:13:26.190 rmmod nvme_keyring 00:13:26.190 17:02:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:26.190 17:02:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:13:26.190 17:02:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:13:26.190 17:02:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1099629 ']' 00:13:26.191 17:02:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1099629 00:13:26.191 17:02:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 1099629 ']' 00:13:26.191 17:02:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 1099629 00:13:26.191 17:02:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:13:26.191 17:02:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:26.191 17:02:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1099629 00:13:26.191 17:02:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:26.191 17:02:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:26.191 17:02:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1099629' 00:13:26.191 killing process with pid 1099629 00:13:26.191 17:02:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 1099629 00:13:26.191 17:02:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 1099629 00:13:26.191 17:02:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:26.191 17:02:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:26.191 17:02:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:26.191 17:02:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:26.191 17:02:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:26.191 17:02:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.191 17:02:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:26.191 17:02:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.570 17:02:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:27.570 00:13:27.570 real 0m28.511s 00:13:27.570 user 0m40.332s 00:13:27.570 sys 0m10.325s 00:13:27.570 17:02:26 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:13:27.570 17:02:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:27.570 ************************************ 00:13:27.570 END TEST nvmf_zcopy 00:13:27.570 ************************************ 00:13:27.570 17:02:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:27.570 17:02:26 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:27.570 17:02:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:27.570 17:02:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:27.570 17:02:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:27.570 ************************************ 00:13:27.570 START TEST nvmf_nmic 00:13:27.570 ************************************ 00:13:27.570 17:02:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:27.570 * Looking for test storage... 00:13:27.570 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:27.570 17:02:27 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:27.570 17:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:13:27.570 17:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:27.570 17:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:27.570 17:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:27.570 17:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:27.570 17:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:27.570 17:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:27.570 17:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:27.570 17:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:27.570 17:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:27.570 17:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:27.570 17:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:27.570 17:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:27.570 17:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:27.570 17:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:27.570 17:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:27.570 17:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:27.570 17:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:27.570 17:02:27 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:27.570 17:02:27 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:27.570 17:02:27 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:27.570 17:02:27 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.570 17:02:27 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.571 17:02:27 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.571 17:02:27 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:13:27.571 17:02:27 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.571 17:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:13:27.571 17:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:27.571 17:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:27.571 17:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:27.571 17:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:27.571 17:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:27.571 17:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:27.571 17:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:27.571 17:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:27.571 17:02:27 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:27.571 17:02:27 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:27.571 17:02:27 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:13:27.571 17:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:27.571 17:02:27 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:27.571 17:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:27.571 17:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:27.571 17:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:27.571 17:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.571 17:02:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:27.571 17:02:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.571 17:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:27.571 17:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:27.571 17:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:13:27.571 17:02:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:29.474 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:29.474 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:13:29.474 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:29.474 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:29.474 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:29.474 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:29.474 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:29.474 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:13:29.474 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:29.474 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:13:29.474 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:13:29.474 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:13:29.474 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:13:29.474 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:13:29.474 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:13:29.474 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:29.474 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:29.474 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:29.474 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:29.474 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:29.474 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:29.474 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:29.474 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:29.474 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:29.474 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:29.474 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:29.474 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:13:29.474 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:29.474 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:29.474 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:29.474 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:29.474 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:29.474 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:29.474 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:29.474 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:29.474 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:29.474 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:29.474 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:29.474 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:29.474 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:29.474 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:29.474 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:29.474 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:29.474 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:29.474 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:29.474 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:29.474 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:29.474 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:29.474 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:29.474 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:29.475 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:29.475 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:29.475 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:29.475 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:29.475 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:29.475 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:29.475 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:29.475 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:29.475 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:29.475 Found net devices under 0000:84:00.0: cvl_0_0 00:13:29.475 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:29.475 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:29.475 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:29.475 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:29.475 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:29.475 17:02:29 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:13:29.475 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:29.475 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:29.475 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:29.475 Found net devices under 0000:84:00.1: cvl_0_1 00:13:29.475 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:29.475 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:29.475 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:13:29.475 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:29.475 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:29.475 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:29.475 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:29.475 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:29.475 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:29.475 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:29.475 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:29.475 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:29.475 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:29.475 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:29.475 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:29.475 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:29.475 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:29.734 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:29.734 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:29.734 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:29.734 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:29.734 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:29.734 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:29.734 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:29.734 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:29.734 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:29.734 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:29.734 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:13:29.734 00:13:29.734 --- 10.0.0.2 ping statistics --- 00:13:29.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.734 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:13:29.734 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:29.734 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:29.734 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:13:29.734 00:13:29.735 --- 10.0.0.1 ping statistics --- 00:13:29.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.735 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:13:29.735 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:29.735 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:13:29.735 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:29.735 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:29.735 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:29.735 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:29.735 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:29.735 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:29.735 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:29.735 17:02:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:13:29.735 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:29.735 17:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:29.735 17:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:29.735 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1104376 00:13:29.735 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1104376 00:13:29.735 17:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 1104376 ']' 00:13:29.735 17:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.735 17:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:29.735 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:29.735 17:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.735 17:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:29.735 17:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:29.735 [2024-07-12 17:02:29.379630] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:13:29.735 [2024-07-12 17:02:29.379716] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:29.735 EAL: No free 2048 kB hugepages reported on node 1 00:13:29.993 [2024-07-12 17:02:29.447478] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:29.993 [2024-07-12 17:02:29.560171] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:29.993 [2024-07-12 17:02:29.560233] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:29.993 [2024-07-12 17:02:29.560262] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:29.993 [2024-07-12 17:02:29.560274] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:29.993 [2024-07-12 17:02:29.560284] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:29.993 [2024-07-12 17:02:29.560355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:29.993 [2024-07-12 17:02:29.560415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:29.993 [2024-07-12 17:02:29.560482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.993 [2024-07-12 17:02:29.560479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:30.251 17:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:30.251 17:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:13:30.251 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:30.251 17:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:30.251 17:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:30.251 17:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:30.251 17:02:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:30.251 17:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.251 17:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:30.251 [2024-07-12 17:02:29.721687] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:30.251 17:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.251 17:02:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:30.251 17:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.251 17:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:30.251 Malloc0 00:13:30.251 17:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.251 17:02:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:30.251 17:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.251 17:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:30.251 17:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.251 17:02:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:30.251 17:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.251 17:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:30.251 17:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.251 17:02:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:30.251 17:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.251 17:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:30.251 [2024-07-12 17:02:29.775383] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:30.251 17:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.251 17:02:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:30.251 test case1: single bdev can't be used in multiple subsystems 00:13:30.251 17:02:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:30.251 17:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.251 17:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:30.251 17:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.251 17:02:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:30.251 17:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.251 17:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:30.251 17:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.251 17:02:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:13:30.251 17:02:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:30.251 17:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.251 17:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:30.251 [2024-07-12 17:02:29.799250] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:30.251 [2024-07-12 17:02:29.799284] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:30.251 [2024-07-12 17:02:29.799315] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.251 request: 00:13:30.251 { 00:13:30.251 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:30.251 "namespace": { 00:13:30.251 "bdev_name": "Malloc0", 00:13:30.251 "no_auto_visible": false 00:13:30.251 }, 00:13:30.251 "method": "nvmf_subsystem_add_ns", 00:13:30.251 "req_id": 1 00:13:30.251 } 00:13:30.251 Got JSON-RPC error response 00:13:30.251 response: 00:13:30.251 { 00:13:30.251 "code": -32602, 00:13:30.251 "message": "Invalid parameters" 00:13:30.251 } 00:13:30.251 17:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:13:30.251 17:02:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:13:30.252 17:02:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:30.252 17:02:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:13:30.252 Adding namespace failed - expected result. 
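For reference, the failure exercised by test case1 above can be reproduced by hand against a running nvmf_tgt. The sketch below uses SPDK's scripts/rpc.py JSON-RPC client directly — an assumption, since the log drives the same calls through the test framework's rpc_cmd wrapper — with every argument taken from the rpc_cmd invocations shown above; the final call is expected to fail with the same -32602 "Invalid parameters" response, because Malloc0 is already claimed by cnode1.

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # rejected: bdev Malloc0 already claimed by cnode1

A malloc bdev is claimed with an exclusive-write open when it becomes a namespace, so the second nvmf_subsystem_add_ns is rejected rather than silently sharing the device, which is exactly the expected result the test prints next.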
00:13:30.252 17:02:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:13:30.252 test case2: host connect to nvmf target in multiple paths 00:13:30.252 17:02:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:13:30.252 17:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.252 17:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:30.252 [2024-07-12 17:02:29.807357] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:13:30.252 17:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.252 17:02:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:30.817 17:02:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:13:31.382 17:02:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:13:31.382 17:02:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:13:31.382 17:02:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:31.382 17:02:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:31.382 17:02:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:13:33.907 17:02:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:33.907 17:02:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:33.907 17:02:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:33.907 17:02:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:33.907 17:02:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:33.907 17:02:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:13:33.907 17:02:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:33.907 [global] 00:13:33.907 thread=1 00:13:33.907 invalidate=1 00:13:33.907 rw=write 00:13:33.907 time_based=1 00:13:33.907 runtime=1 00:13:33.907 ioengine=libaio 00:13:33.907 direct=1 00:13:33.907 bs=4096 00:13:33.907 iodepth=1 00:13:33.907 norandommap=0 00:13:33.907 numjobs=1 00:13:33.907 00:13:33.907 verify_dump=1 00:13:33.907 verify_backlog=512 00:13:33.907 verify_state_save=0 00:13:33.907 do_verify=1 00:13:33.907 verify=crc32c-intel 00:13:33.907 [job0] 00:13:33.907 filename=/dev/nvme0n1 00:13:33.907 Could not set queue depth (nvme0n1) 00:13:33.908 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:33.908 fio-3.35 00:13:33.908 Starting 1 thread 00:13:34.839 00:13:34.839 job0: (groupid=0, jobs=1): err= 0: pid=1104886: Fri Jul 12 17:02:34 2024 00:13:34.839 read: IOPS=584, BW=2337KiB/s (2393kB/s)(2384KiB/1020msec) 00:13:34.839 slat (nsec): min=4670, max=41833, avg=10309.05, stdev=6923.98 
00:13:34.839 clat (usec): min=189, max=41166, avg=1415.35, stdev=6785.42 00:13:34.839 lat (usec): min=194, max=41183, avg=1425.66, stdev=6788.37 00:13:34.839 clat percentiles (usec): 00:13:34.839 | 1.00th=[ 196], 5.00th=[ 208], 10.00th=[ 212], 20.00th=[ 219], 00:13:34.839 | 30.00th=[ 227], 40.00th=[ 239], 50.00th=[ 249], 60.00th=[ 262], 00:13:34.839 | 70.00th=[ 273], 80.00th=[ 285], 90.00th=[ 310], 95.00th=[ 363], 00:13:34.839 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:34.839 | 99.99th=[41157] 00:13:34.839 write: IOPS=1003, BW=4016KiB/s (4112kB/s)(4096KiB/1020msec); 0 zone resets 00:13:34.839 slat (nsec): min=6003, max=44352, avg=8931.15, stdev=3928.90 00:13:34.839 clat (usec): min=125, max=387, avg=152.33, stdev=18.17 00:13:34.839 lat (usec): min=133, max=416, avg=161.26, stdev=19.68 00:13:34.839 clat percentiles (usec): 00:13:34.839 | 1.00th=[ 128], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 137], 00:13:34.839 | 30.00th=[ 141], 40.00th=[ 145], 50.00th=[ 149], 60.00th=[ 155], 00:13:34.839 | 70.00th=[ 161], 80.00th=[ 167], 90.00th=[ 176], 95.00th=[ 182], 00:13:34.839 | 99.00th=[ 202], 99.50th=[ 212], 99.90th=[ 225], 99.95th=[ 388], 00:13:34.839 | 99.99th=[ 388] 00:13:34.839 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:13:34.839 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:34.839 lat (usec) : 250=81.91%, 500=17.04% 00:13:34.839 lat (msec) : 50=1.05% 00:13:34.839 cpu : usr=0.88%, sys=1.37%, ctx=1620, majf=0, minf=2 00:13:34.839 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:34.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.839 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.839 issued rwts: total=596,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:34.839 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:34.839 00:13:34.839 Run status group 0 (all jobs): 00:13:34.839 READ: bw=2337KiB/s (2393kB/s), 2337KiB/s-2337KiB/s (2393kB/s-2393kB/s), io=2384KiB (2441kB), run=1020-1020msec 00:13:34.839 WRITE: bw=4016KiB/s (4112kB/s), 4016KiB/s-4016KiB/s (4112kB/s-4112kB/s), io=4096KiB (4194kB), run=1020-1020msec 00:13:34.839 00:13:34.839 Disk stats (read/write): 00:13:34.839 nvme0n1: ios=643/1024, merge=0/0, ticks=743/156, in_queue=899, util=91.48% 00:13:34.839 17:02:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:34.839 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:34.839 17:02:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:34.839 17:02:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:13:34.839 17:02:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:34.839 17:02:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:34.839 17:02:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:34.839 17:02:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:34.839 17:02:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:13:34.839 17:02:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:13:34.839 17:02:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:13:34.839 17:02:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 
00:13:34.839 17:02:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:13:34.839 17:02:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:34.839 17:02:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:13:34.839 17:02:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:34.839 17:02:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:35.096 rmmod nvme_tcp 00:13:35.097 rmmod nvme_fabrics 00:13:35.097 rmmod nvme_keyring 00:13:35.097 17:02:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:35.097 17:02:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:13:35.097 17:02:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:13:35.097 17:02:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1104376 ']' 00:13:35.097 17:02:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1104376 00:13:35.097 17:02:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 1104376 ']' 00:13:35.097 17:02:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 1104376 00:13:35.097 17:02:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:13:35.097 17:02:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:35.097 17:02:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1104376 00:13:35.097 17:02:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:35.097 17:02:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:35.097 17:02:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1104376' 00:13:35.097 killing process with pid 1104376 00:13:35.097 17:02:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 1104376 00:13:35.097 17:02:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 1104376 00:13:35.355 17:02:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:35.355 17:02:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:35.355 17:02:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:35.355 17:02:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:35.355 17:02:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:35.355 17:02:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.355 17:02:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:35.355 17:02:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:37.256 17:02:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:37.256 00:13:37.256 real 0m9.903s 00:13:37.256 user 0m21.961s 00:13:37.256 sys 0m2.366s 00:13:37.256 17:02:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:37.256 17:02:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:37.256 ************************************ 00:13:37.256 END TEST nvmf_nmic 00:13:37.256 ************************************ 00:13:37.256 17:02:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:37.256 17:02:36 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:37.256 17:02:36 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:37.256 17:02:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:37.256 17:02:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:37.514 ************************************ 00:13:37.514 START TEST nvmf_fio_target 00:13:37.514 ************************************ 00:13:37.514 17:02:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:37.514 * Looking for test storage... 00:13:37.514 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:37.514 17:02:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:37.514 17:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:13:37.514 17:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:37.514 17:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:37.514 17:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:37.514 17:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:37.515 17:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:37.515 17:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:37.515 17:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:37.515 17:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:37.515 17:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:37.515 17:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:37.515 17:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:37.515 17:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:37.515 17:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:37.515 17:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:37.515 17:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:37.515 17:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:37.515 17:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:37.515 17:02:37 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:37.515 17:02:37 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:37.515 17:02:37 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:37.515 17:02:37 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.515 17:02:37 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.515 17:02:37 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.515 17:02:37 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:13:37.515 17:02:37 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.515 17:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:13:37.515 17:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:37.515 17:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:37.515 17:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:37.515 17:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:37.515 17:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:37.515 17:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:37.515 17:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:37.515 17:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:37.515 17:02:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:37.515 17:02:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:37.515 17:02:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:37.515 17:02:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:13:37.515 17:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:37.515 17:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:37.515 17:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:37.515 17:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:37.515 17:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:37.515 17:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:37.515 17:02:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:37.515 17:02:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:37.515 17:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:37.515 17:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:37.515 17:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:13:37.515 17:02:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.046 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:40.046 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:13:40.046 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:40.046 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:40.046 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:40.046 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:40.046 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:40.046 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:13:40.046 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:40.046 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:13:40.046 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:13:40.046 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:13:40.046 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:13:40.046 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:13:40.046 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:13:40.046 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:40.046 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:40.046 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:40.046 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:40.046 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:40.046 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:40.046 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:40.046 17:02:39 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:40.046 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:40.046 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:40.046 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:40.046 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:40.046 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:40.046 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:40.046 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:40.046 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:40.046 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:40.046 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:40.046 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:40.046 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:40.046 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:40.046 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:40.046 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:40.046 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:40.046 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:40.046 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:40.046 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:40.046 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:40.046 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:40.046 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:40.046 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:40.046 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:40.046 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:40.046 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:40.046 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:40.046 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:40.046 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:40.046 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:40.046 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:40.047 17:02:39 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:40.047 Found net devices under 0000:84:00.0: cvl_0_0 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:40.047 Found net devices under 0000:84:00.1: cvl_0_1 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:40.047 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:40.047 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:13:40.047 00:13:40.047 --- 10.0.0.2 ping statistics --- 00:13:40.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.047 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:40.047 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:40.047 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:13:40.047 00:13:40.047 --- 10.0.0.1 ping statistics --- 00:13:40.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.047 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1107090 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1107090 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 1107090 ']' 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
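Before nvmf_tgt is started, nvmf_tcp_init has split the two e810 ports into a target side and an initiator side: cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and given 10.0.0.2/24, while cvl_0_1 stays in the root namespace with 10.0.0.1/24, and both directions are verified with a single ping. Condensed from the trace above (interface and namespace names exactly as printed there), the setup is roughly:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # let NVMe/TCP traffic reach the initiator interface
  ping -c 1 10.0.0.2                                               # sanity check before launching the target
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF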
00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.047 [2024-07-12 17:02:39.387451] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:13:40.047 [2024-07-12 17:02:39.387530] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:40.047 EAL: No free 2048 kB hugepages reported on node 1 00:13:40.047 [2024-07-12 17:02:39.450146] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:40.047 [2024-07-12 17:02:39.554570] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:40.047 [2024-07-12 17:02:39.554621] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:40.047 [2024-07-12 17:02:39.554644] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:40.047 [2024-07-12 17:02:39.554655] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:40.047 [2024-07-12 17:02:39.554664] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:40.047 [2024-07-12 17:02:39.554801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:40.047 [2024-07-12 17:02:39.554830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:40.047 [2024-07-12 17:02:39.554878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:40.047 [2024-07-12 17:02:39.554881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:40.047 17:02:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:40.304 [2024-07-12 17:02:39.977270] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:40.562 17:02:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:40.818 17:02:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:13:40.818 17:02:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:41.075 17:02:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:13:41.075 17:02:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:41.332 17:02:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
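At this point fio.sh is assembling the bdevs that will back the fio namespaces: two standalone 64 MB malloc bdevs (Malloc0, Malloc1), a raid0 built from Malloc2/Malloc3, and, as the trace continues below, a concat0 set from Malloc4-Malloc6. Collected into one place, the layout amounts to the following sketch (sizes taken from MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 earlier in this log; -z 64 is the strip size in KiB):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc bdev_malloc_create 64 512        # Malloc0 -> plain namespace of cnode1
  $rpc bdev_malloc_create 64 512        # Malloc1 -> plain namespace of cnode1
  $rpc bdev_malloc_create 64 512        # Malloc2 \
  $rpc bdev_malloc_create 64 512        # Malloc3  }-> striped together below
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  $rpc bdev_malloc_create 64 512        # Malloc4 \
  $rpc bdev_malloc_create 64 512        # Malloc5  }-> concatenated below
  $rpc bdev_malloc_create 64 512        # Malloc6 /
  $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'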
00:13:41.332 17:02:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:41.590 17:02:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:13:41.590 17:02:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:13:41.847 17:02:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:42.105 17:02:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:13:42.105 17:02:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:42.364 17:02:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:13:42.364 17:02:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:42.622 17:02:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:13:42.622 17:02:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:13:43.186 17:02:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:43.443 17:02:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:43.443 17:02:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:43.699 17:02:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:43.699 17:02:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:43.699 17:02:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:44.265 [2024-07-12 17:02:43.654250] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:44.265 17:02:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:13:44.265 17:02:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:13:44.523 17:02:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:45.457 17:02:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:13:45.457 17:02:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:13:45.457 17:02:44 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:45.457 17:02:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:13:45.457 17:02:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:13:45.457 17:02:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:13:47.426 17:02:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:47.426 17:02:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:47.426 17:02:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:47.426 17:02:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:13:47.426 17:02:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:47.426 17:02:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:13:47.426 17:02:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:47.426 [global] 00:13:47.426 thread=1 00:13:47.426 invalidate=1 00:13:47.426 rw=write 00:13:47.426 time_based=1 00:13:47.426 runtime=1 00:13:47.426 ioengine=libaio 00:13:47.426 direct=1 00:13:47.426 bs=4096 00:13:47.426 iodepth=1 00:13:47.426 norandommap=0 00:13:47.426 numjobs=1 00:13:47.426 00:13:47.426 verify_dump=1 00:13:47.426 verify_backlog=512 00:13:47.426 verify_state_save=0 00:13:47.426 do_verify=1 00:13:47.426 verify=crc32c-intel 00:13:47.426 [job0] 00:13:47.427 filename=/dev/nvme0n1 00:13:47.427 [job1] 00:13:47.427 filename=/dev/nvme0n2 00:13:47.427 [job2] 00:13:47.427 filename=/dev/nvme0n3 00:13:47.427 [job3] 00:13:47.427 filename=/dev/nvme0n4 00:13:47.427 Could not set queue depth (nvme0n1) 00:13:47.427 Could not set queue depth (nvme0n2) 00:13:47.427 Could not set queue depth (nvme0n3) 00:13:47.427 Could not set queue depth (nvme0n4) 00:13:47.685 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:47.685 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:47.685 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:47.685 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:47.685 fio-3.35 00:13:47.685 Starting 4 threads 00:13:49.059 00:13:49.059 job0: (groupid=0, jobs=1): err= 0: pid=1108067: Fri Jul 12 17:02:48 2024 00:13:49.059 read: IOPS=23, BW=93.0KiB/s (95.3kB/s)(96.0KiB/1032msec) 00:13:49.059 slat (nsec): min=10959, max=43243, avg=19231.00, stdev=8560.80 00:13:49.059 clat (usec): min=291, max=41958, avg=37289.44, stdev=11591.18 00:13:49.059 lat (usec): min=310, max=41991, avg=37308.67, stdev=11587.16 00:13:49.059 clat percentiles (usec): 00:13:49.059 | 1.00th=[ 293], 5.00th=[ 461], 10.00th=[30278], 20.00th=[41157], 00:13:49.059 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:49.059 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:13:49.059 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:49.059 | 99.99th=[42206] 00:13:49.059 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:13:49.059 slat (nsec): min=7128, max=94354, avg=18734.95, stdev=12013.65 
00:13:49.059 clat (usec): min=147, max=547, avg=242.90, stdev=50.77 00:13:49.059 lat (usec): min=194, max=577, avg=261.64, stdev=55.37 00:13:49.059 clat percentiles (usec): 00:13:49.059 | 1.00th=[ 180], 5.00th=[ 190], 10.00th=[ 200], 20.00th=[ 206], 00:13:49.059 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 225], 60.00th=[ 239], 00:13:49.059 | 70.00th=[ 260], 80.00th=[ 281], 90.00th=[ 310], 95.00th=[ 334], 00:13:49.059 | 99.00th=[ 404], 99.50th=[ 490], 99.90th=[ 545], 99.95th=[ 545], 00:13:49.059 | 99.99th=[ 545] 00:13:49.059 bw ( KiB/s): min= 4096, max= 4096, per=25.95%, avg=4096.00, stdev= 0.00, samples=1 00:13:49.059 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:49.059 lat (usec) : 250=62.87%, 500=32.65%, 750=0.37% 00:13:49.059 lat (msec) : 50=4.10% 00:13:49.059 cpu : usr=0.39%, sys=0.97%, ctx=537, majf=0, minf=2 00:13:49.059 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:49.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:49.059 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:49.059 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:49.059 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:49.059 job1: (groupid=0, jobs=1): err= 0: pid=1108083: Fri Jul 12 17:02:48 2024 00:13:49.059 read: IOPS=38, BW=154KiB/s (158kB/s)(160KiB/1038msec) 00:13:49.059 slat (nsec): min=7197, max=36149, avg=16056.02, stdev=8366.90 00:13:49.059 clat (usec): min=253, max=42000, avg=22691.98, stdev=20494.72 00:13:49.059 lat (usec): min=261, max=42016, avg=22708.03, stdev=20498.68 00:13:49.059 clat percentiles (usec): 00:13:49.059 | 1.00th=[ 253], 5.00th=[ 255], 10.00th=[ 277], 20.00th=[ 302], 00:13:49.059 | 30.00th=[ 334], 40.00th=[ 363], 50.00th=[40633], 60.00th=[41157], 00:13:49.059 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:13:49.059 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:49.059 | 99.99th=[42206] 00:13:49.059 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:13:49.059 slat (usec): min=8, max=1013, avg=19.16, stdev=44.72 00:13:49.059 clat (usec): min=137, max=461, avg=229.65, stdev=50.62 00:13:49.059 lat (usec): min=147, max=1200, avg=248.81, stdev=68.56 00:13:49.059 clat percentiles (usec): 00:13:49.059 | 1.00th=[ 151], 5.00th=[ 167], 10.00th=[ 186], 20.00th=[ 200], 00:13:49.059 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 215], 60.00th=[ 225], 00:13:49.059 | 70.00th=[ 237], 80.00th=[ 260], 90.00th=[ 302], 95.00th=[ 338], 00:13:49.059 | 99.00th=[ 388], 99.50th=[ 445], 99.90th=[ 461], 99.95th=[ 461], 00:13:49.059 | 99.99th=[ 461] 00:13:49.059 bw ( KiB/s): min= 4096, max= 4096, per=25.95%, avg=4096.00, stdev= 0.00, samples=1 00:13:49.059 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:49.059 lat (usec) : 250=70.47%, 500=25.36%, 750=0.18% 00:13:49.059 lat (msec) : 50=3.99% 00:13:49.059 cpu : usr=0.39%, sys=0.87%, ctx=554, majf=0, minf=1 00:13:49.059 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:49.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:49.059 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:49.059 issued rwts: total=40,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:49.059 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:49.059 job2: (groupid=0, jobs=1): err= 0: pid=1108121: Fri Jul 12 17:02:48 2024 00:13:49.059 read: IOPS=2012, 
BW=8052KiB/s (8245kB/s)(8060KiB/1001msec) 00:13:49.059 slat (nsec): min=6226, max=66297, avg=11222.04, stdev=5947.02 00:13:49.059 clat (usec): min=175, max=1130, avg=255.24, stdev=62.22 00:13:49.059 lat (usec): min=182, max=1137, avg=266.46, stdev=66.28 00:13:49.059 clat percentiles (usec): 00:13:49.059 | 1.00th=[ 182], 5.00th=[ 190], 10.00th=[ 196], 20.00th=[ 204], 00:13:49.059 | 30.00th=[ 215], 40.00th=[ 225], 50.00th=[ 243], 60.00th=[ 265], 00:13:49.059 | 70.00th=[ 277], 80.00th=[ 302], 90.00th=[ 326], 95.00th=[ 355], 00:13:49.059 | 99.00th=[ 478], 99.50th=[ 498], 99.90th=[ 562], 99.95th=[ 627], 00:13:49.059 | 99.99th=[ 1123] 00:13:49.059 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:13:49.059 slat (nsec): min=7404, max=56044, avg=13699.23, stdev=7612.01 00:13:49.059 clat (usec): min=131, max=971, avg=205.34, stdev=68.38 00:13:49.059 lat (usec): min=139, max=1000, avg=219.04, stdev=73.70 00:13:49.059 clat percentiles (usec): 00:13:49.059 | 1.00th=[ 137], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 147], 00:13:49.059 | 30.00th=[ 155], 40.00th=[ 165], 50.00th=[ 184], 60.00th=[ 208], 00:13:49.059 | 70.00th=[ 231], 80.00th=[ 262], 90.00th=[ 306], 95.00th=[ 338], 00:13:49.059 | 99.00th=[ 388], 99.50th=[ 429], 99.90th=[ 519], 99.95th=[ 578], 00:13:49.060 | 99.99th=[ 971] 00:13:49.060 bw ( KiB/s): min= 8192, max= 8192, per=51.90%, avg=8192.00, stdev= 0.00, samples=1 00:13:49.060 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:49.060 lat (usec) : 250=66.03%, 500=33.65%, 750=0.27%, 1000=0.02% 00:13:49.060 lat (msec) : 2=0.02% 00:13:49.060 cpu : usr=4.00%, sys=5.90%, ctx=4063, majf=0, minf=1 00:13:49.060 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:49.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:49.060 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:49.060 issued rwts: total=2015,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:49.060 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:49.060 job3: (groupid=0, jobs=1): err= 0: pid=1108132: Fri Jul 12 17:02:48 2024 00:13:49.060 read: IOPS=981, BW=3927KiB/s (4021kB/s)(4076KiB/1038msec) 00:13:49.060 slat (usec): min=6, max=102, avg=17.60, stdev=12.33 00:13:49.060 clat (usec): min=198, max=41972, avg=755.17, stdev=4209.54 00:13:49.060 lat (usec): min=205, max=41992, avg=772.77, stdev=4209.49 00:13:49.060 clat percentiles (usec): 00:13:49.060 | 1.00th=[ 204], 5.00th=[ 212], 10.00th=[ 221], 20.00th=[ 237], 00:13:49.060 | 30.00th=[ 245], 40.00th=[ 255], 50.00th=[ 281], 60.00th=[ 314], 00:13:49.060 | 70.00th=[ 371], 80.00th=[ 416], 90.00th=[ 469], 95.00th=[ 498], 00:13:49.060 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[42206], 00:13:49.060 | 99.99th=[42206] 00:13:49.060 write: IOPS=986, BW=3946KiB/s (4041kB/s)(4096KiB/1038msec); 0 zone resets 00:13:49.060 slat (usec): min=7, max=114, avg=16.62, stdev=12.48 00:13:49.060 clat (usec): min=143, max=629, avg=218.10, stdev=61.42 00:13:49.060 lat (usec): min=152, max=669, avg=234.72, stdev=66.74 00:13:49.060 clat percentiles (usec): 00:13:49.060 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 163], 20.00th=[ 174], 00:13:49.060 | 30.00th=[ 188], 40.00th=[ 198], 50.00th=[ 206], 60.00th=[ 215], 00:13:49.060 | 70.00th=[ 223], 80.00th=[ 247], 90.00th=[ 277], 95.00th=[ 343], 00:13:49.060 | 99.00th=[ 461], 99.50th=[ 490], 99.90th=[ 627], 99.95th=[ 627], 00:13:49.060 | 99.99th=[ 627] 00:13:49.060 bw ( KiB/s): min= 8192, max= 8192, per=51.90%, 
avg=8192.00, stdev= 0.00, samples=1 00:13:49.060 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:49.060 lat (usec) : 250=58.84%, 500=38.67%, 750=1.96% 00:13:49.060 lat (msec) : 50=0.54% 00:13:49.060 cpu : usr=1.16%, sys=3.86%, ctx=2044, majf=0, minf=1 00:13:49.060 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:49.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:49.060 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:49.060 issued rwts: total=1019,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:49.060 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:49.060 00:13:49.060 Run status group 0 (all jobs): 00:13:49.060 READ: bw=11.7MiB/s (12.2MB/s), 93.0KiB/s-8052KiB/s (95.3kB/s-8245kB/s), io=12.1MiB (12.7MB), run=1001-1038msec 00:13:49.060 WRITE: bw=15.4MiB/s (16.2MB/s), 1973KiB/s-8184KiB/s (2020kB/s-8380kB/s), io=16.0MiB (16.8MB), run=1001-1038msec 00:13:49.060 00:13:49.060 Disk stats (read/write): 00:13:49.060 nvme0n1: ios=45/512, merge=0/0, ticks=1677/129, in_queue=1806, util=97.19% 00:13:49.060 nvme0n2: ios=90/512, merge=0/0, ticks=919/112, in_queue=1031, util=97.45% 00:13:49.060 nvme0n3: ios=1536/1875, merge=0/0, ticks=372/381, in_queue=753, util=88.76% 00:13:49.060 nvme0n4: ios=1039/1024, merge=0/0, ticks=1510/228, in_queue=1738, util=97.45% 00:13:49.060 17:02:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:13:49.060 [global] 00:13:49.060 thread=1 00:13:49.060 invalidate=1 00:13:49.060 rw=randwrite 00:13:49.060 time_based=1 00:13:49.060 runtime=1 00:13:49.060 ioengine=libaio 00:13:49.060 direct=1 00:13:49.060 bs=4096 00:13:49.060 iodepth=1 00:13:49.060 norandommap=0 00:13:49.060 numjobs=1 00:13:49.060 00:13:49.060 verify_dump=1 00:13:49.060 verify_backlog=512 00:13:49.060 verify_state_save=0 00:13:49.060 do_verify=1 00:13:49.060 verify=crc32c-intel 00:13:49.060 [job0] 00:13:49.060 filename=/dev/nvme0n1 00:13:49.060 [job1] 00:13:49.060 filename=/dev/nvme0n2 00:13:49.060 [job2] 00:13:49.060 filename=/dev/nvme0n3 00:13:49.060 [job3] 00:13:49.060 filename=/dev/nvme0n4 00:13:49.060 Could not set queue depth (nvme0n1) 00:13:49.060 Could not set queue depth (nvme0n2) 00:13:49.060 Could not set queue depth (nvme0n3) 00:13:49.060 Could not set queue depth (nvme0n4) 00:13:49.060 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:49.060 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:49.060 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:49.060 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:49.060 fio-3.35 00:13:49.060 Starting 4 threads 00:13:50.433 00:13:50.433 job0: (groupid=0, jobs=1): err= 0: pid=1108404: Fri Jul 12 17:02:49 2024 00:13:50.433 read: IOPS=566, BW=2267KiB/s (2321kB/s)(2276KiB/1004msec) 00:13:50.433 slat (nsec): min=7255, max=50019, avg=17538.11, stdev=5761.68 00:13:50.433 clat (usec): min=209, max=41958, avg=1311.31, stdev=6327.21 00:13:50.433 lat (usec): min=219, max=41973, avg=1328.85, stdev=6326.96 00:13:50.433 clat percentiles (usec): 00:13:50.433 | 1.00th=[ 225], 5.00th=[ 245], 10.00th=[ 258], 20.00th=[ 281], 00:13:50.433 | 30.00th=[ 289], 40.00th=[ 297], 50.00th=[ 306], 
60.00th=[ 314], 00:13:50.433 | 70.00th=[ 322], 80.00th=[ 338], 90.00th=[ 355], 95.00th=[ 408], 00:13:50.433 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:13:50.433 | 99.99th=[42206] 00:13:50.433 write: IOPS=1019, BW=4080KiB/s (4178kB/s)(4096KiB/1004msec); 0 zone resets 00:13:50.433 slat (nsec): min=7821, max=58409, avg=18238.46, stdev=7956.98 00:13:50.433 clat (usec): min=149, max=770, avg=215.31, stdev=38.86 00:13:50.433 lat (usec): min=157, max=780, avg=233.55, stdev=42.46 00:13:50.433 clat percentiles (usec): 00:13:50.433 | 1.00th=[ 155], 5.00th=[ 165], 10.00th=[ 174], 20.00th=[ 186], 00:13:50.433 | 30.00th=[ 198], 40.00th=[ 206], 50.00th=[ 212], 60.00th=[ 219], 00:13:50.433 | 70.00th=[ 227], 80.00th=[ 237], 90.00th=[ 253], 95.00th=[ 281], 00:13:50.433 | 99.00th=[ 330], 99.50th=[ 338], 99.90th=[ 367], 99.95th=[ 775], 00:13:50.433 | 99.99th=[ 775] 00:13:50.433 bw ( KiB/s): min= 1072, max= 7120, per=28.85%, avg=4096.00, stdev=4276.58, samples=2 00:13:50.433 iops : min= 268, max= 1780, avg=1024.00, stdev=1069.15, samples=2 00:13:50.433 lat (usec) : 250=60.14%, 500=38.67%, 750=0.25%, 1000=0.06% 00:13:50.433 lat (msec) : 50=0.88% 00:13:50.433 cpu : usr=1.69%, sys=3.99%, ctx=1595, majf=0, minf=1 00:13:50.433 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:50.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.433 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.433 issued rwts: total=569,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.433 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:50.433 job1: (groupid=0, jobs=1): err= 0: pid=1108405: Fri Jul 12 17:02:49 2024 00:13:50.433 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:13:50.433 slat (nsec): min=4610, max=70934, avg=12740.33, stdev=9279.42 00:13:50.433 clat (usec): min=177, max=41226, avg=433.62, stdev=2544.60 00:13:50.433 lat (usec): min=182, max=41236, avg=446.36, stdev=2545.25 00:13:50.433 clat percentiles (usec): 00:13:50.433 | 1.00th=[ 184], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 212], 00:13:50.433 | 30.00th=[ 243], 40.00th=[ 253], 50.00th=[ 265], 60.00th=[ 277], 00:13:50.433 | 70.00th=[ 293], 80.00th=[ 310], 90.00th=[ 371], 95.00th=[ 420], 00:13:50.433 | 99.00th=[ 578], 99.50th=[ 652], 99.90th=[41157], 99.95th=[41157], 00:13:50.433 | 99.99th=[41157] 00:13:50.433 write: IOPS=1613, BW=6454KiB/s (6608kB/s)(6460KiB/1001msec); 0 zone resets 00:13:50.433 slat (nsec): min=6400, max=51763, avg=10312.41, stdev=4530.64 00:13:50.433 clat (usec): min=123, max=881, avg=175.90, stdev=46.72 00:13:50.433 lat (usec): min=129, max=889, avg=186.21, stdev=48.79 00:13:50.433 clat percentiles (usec): 00:13:50.433 | 1.00th=[ 127], 5.00th=[ 130], 10.00th=[ 133], 20.00th=[ 137], 00:13:50.433 | 30.00th=[ 143], 40.00th=[ 149], 50.00th=[ 157], 60.00th=[ 172], 00:13:50.433 | 70.00th=[ 204], 80.00th=[ 229], 90.00th=[ 241], 95.00th=[ 245], 00:13:50.433 | 99.00th=[ 269], 99.50th=[ 289], 99.90th=[ 396], 99.95th=[ 881], 00:13:50.433 | 99.99th=[ 881] 00:13:50.433 bw ( KiB/s): min= 4096, max= 4096, per=28.85%, avg=4096.00, stdev= 0.00, samples=1 00:13:50.433 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:50.433 lat (usec) : 250=67.25%, 500=31.99%, 750=0.54%, 1000=0.03% 00:13:50.433 lat (msec) : 50=0.19% 00:13:50.433 cpu : usr=2.40%, sys=3.40%, ctx=3153, majf=0, minf=1 00:13:50.433 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:50.433 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.433 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.433 issued rwts: total=1536,1615,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.433 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:50.433 job2: (groupid=0, jobs=1): err= 0: pid=1108406: Fri Jul 12 17:02:49 2024 00:13:50.433 read: IOPS=22, BW=89.4KiB/s (91.6kB/s)(92.0KiB/1029msec) 00:13:50.433 slat (nsec): min=7143, max=34087, avg=22048.87, stdev=9977.87 00:13:50.433 clat (usec): min=260, max=41840, avg=39215.30, stdev=8494.25 00:13:50.433 lat (usec): min=274, max=41873, avg=39237.35, stdev=8496.14 00:13:50.433 clat percentiles (usec): 00:13:50.433 | 1.00th=[ 262], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:13:50.433 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:50.433 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:50.433 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:13:50.433 | 99.99th=[41681] 00:13:50.433 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:13:50.433 slat (nsec): min=6878, max=75412, avg=10432.60, stdev=4788.79 00:13:50.433 clat (usec): min=150, max=396, avg=232.29, stdev=22.38 00:13:50.433 lat (usec): min=160, max=471, avg=242.72, stdev=23.58 00:13:50.433 clat percentiles (usec): 00:13:50.433 | 1.00th=[ 174], 5.00th=[ 204], 10.00th=[ 210], 20.00th=[ 217], 00:13:50.433 | 30.00th=[ 221], 40.00th=[ 227], 50.00th=[ 231], 60.00th=[ 235], 00:13:50.433 | 70.00th=[ 243], 80.00th=[ 249], 90.00th=[ 258], 95.00th=[ 269], 00:13:50.433 | 99.00th=[ 293], 99.50th=[ 322], 99.90th=[ 396], 99.95th=[ 396], 00:13:50.433 | 99.99th=[ 396] 00:13:50.433 bw ( KiB/s): min= 4096, max= 4096, per=28.85%, avg=4096.00, stdev= 0.00, samples=1 00:13:50.433 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:50.433 lat (usec) : 250=79.44%, 500=16.45% 00:13:50.433 lat (msec) : 50=4.11% 00:13:50.433 cpu : usr=0.00%, sys=0.78%, ctx=535, majf=0, minf=2 00:13:50.433 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:50.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.434 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.434 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.434 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:50.434 job3: (groupid=0, jobs=1): err= 0: pid=1108407: Fri Jul 12 17:02:49 2024 00:13:50.434 read: IOPS=21, BW=85.3KiB/s (87.3kB/s)(88.0KiB/1032msec) 00:13:50.434 slat (nsec): min=10876, max=62947, avg=22422.77, stdev=12035.91 00:13:50.434 clat (usec): min=40844, max=41767, avg=41009.75, stdev=176.36 00:13:50.434 lat (usec): min=40878, max=41830, avg=41032.17, stdev=184.90 00:13:50.434 clat percentiles (usec): 00:13:50.434 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:13:50.434 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:50.434 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:50.434 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:13:50.434 | 99.99th=[41681] 00:13:50.434 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:13:50.434 slat (nsec): min=7060, max=42745, avg=12037.52, stdev=5592.15 00:13:50.434 clat (usec): min=132, max=498, avg=235.85, stdev=39.21 00:13:50.434 lat (usec): min=159, max=506, avg=247.89, stdev=39.34 
00:13:50.434 clat percentiles (usec): 00:13:50.434 | 1.00th=[ 178], 5.00th=[ 198], 10.00th=[ 206], 20.00th=[ 217], 00:13:50.434 | 30.00th=[ 221], 40.00th=[ 225], 50.00th=[ 231], 60.00th=[ 235], 00:13:50.434 | 70.00th=[ 239], 80.00th=[ 245], 90.00th=[ 260], 95.00th=[ 289], 00:13:50.434 | 99.00th=[ 416], 99.50th=[ 437], 99.90th=[ 498], 99.95th=[ 498], 00:13:50.434 | 99.99th=[ 498] 00:13:50.434 bw ( KiB/s): min= 4096, max= 4096, per=28.85%, avg=4096.00, stdev= 0.00, samples=1 00:13:50.434 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:50.434 lat (usec) : 250=82.02%, 500=13.86% 00:13:50.434 lat (msec) : 50=4.12% 00:13:50.434 cpu : usr=0.19%, sys=0.68%, ctx=534, majf=0, minf=1 00:13:50.434 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:50.434 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.434 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.434 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.434 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:50.434 00:13:50.434 Run status group 0 (all jobs): 00:13:50.434 READ: bw=8333KiB/s (8533kB/s), 85.3KiB/s-6138KiB/s (87.3kB/s-6285kB/s), io=8600KiB (8806kB), run=1001-1032msec 00:13:50.434 WRITE: bw=13.9MiB/s (14.5MB/s), 1984KiB/s-6454KiB/s (2032kB/s-6608kB/s), io=14.3MiB (15.0MB), run=1001-1032msec 00:13:50.434 00:13:50.434 Disk stats (read/write): 00:13:50.434 nvme0n1: ios=617/1024, merge=0/0, ticks=764/214, in_queue=978, util=97.80% 00:13:50.434 nvme0n2: ios=1074/1384, merge=0/0, ticks=1323/241, in_queue=1564, util=97.97% 00:13:50.434 nvme0n3: ios=44/512, merge=0/0, ticks=929/119, in_queue=1048, util=89.55% 00:13:50.434 nvme0n4: ios=17/512, merge=0/0, ticks=698/114, in_queue=812, util=89.57% 00:13:50.434 17:02:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:13:50.434 [global] 00:13:50.434 thread=1 00:13:50.434 invalidate=1 00:13:50.434 rw=write 00:13:50.434 time_based=1 00:13:50.434 runtime=1 00:13:50.434 ioengine=libaio 00:13:50.434 direct=1 00:13:50.434 bs=4096 00:13:50.434 iodepth=128 00:13:50.434 norandommap=0 00:13:50.434 numjobs=1 00:13:50.434 00:13:50.434 verify_dump=1 00:13:50.434 verify_backlog=512 00:13:50.434 verify_state_save=0 00:13:50.434 do_verify=1 00:13:50.434 verify=crc32c-intel 00:13:50.434 [job0] 00:13:50.434 filename=/dev/nvme0n1 00:13:50.434 [job1] 00:13:50.434 filename=/dev/nvme0n2 00:13:50.434 [job2] 00:13:50.434 filename=/dev/nvme0n3 00:13:50.434 [job3] 00:13:50.434 filename=/dev/nvme0n4 00:13:50.434 Could not set queue depth (nvme0n1) 00:13:50.434 Could not set queue depth (nvme0n2) 00:13:50.434 Could not set queue depth (nvme0n3) 00:13:50.434 Could not set queue depth (nvme0n4) 00:13:50.434 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:50.434 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:50.434 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:50.434 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:50.434 fio-3.35 00:13:50.434 Starting 4 threads 00:13:51.808 00:13:51.808 job0: (groupid=0, jobs=1): err= 0: pid=1108636: Fri Jul 12 17:02:51 2024 00:13:51.808 read: IOPS=2612, BW=10.2MiB/s 
(10.7MB/s)(10.7MiB/1044msec) 00:13:51.808 slat (usec): min=2, max=16761, avg=165.05, stdev=943.44 00:13:51.808 clat (usec): min=4427, max=64489, avg=21230.93, stdev=10556.20 00:13:51.808 lat (usec): min=4434, max=64493, avg=21395.98, stdev=10621.00 00:13:51.808 clat percentiles (usec): 00:13:51.808 | 1.00th=[ 6521], 5.00th=[ 9896], 10.00th=[10421], 20.00th=[15401], 00:13:51.808 | 30.00th=[16188], 40.00th=[18220], 50.00th=[18744], 60.00th=[20055], 00:13:51.808 | 70.00th=[20841], 80.00th=[25560], 90.00th=[30802], 95.00th=[47973], 00:13:51.808 | 99.00th=[60031], 99.50th=[60031], 99.90th=[60031], 99.95th=[60031], 00:13:51.808 | 99.99th=[64750] 00:13:51.808 write: IOPS=2942, BW=11.5MiB/s (12.1MB/s)(12.0MiB/1044msec); 0 zone resets 00:13:51.808 slat (usec): min=3, max=15036, avg=170.68, stdev=761.75 00:13:51.808 clat (usec): min=1339, max=71086, avg=24245.63, stdev=10783.13 00:13:51.808 lat (usec): min=1381, max=71095, avg=24416.31, stdev=10839.30 00:13:51.808 clat percentiles (usec): 00:13:51.808 | 1.00th=[ 4817], 5.00th=[ 9765], 10.00th=[10814], 20.00th=[12125], 00:13:51.808 | 30.00th=[16450], 40.00th=[22152], 50.00th=[24511], 60.00th=[26608], 00:13:51.808 | 70.00th=[30016], 80.00th=[33817], 90.00th=[38536], 95.00th=[42730], 00:13:51.808 | 99.00th=[45876], 99.50th=[46924], 99.90th=[67634], 99.95th=[67634], 00:13:51.808 | 99.99th=[70779] 00:13:51.808 bw ( KiB/s): min=10864, max=13712, per=19.32%, avg=12288.00, stdev=2013.84, samples=2 00:13:51.808 iops : min= 2716, max= 3428, avg=3072.00, stdev=503.46, samples=2 00:13:51.808 lat (msec) : 2=0.02%, 4=0.24%, 10=6.35%, 20=39.97%, 50=51.70% 00:13:51.808 lat (msec) : 100=1.72% 00:13:51.808 cpu : usr=2.88%, sys=3.93%, ctx=391, majf=0, minf=1 00:13:51.808 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:13:51.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:51.808 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:51.808 issued rwts: total=2727,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:51.808 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:51.808 job1: (groupid=0, jobs=1): err= 0: pid=1108637: Fri Jul 12 17:02:51 2024 00:13:51.808 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:13:51.808 slat (usec): min=3, max=12205, avg=145.05, stdev=797.96 00:13:51.808 clat (usec): min=8614, max=56345, avg=18315.51, stdev=8133.60 00:13:51.808 lat (usec): min=8626, max=56366, avg=18460.56, stdev=8212.51 00:13:51.808 clat percentiles (usec): 00:13:51.808 | 1.00th=[ 9372], 5.00th=[10290], 10.00th=[10683], 20.00th=[11600], 00:13:51.808 | 30.00th=[14615], 40.00th=[15926], 50.00th=[16909], 60.00th=[18482], 00:13:51.808 | 70.00th=[19268], 80.00th=[21103], 90.00th=[25822], 95.00th=[40633], 00:13:51.808 | 99.00th=[47449], 99.50th=[48497], 99.90th=[50594], 99.95th=[52167], 00:13:51.808 | 99.99th=[56361] 00:13:51.808 write: IOPS=3675, BW=14.4MiB/s (15.1MB/s)(14.4MiB/1004msec); 0 zone resets 00:13:51.808 slat (usec): min=4, max=13140, avg=120.44, stdev=655.01 00:13:51.808 clat (usec): min=3448, max=50920, avg=16592.95, stdev=8289.04 00:13:51.808 lat (usec): min=4425, max=50935, avg=16713.39, stdev=8345.75 00:13:51.808 clat percentiles (usec): 00:13:51.808 | 1.00th=[ 8356], 5.00th=[ 9634], 10.00th=[10421], 20.00th=[10945], 00:13:51.808 | 30.00th=[11076], 40.00th=[13698], 50.00th=[14353], 60.00th=[15008], 00:13:51.808 | 70.00th=[15664], 80.00th=[20841], 90.00th=[25035], 95.00th=[34866], 00:13:51.808 | 99.00th=[47973], 99.50th=[49021], 99.90th=[49546], 
99.95th=[49546], 00:13:51.808 | 99.99th=[51119] 00:13:51.808 bw ( KiB/s): min=12288, max=16384, per=22.54%, avg=14336.00, stdev=2896.31, samples=2 00:13:51.808 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:13:51.808 lat (msec) : 4=0.01%, 10=5.43%, 20=71.62%, 50=22.70%, 100=0.23% 00:13:51.808 cpu : usr=4.39%, sys=7.08%, ctx=305, majf=0, minf=1 00:13:51.808 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:13:51.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:51.808 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:51.808 issued rwts: total=3584,3690,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:51.808 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:51.808 job2: (groupid=0, jobs=1): err= 0: pid=1108638: Fri Jul 12 17:02:51 2024 00:13:51.808 read: IOPS=5063, BW=19.8MiB/s (20.7MB/s)(19.9MiB/1008msec) 00:13:51.808 slat (usec): min=3, max=11474, avg=103.86, stdev=720.82 00:13:51.808 clat (usec): min=4207, max=25482, avg=13050.31, stdev=3254.73 00:13:51.808 lat (usec): min=4219, max=25490, avg=13154.17, stdev=3301.98 00:13:51.808 clat percentiles (usec): 00:13:51.808 | 1.00th=[ 5866], 5.00th=[ 8717], 10.00th=[10814], 20.00th=[11469], 00:13:51.808 | 30.00th=[11731], 40.00th=[11863], 50.00th=[11994], 60.00th=[12125], 00:13:51.808 | 70.00th=[12518], 80.00th=[14615], 90.00th=[17957], 95.00th=[20579], 00:13:51.808 | 99.00th=[22938], 99.50th=[23987], 99.90th=[24773], 99.95th=[25560], 00:13:51.808 | 99.99th=[25560] 00:13:51.808 write: IOPS=5079, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1008msec); 0 zone resets 00:13:51.808 slat (usec): min=3, max=17677, avg=83.20, stdev=521.95 00:13:51.808 clat (usec): min=1883, max=25485, avg=11751.64, stdev=2411.25 00:13:51.808 lat (usec): min=2577, max=25493, avg=11834.83, stdev=2454.03 00:13:51.808 clat percentiles (usec): 00:13:51.809 | 1.00th=[ 4080], 5.00th=[ 6325], 10.00th=[ 8455], 20.00th=[10945], 00:13:51.809 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12125], 60.00th=[12649], 00:13:51.809 | 70.00th=[12911], 80.00th=[13042], 90.00th=[13304], 95.00th=[13566], 00:13:51.809 | 99.00th=[19006], 99.50th=[19530], 99.90th=[23725], 99.95th=[23987], 00:13:51.809 | 99.99th=[25560] 00:13:51.809 bw ( KiB/s): min=20480, max=20480, per=32.20%, avg=20480.00, stdev= 0.00, samples=2 00:13:51.809 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:13:51.809 lat (msec) : 2=0.01%, 4=0.48%, 10=10.87%, 20=85.55%, 50=3.09% 00:13:51.809 cpu : usr=5.86%, sys=9.14%, ctx=561, majf=0, minf=1 00:13:51.809 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:51.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:51.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:51.809 issued rwts: total=5104,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:51.809 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:51.809 job3: (groupid=0, jobs=1): err= 0: pid=1108639: Fri Jul 12 17:02:51 2024 00:13:51.809 read: IOPS=4562, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1010msec) 00:13:51.809 slat (usec): min=2, max=10880, avg=101.12, stdev=750.49 00:13:51.809 clat (usec): min=4164, max=24126, avg=13004.24, stdev=2967.39 00:13:51.809 lat (usec): min=4168, max=24136, avg=13105.36, stdev=3047.17 00:13:51.809 clat percentiles (usec): 00:13:51.809 | 1.00th=[ 5997], 5.00th=[ 9896], 10.00th=[11076], 20.00th=[11600], 00:13:51.809 | 30.00th=[11600], 40.00th=[11731], 50.00th=[11863], 60.00th=[12125], 
00:13:51.809 | 70.00th=[13698], 80.00th=[15795], 90.00th=[16909], 95.00th=[18744], 00:13:51.809 | 99.00th=[22152], 99.50th=[22676], 99.90th=[23725], 99.95th=[23725], 00:13:51.809 | 99.99th=[24249] 00:13:51.809 write: IOPS=4671, BW=18.2MiB/s (19.1MB/s)(18.4MiB/1010msec); 0 zone resets 00:13:51.809 slat (usec): min=3, max=25201, avg=102.74, stdev=934.57 00:13:51.809 clat (usec): min=463, max=72490, avg=14392.90, stdev=8299.05 00:13:51.809 lat (usec): min=1742, max=72636, avg=14495.65, stdev=8396.81 00:13:51.809 clat percentiles (usec): 00:13:51.809 | 1.00th=[ 3163], 5.00th=[ 6063], 10.00th=[ 9241], 20.00th=[11076], 00:13:51.809 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12125], 60.00th=[12649], 00:13:51.809 | 70.00th=[12911], 80.00th=[16319], 90.00th=[20055], 95.00th=[31589], 00:13:51.809 | 99.00th=[52167], 99.50th=[52167], 99.90th=[52167], 99.95th=[56886], 00:13:51.809 | 99.99th=[72877] 00:13:51.809 bw ( KiB/s): min=16384, max=20480, per=28.98%, avg=18432.00, stdev=2896.31, samples=2 00:13:51.809 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:13:51.809 lat (usec) : 500=0.01% 00:13:51.809 lat (msec) : 4=0.79%, 10=9.06%, 20=83.36%, 50=6.07%, 100=0.71% 00:13:51.809 cpu : usr=2.87%, sys=4.46%, ctx=367, majf=0, minf=1 00:13:51.809 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:13:51.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:51.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:51.809 issued rwts: total=4608,4718,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:51.809 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:51.809 00:13:51.809 Run status group 0 (all jobs): 00:13:51.809 READ: bw=60.0MiB/s (62.9MB/s), 10.2MiB/s-19.8MiB/s (10.7MB/s-20.7MB/s), io=62.6MiB (65.6MB), run=1004-1044msec 00:13:51.809 WRITE: bw=62.1MiB/s (65.1MB/s), 11.5MiB/s-19.8MiB/s (12.1MB/s-20.8MB/s), io=64.8MiB (68.0MB), run=1004-1044msec 00:13:51.809 00:13:51.809 Disk stats (read/write): 00:13:51.809 nvme0n1: ios=2602/2560, merge=0/0, ticks=26396/33894, in_queue=60290, util=91.18% 00:13:51.809 nvme0n2: ios=3047/3072, merge=0/0, ticks=19498/15217, in_queue=34715, util=97.86% 00:13:51.809 nvme0n3: ios=4121/4535, merge=0/0, ticks=50719/51134, in_queue=101853, util=90.60% 00:13:51.809 nvme0n4: ios=3584/4087, merge=0/0, ticks=38402/45579, in_queue=83981, util=89.58% 00:13:51.809 17:02:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:51.809 [global] 00:13:51.809 thread=1 00:13:51.809 invalidate=1 00:13:51.809 rw=randwrite 00:13:51.809 time_based=1 00:13:51.809 runtime=1 00:13:51.809 ioengine=libaio 00:13:51.809 direct=1 00:13:51.809 bs=4096 00:13:51.809 iodepth=128 00:13:51.809 norandommap=0 00:13:51.809 numjobs=1 00:13:51.809 00:13:51.809 verify_dump=1 00:13:51.809 verify_backlog=512 00:13:51.809 verify_state_save=0 00:13:51.809 do_verify=1 00:13:51.809 verify=crc32c-intel 00:13:51.809 [job0] 00:13:51.809 filename=/dev/nvme0n1 00:13:51.809 [job1] 00:13:51.809 filename=/dev/nvme0n2 00:13:51.809 [job2] 00:13:51.809 filename=/dev/nvme0n3 00:13:51.809 [job3] 00:13:51.809 filename=/dev/nvme0n4 00:13:51.809 Could not set queue depth (nvme0n1) 00:13:51.809 Could not set queue depth (nvme0n2) 00:13:51.809 Could not set queue depth (nvme0n3) 00:13:51.809 Could not set queue depth (nvme0n4) 00:13:52.066 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:13:52.066 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:52.066 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:52.066 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:52.066 fio-3.35 00:13:52.066 Starting 4 threads 00:13:53.441 00:13:53.441 job0: (groupid=0, jobs=1): err= 0: pid=1108865: Fri Jul 12 17:02:52 2024 00:13:53.441 read: IOPS=2537, BW=9.91MiB/s (10.4MB/s)(10.0MiB/1009msec) 00:13:53.441 slat (usec): min=2, max=15955, avg=204.61, stdev=1288.34 00:13:53.441 clat (usec): min=5808, max=56238, avg=26850.69, stdev=11295.61 00:13:53.441 lat (usec): min=5836, max=56259, avg=27055.30, stdev=11401.19 00:13:53.441 clat percentiles (usec): 00:13:53.441 | 1.00th=[ 7570], 5.00th=[ 9372], 10.00th=[11076], 20.00th=[11731], 00:13:53.441 | 30.00th=[20055], 40.00th=[26608], 50.00th=[28705], 60.00th=[31851], 00:13:53.441 | 70.00th=[34341], 80.00th=[38536], 90.00th=[40109], 95.00th=[42206], 00:13:53.441 | 99.00th=[46400], 99.50th=[48497], 99.90th=[52167], 99.95th=[56361], 00:13:53.441 | 99.99th=[56361] 00:13:53.441 write: IOPS=2586, BW=10.1MiB/s (10.6MB/s)(10.2MiB/1009msec); 0 zone resets 00:13:53.441 slat (usec): min=3, max=16606, avg=175.45, stdev=1124.26 00:13:53.441 clat (usec): min=5152, max=45408, avg=22393.48, stdev=9111.85 00:13:53.441 lat (usec): min=8352, max=45423, avg=22568.93, stdev=9184.79 00:13:53.441 clat percentiles (usec): 00:13:53.441 | 1.00th=[ 8717], 5.00th=[ 9765], 10.00th=[10945], 20.00th=[11469], 00:13:53.441 | 30.00th=[17171], 40.00th=[19792], 50.00th=[22676], 60.00th=[24249], 00:13:53.441 | 70.00th=[27132], 80.00th=[30802], 90.00th=[33817], 95.00th=[39060], 00:13:53.441 | 99.00th=[44303], 99.50th=[44303], 99.90th=[44303], 99.95th=[44827], 00:13:53.441 | 99.99th=[45351] 00:13:53.441 bw ( KiB/s): min= 8192, max=12288, per=17.66%, avg=10240.00, stdev=2896.31, samples=2 00:13:53.441 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:13:53.441 lat (msec) : 10=6.71%, 20=28.20%, 50=64.95%, 100=0.14% 00:13:53.441 cpu : usr=2.18%, sys=3.67%, ctx=188, majf=0, minf=15 00:13:53.441 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:13:53.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:53.441 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:53.441 issued rwts: total=2560,2610,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:53.441 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:53.441 job1: (groupid=0, jobs=1): err= 0: pid=1108872: Fri Jul 12 17:02:52 2024 00:13:53.441 read: IOPS=6223, BW=24.3MiB/s (25.5MB/s)(24.4MiB/1004msec) 00:13:53.441 slat (usec): min=3, max=4229, avg=73.58, stdev=404.94 00:13:53.441 clat (usec): min=3283, max=14852, avg=9660.32, stdev=1267.17 00:13:53.441 lat (usec): min=3288, max=14868, avg=9733.90, stdev=1301.94 00:13:53.441 clat percentiles (usec): 00:13:53.441 | 1.00th=[ 6456], 5.00th=[ 7570], 10.00th=[ 8291], 20.00th=[ 8979], 00:13:53.441 | 30.00th=[ 9241], 40.00th=[ 9372], 50.00th=[ 9634], 60.00th=[ 9765], 00:13:53.441 | 70.00th=[10159], 80.00th=[10683], 90.00th=[11076], 95.00th=[11600], 00:13:53.441 | 99.00th=[12911], 99.50th=[13304], 99.90th=[13829], 99.95th=[13829], 00:13:53.441 | 99.99th=[14877] 00:13:53.441 write: IOPS=6629, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1004msec); 0 zone resets 00:13:53.441 slat (usec): 
min=3, max=7369, avg=70.50, stdev=352.72 00:13:53.441 clat (usec): min=5283, max=16533, avg=10020.90, stdev=1387.29 00:13:53.441 lat (usec): min=5512, max=17437, avg=10091.40, stdev=1408.27 00:13:53.441 clat percentiles (usec): 00:13:53.441 | 1.00th=[ 6325], 5.00th=[ 7767], 10.00th=[ 8717], 20.00th=[ 9241], 00:13:53.441 | 30.00th=[ 9372], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10159], 00:13:53.441 | 70.00th=[10290], 80.00th=[10683], 90.00th=[11469], 95.00th=[12649], 00:13:53.441 | 99.00th=[14484], 99.50th=[15139], 99.90th=[16581], 99.95th=[16581], 00:13:53.441 | 99.99th=[16581] 00:13:53.441 bw ( KiB/s): min=24672, max=28392, per=45.76%, avg=26532.00, stdev=2630.44, samples=2 00:13:53.441 iops : min= 6168, max= 7098, avg=6633.00, stdev=657.61, samples=2 00:13:53.441 lat (msec) : 4=0.32%, 10=59.31%, 20=40.37% 00:13:53.441 cpu : usr=7.58%, sys=14.26%, ctx=641, majf=0, minf=9 00:13:53.441 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:13:53.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:53.441 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:53.441 issued rwts: total=6248,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:53.441 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:53.441 job2: (groupid=0, jobs=1): err= 0: pid=1108873: Fri Jul 12 17:02:52 2024 00:13:53.441 read: IOPS=3088, BW=12.1MiB/s (12.7MB/s)(12.1MiB/1007msec) 00:13:53.441 slat (usec): min=2, max=14112, avg=149.82, stdev=903.57 00:13:53.441 clat (usec): min=4129, max=36446, avg=19690.85, stdev=4140.20 00:13:53.441 lat (usec): min=8458, max=36476, avg=19840.67, stdev=4192.26 00:13:53.441 clat percentiles (usec): 00:13:53.441 | 1.00th=[12518], 5.00th=[13960], 10.00th=[15270], 20.00th=[16188], 00:13:53.441 | 30.00th=[16909], 40.00th=[17957], 50.00th=[18744], 60.00th=[20055], 00:13:53.441 | 70.00th=[21890], 80.00th=[23725], 90.00th=[25560], 95.00th=[27919], 00:13:53.441 | 99.00th=[28705], 99.50th=[28705], 99.90th=[32113], 99.95th=[35914], 00:13:53.441 | 99.99th=[36439] 00:13:53.441 write: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec); 0 zone resets 00:13:53.441 slat (usec): min=3, max=11103, avg=143.52, stdev=1048.40 00:13:53.441 clat (usec): min=8187, max=35870, avg=18410.74, stdev=2788.93 00:13:53.441 lat (usec): min=8192, max=35883, avg=18554.25, stdev=2975.09 00:13:53.441 clat percentiles (usec): 00:13:53.441 | 1.00th=[12780], 5.00th=[14353], 10.00th=[15270], 20.00th=[16450], 00:13:53.441 | 30.00th=[16909], 40.00th=[17433], 50.00th=[17957], 60.00th=[18744], 00:13:53.441 | 70.00th=[19268], 80.00th=[20841], 90.00th=[22152], 95.00th=[22414], 00:13:53.441 | 99.00th=[25297], 99.50th=[27395], 99.90th=[32900], 99.95th=[33162], 00:13:53.441 | 99.99th=[35914] 00:13:53.441 bw ( KiB/s): min=11832, max=16120, per=24.10%, avg=13976.00, stdev=3032.07, samples=2 00:13:53.441 iops : min= 2958, max= 4030, avg=3494.00, stdev=758.02, samples=2 00:13:53.441 lat (msec) : 10=0.40%, 20=68.45%, 50=31.15% 00:13:53.441 cpu : usr=1.89%, sys=4.77%, ctx=173, majf=0, minf=13 00:13:53.441 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:13:53.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:53.441 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:53.441 issued rwts: total=3110,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:53.441 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:53.441 job3: (groupid=0, jobs=1): err= 0: pid=1108874: Fri Jul 
12 17:02:52 2024 00:13:53.441 read: IOPS=1522, BW=6089KiB/s (6235kB/s)(6144KiB/1009msec) 00:13:53.441 slat (usec): min=2, max=15527, avg=335.90, stdev=1663.59 00:13:53.441 clat (usec): min=13269, max=68823, avg=40984.04, stdev=9435.25 00:13:53.441 lat (usec): min=13280, max=68857, avg=41319.94, stdev=9560.27 00:13:53.441 clat percentiles (usec): 00:13:53.441 | 1.00th=[16057], 5.00th=[24511], 10.00th=[28967], 20.00th=[33817], 00:13:53.441 | 30.00th=[36963], 40.00th=[39584], 50.00th=[41681], 60.00th=[45351], 00:13:53.441 | 70.00th=[46400], 80.00th=[47973], 90.00th=[52167], 95.00th=[54789], 00:13:53.441 | 99.00th=[60556], 99.50th=[66323], 99.90th=[66323], 99.95th=[68682], 00:13:53.441 | 99.99th=[68682] 00:13:53.441 write: IOPS=1760, BW=7041KiB/s (7210kB/s)(7104KiB/1009msec); 0 zone resets 00:13:53.441 slat (usec): min=3, max=10948, avg=268.95, stdev=1297.86 00:13:53.441 clat (usec): min=1269, max=74016, avg=36291.95, stdev=18773.26 00:13:53.441 lat (usec): min=7145, max=74048, avg=36560.90, stdev=18911.72 00:13:53.441 clat percentiles (usec): 00:13:53.441 | 1.00th=[10159], 5.00th=[12256], 10.00th=[14484], 20.00th=[18744], 00:13:53.441 | 30.00th=[26608], 40.00th=[28967], 50.00th=[30802], 60.00th=[33424], 00:13:53.441 | 70.00th=[45351], 80.00th=[60031], 90.00th=[66847], 95.00th=[70779], 00:13:53.441 | 99.00th=[73925], 99.50th=[73925], 99.90th=[73925], 99.95th=[73925], 00:13:53.441 | 99.99th=[73925] 00:13:53.441 bw ( KiB/s): min= 5088, max= 8096, per=11.37%, avg=6592.00, stdev=2126.98, samples=2 00:13:53.441 iops : min= 1272, max= 2024, avg=1648.00, stdev=531.74, samples=2 00:13:53.441 lat (msec) : 2=0.03%, 10=0.15%, 20=12.92%, 50=66.15%, 100=20.74% 00:13:53.441 cpu : usr=1.39%, sys=2.38%, ctx=170, majf=0, minf=13 00:13:53.441 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:13:53.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:53.441 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:53.441 issued rwts: total=1536,1776,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:53.441 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:53.441 00:13:53.441 Run status group 0 (all jobs): 00:13:53.441 READ: bw=52.1MiB/s (54.6MB/s), 6089KiB/s-24.3MiB/s (6235kB/s-25.5MB/s), io=52.6MiB (55.1MB), run=1004-1009msec 00:13:53.441 WRITE: bw=56.6MiB/s (59.4MB/s), 7041KiB/s-25.9MiB/s (7210kB/s-27.2MB/s), io=57.1MiB (59.9MB), run=1004-1009msec 00:13:53.441 00:13:53.441 Disk stats (read/write): 00:13:53.441 nvme0n1: ios=2098/2550, merge=0/0, ticks=15874/15611, in_queue=31485, util=85.77% 00:13:53.441 nvme0n2: ios=5222/5632, merge=0/0, ticks=24260/25120, in_queue=49380, util=96.74% 00:13:53.441 nvme0n3: ios=2710/3072, merge=0/0, ticks=25492/25925, in_queue=51417, util=97.59% 00:13:53.441 nvme0n4: ios=1187/1536, merge=0/0, ticks=17318/18183, in_queue=35501, util=95.89% 00:13:53.441 17:02:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:13:53.441 17:02:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1109010 00:13:53.441 17:02:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:13:53.441 17:02:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:13:53.441 [global] 00:13:53.441 thread=1 00:13:53.441 invalidate=1 00:13:53.441 rw=read 00:13:53.441 time_based=1 00:13:53.441 runtime=10 00:13:53.441 ioengine=libaio 00:13:53.441 direct=1 00:13:53.441 bs=4096 00:13:53.441 iodepth=1 00:13:53.441 
norandommap=1 00:13:53.441 numjobs=1 00:13:53.441 00:13:53.441 [job0] 00:13:53.441 filename=/dev/nvme0n1 00:13:53.441 [job1] 00:13:53.441 filename=/dev/nvme0n2 00:13:53.441 [job2] 00:13:53.441 filename=/dev/nvme0n3 00:13:53.441 [job3] 00:13:53.441 filename=/dev/nvme0n4 00:13:53.441 Could not set queue depth (nvme0n1) 00:13:53.441 Could not set queue depth (nvme0n2) 00:13:53.441 Could not set queue depth (nvme0n3) 00:13:53.441 Could not set queue depth (nvme0n4) 00:13:53.441 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:53.441 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:53.441 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:53.441 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:53.441 fio-3.35 00:13:53.441 Starting 4 threads 00:13:56.721 17:02:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:13:56.721 17:02:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:13:56.721 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=33054720, buflen=4096 00:13:56.721 fio: pid=1109220, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:56.721 17:02:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:56.721 17:02:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:13:56.721 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=8531968, buflen=4096 00:13:56.721 fio: pid=1109219, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:56.979 17:02:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:56.979 17:02:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:13:56.979 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=5222400, buflen=4096 00:13:56.979 fio: pid=1109217, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:57.237 17:02:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:57.237 17:02:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:13:57.237 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=46628864, buflen=4096 00:13:57.237 fio: pid=1109218, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:57.237 00:13:57.237 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1109217: Fri Jul 12 17:02:56 2024 00:13:57.237 read: IOPS=369, BW=1475KiB/s (1511kB/s)(5100KiB/3457msec) 00:13:57.237 slat (usec): min=5, max=3921, avg=14.43, stdev=109.59 00:13:57.237 clat (usec): min=188, max=42043, avg=2676.41, stdev=9603.41 00:13:57.237 lat (usec): min=194, max=45052, avg=2690.83, stdev=9617.84 00:13:57.237 clat percentiles (usec): 00:13:57.237 | 1.00th=[ 196], 5.00th=[ 
206], 10.00th=[ 215], 20.00th=[ 233], 00:13:57.237 | 30.00th=[ 249], 40.00th=[ 265], 50.00th=[ 281], 60.00th=[ 293], 00:13:57.237 | 70.00th=[ 306], 80.00th=[ 326], 90.00th=[ 359], 95.00th=[41157], 00:13:57.237 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:13:57.237 | 99.99th=[42206] 00:13:57.237 bw ( KiB/s): min= 96, max= 9576, per=6.85%, avg=1684.00, stdev=3866.30, samples=6 00:13:57.237 iops : min= 24, max= 2394, avg=421.00, stdev=966.57, samples=6 00:13:57.237 lat (usec) : 250=31.11%, 500=62.85%, 750=0.08% 00:13:57.237 lat (msec) : 50=5.88% 00:13:57.237 cpu : usr=0.17%, sys=0.46%, ctx=1278, majf=0, minf=1 00:13:57.237 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:57.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:57.237 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:57.237 issued rwts: total=1276,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:57.237 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:57.237 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1109218: Fri Jul 12 17:02:56 2024 00:13:57.237 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(44.5MiB/3710msec) 00:13:57.237 slat (usec): min=4, max=24117, avg=16.58, stdev=287.99 00:13:57.237 clat (usec): min=171, max=41011, avg=304.15, stdev=805.74 00:13:57.237 lat (usec): min=176, max=41023, avg=320.47, stdev=855.79 00:13:57.237 clat percentiles (usec): 00:13:57.237 | 1.00th=[ 190], 5.00th=[ 202], 10.00th=[ 212], 20.00th=[ 227], 00:13:57.237 | 30.00th=[ 241], 40.00th=[ 253], 50.00th=[ 265], 60.00th=[ 281], 00:13:57.237 | 70.00th=[ 297], 80.00th=[ 326], 90.00th=[ 400], 95.00th=[ 474], 00:13:57.237 | 99.00th=[ 529], 99.50th=[ 570], 99.90th=[ 709], 99.95th=[13960], 00:13:57.237 | 99.99th=[41157] 00:13:57.237 bw ( KiB/s): min=10616, max=15448, per=51.61%, avg=12693.86, stdev=1455.31, samples=7 00:13:57.237 iops : min= 2654, max= 3862, avg=3173.43, stdev=363.85, samples=7 00:13:57.237 lat (usec) : 250=37.57%, 500=59.79%, 750=2.56%, 1000=0.01% 00:13:57.237 lat (msec) : 4=0.01%, 20=0.01%, 50=0.04% 00:13:57.237 cpu : usr=1.32%, sys=4.23%, ctx=11390, majf=0, minf=1 00:13:57.238 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:57.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:57.238 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:57.238 issued rwts: total=11385,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:57.238 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:57.238 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1109219: Fri Jul 12 17:02:56 2024 00:13:57.238 read: IOPS=658, BW=2632KiB/s (2695kB/s)(8332KiB/3166msec) 00:13:57.238 slat (nsec): min=5744, max=64678, avg=17622.91, stdev=10001.79 00:13:57.238 clat (usec): min=207, max=53841, avg=1487.07, stdev=6724.86 00:13:57.238 lat (usec): min=214, max=53849, avg=1504.69, stdev=6724.97 00:13:57.238 clat percentiles (usec): 00:13:57.238 | 1.00th=[ 219], 5.00th=[ 237], 10.00th=[ 255], 20.00th=[ 281], 00:13:57.238 | 30.00th=[ 297], 40.00th=[ 310], 50.00th=[ 330], 60.00th=[ 351], 00:13:57.238 | 70.00th=[ 379], 80.00th=[ 429], 90.00th=[ 502], 95.00th=[ 603], 00:13:57.238 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42730], 00:13:57.238 | 99.99th=[53740] 00:13:57.238 bw ( KiB/s): min= 96, max= 7368, per=11.23%, avg=2762.67, stdev=3141.44, samples=6 
00:13:57.238 iops : min= 24, max= 1842, avg=690.67, stdev=785.36, samples=6 00:13:57.238 lat (usec) : 250=8.45%, 500=81.38%, 750=7.15%, 1000=0.10% 00:13:57.238 lat (msec) : 2=0.10%, 50=2.74%, 100=0.05% 00:13:57.238 cpu : usr=0.66%, sys=1.26%, ctx=2084, majf=0, minf=1 00:13:57.238 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:57.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:57.238 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:57.238 issued rwts: total=2084,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:57.238 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:57.238 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1109220: Fri Jul 12 17:02:56 2024 00:13:57.238 read: IOPS=2784, BW=10.9MiB/s (11.4MB/s)(31.5MiB/2899msec) 00:13:57.238 slat (usec): min=5, max=104, avg=12.99, stdev= 8.60 00:13:57.238 clat (usec): min=186, max=42413, avg=340.53, stdev=1130.11 00:13:57.238 lat (usec): min=192, max=42429, avg=353.51, stdev=1130.81 00:13:57.238 clat percentiles (usec): 00:13:57.238 | 1.00th=[ 200], 5.00th=[ 212], 10.00th=[ 227], 20.00th=[ 245], 00:13:57.238 | 30.00th=[ 258], 40.00th=[ 273], 50.00th=[ 289], 60.00th=[ 306], 00:13:57.238 | 70.00th=[ 326], 80.00th=[ 363], 90.00th=[ 441], 95.00th=[ 490], 00:13:57.238 | 99.00th=[ 586], 99.50th=[ 635], 99.90th=[ 832], 99.95th=[41157], 00:13:57.238 | 99.99th=[42206] 00:13:57.238 bw ( KiB/s): min= 4968, max=12496, per=43.24%, avg=10635.20, stdev=3193.85, samples=5 00:13:57.238 iops : min= 1242, max= 3124, avg=2658.80, stdev=798.46, samples=5 00:13:57.238 lat (usec) : 250=24.90%, 500=71.03%, 750=3.89%, 1000=0.06% 00:13:57.238 lat (msec) : 2=0.02%, 50=0.07% 00:13:57.238 cpu : usr=1.31%, sys=4.14%, ctx=8071, majf=0, minf=1 00:13:57.238 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:57.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:57.238 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:57.238 issued rwts: total=8071,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:57.238 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:57.238 00:13:57.238 Run status group 0 (all jobs): 00:13:57.238 READ: bw=24.0MiB/s (25.2MB/s), 1475KiB/s-12.0MiB/s (1511kB/s-12.6MB/s), io=89.1MiB (93.4MB), run=2899-3710msec 00:13:57.238 00:13:57.238 Disk stats (read/write): 00:13:57.238 nvme0n1: ios=1272/0, merge=0/0, ticks=3285/0, in_queue=3285, util=95.65% 00:13:57.238 nvme0n2: ios=11381/0, merge=0/0, ticks=3281/0, in_queue=3281, util=95.01% 00:13:57.238 nvme0n3: ios=2079/0, merge=0/0, ticks=2978/0, in_queue=2978, util=96.34% 00:13:57.238 nvme0n4: ios=8006/0, merge=0/0, ticks=2696/0, in_queue=2696, util=96.71% 00:13:57.496 17:02:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:57.496 17:02:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:13:57.754 17:02:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:57.754 17:02:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:13:58.012 17:02:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:13:58.012 17:02:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:13:58.270 17:02:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:58.270 17:02:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:13:58.527 17:02:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:13:58.527 17:02:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 1109010 00:13:58.527 17:02:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:13:58.527 17:02:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:58.784 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.784 17:02:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:58.784 17:02:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:13:58.784 17:02:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:58.784 17:02:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:58.784 17:02:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:58.784 17:02:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:58.784 17:02:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:13:58.784 17:02:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:13:58.784 17:02:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:13:58.784 nvmf hotplug test: fio failed as expected 00:13:58.784 17:02:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:59.042 17:02:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:13:59.042 17:02:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:13:59.042 17:02:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:13:59.042 17:02:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:13:59.042 17:02:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:13:59.042 17:02:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:59.042 17:02:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:13:59.042 17:02:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:59.042 17:02:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:13:59.042 17:02:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:59.042 17:02:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:59.042 rmmod nvme_tcp 00:13:59.042 rmmod nvme_fabrics 00:13:59.042 rmmod nvme_keyring 00:13:59.042 17:02:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:59.042 17:02:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:13:59.042 17:02:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # 
return 0 00:13:59.042 17:02:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1107090 ']' 00:13:59.042 17:02:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1107090 00:13:59.042 17:02:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 1107090 ']' 00:13:59.042 17:02:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 1107090 00:13:59.042 17:02:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:13:59.042 17:02:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:59.042 17:02:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1107090 00:13:59.042 17:02:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:59.042 17:02:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:59.042 17:02:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1107090' 00:13:59.042 killing process with pid 1107090 00:13:59.042 17:02:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 1107090 00:13:59.042 17:02:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 1107090 00:13:59.301 17:02:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:59.301 17:02:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:59.301 17:02:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:59.301 17:02:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:59.301 17:02:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:59.301 17:02:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:59.301 17:02:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:59.301 17:02:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:01.206 17:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:01.206 00:14:01.206 real 0m23.915s 00:14:01.206 user 1m23.011s 00:14:01.206 sys 0m7.209s 00:14:01.206 17:03:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:01.206 17:03:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.206 ************************************ 00:14:01.206 END TEST nvmf_fio_target 00:14:01.207 ************************************ 00:14:01.465 17:03:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:01.465 17:03:00 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:01.465 17:03:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:01.465 17:03:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:01.465 17:03:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:01.465 ************************************ 00:14:01.465 START TEST nvmf_bdevio 00:14:01.465 ************************************ 00:14:01.465 17:03:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:01.465 * Looking for test storage... 
00:14:01.465 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:01.465 17:03:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:01.465 17:03:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:14:01.465 17:03:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:01.465 17:03:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:01.465 17:03:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:01.465 17:03:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:01.465 17:03:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:01.465 17:03:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:01.465 17:03:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:01.465 17:03:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:01.465 17:03:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:01.465 17:03:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:01.465 17:03:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:01.465 17:03:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:01.465 17:03:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:01.465 17:03:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:01.465 17:03:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:01.465 17:03:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:01.465 17:03:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:01.465 17:03:00 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:01.465 17:03:00 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:01.466 17:03:00 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:01.466 17:03:00 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.466 17:03:00 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.466 17:03:00 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.466 17:03:00 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:14:01.466 17:03:00 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.466 17:03:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:14:01.466 17:03:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:01.466 17:03:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:01.466 17:03:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:01.466 17:03:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:01.466 17:03:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:01.466 17:03:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:01.466 17:03:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:01.466 17:03:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:01.466 17:03:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:01.466 17:03:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:01.466 17:03:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:14:01.466 17:03:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:01.466 17:03:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:01.466 17:03:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:01.466 17:03:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:01.466 17:03:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:01.466 17:03:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:01.466 17:03:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:14:01.466 17:03:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:01.466 17:03:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:01.466 17:03:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:01.466 17:03:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:14:01.466 17:03:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:03.377 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:03.377 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:03.377 Found net devices under 0000:84:00.0: cvl_0_0 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:03.377 
Found net devices under 0000:84:00.1: cvl_0_1 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:03.377 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:03.378 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:03.378 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:03.378 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:03.378 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:03.635 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:03.635 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:03.635 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:03.635 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:03.635 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:03.635 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:03.635 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:03.635 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:03.635 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:14:03.635 00:14:03.635 --- 10.0.0.2 ping statistics --- 00:14:03.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:03.635 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:14:03.635 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:03.635 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:03.635 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:14:03.635 00:14:03.635 --- 10.0.0.1 ping statistics --- 00:14:03.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:03.635 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:14:03.635 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:03.635 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:14:03.635 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:03.635 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:03.635 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:03.635 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:03.635 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:03.635 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:03.635 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:03.635 17:03:03 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:03.635 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:03.635 17:03:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:03.635 17:03:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:03.635 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1111982 00:14:03.635 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:14:03.635 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1111982 00:14:03.635 17:03:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 1111982 ']' 00:14:03.635 17:03:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:03.635 17:03:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:03.635 17:03:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:03.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:03.635 17:03:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:03.635 17:03:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:03.635 [2024-07-12 17:03:03.225975] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:14:03.635 [2024-07-12 17:03:03.226072] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:03.635 EAL: No free 2048 kB hugepages reported on node 1 00:14:03.636 [2024-07-12 17:03:03.290195] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:03.893 [2024-07-12 17:03:03.400610] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:03.893 [2024-07-12 17:03:03.400657] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:03.893 [2024-07-12 17:03:03.400681] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:03.893 [2024-07-12 17:03:03.400692] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:03.893 [2024-07-12 17:03:03.400701] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:03.893 [2024-07-12 17:03:03.400777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:03.893 [2024-07-12 17:03:03.400823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:14:03.893 [2024-07-12 17:03:03.400893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:14:03.893 [2024-07-12 17:03:03.400896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:03.893 17:03:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:03.893 17:03:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:14:03.893 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:03.893 17:03:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:03.893 17:03:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:03.893 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:03.893 17:03:03 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:03.893 17:03:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.893 17:03:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:03.893 [2024-07-12 17:03:03.562694] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:03.893 17:03:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.893 17:03:03 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:03.893 17:03:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.893 17:03:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:04.151 Malloc0 00:14:04.151 17:03:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.151 17:03:03 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:04.151 17:03:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.151 17:03:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:04.151 17:03:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.151 17:03:03 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:04.151 17:03:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.151 17:03:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:04.152 17:03:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.152 17:03:03 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:04.152 17:03:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.152 17:03:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
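By this point bdevio.sh has issued the whole target-side setup over the RPC socket: a TCP transport, a 64 MiB Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that namespace, and a TCP listener on 10.0.0.2:4420 (the listen notice follows just below). A condensed sketch of the same sequence driven directly with scripts/rpc.py against the nvmf_tgt started above; the default /var/tmp/spdk.sock socket is assumed, while the subcommands and arguments are the ones visible in the trace:

rpc=scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                 # create the TCP transport (options as passed by bdevio.sh)
$rpc bdev_malloc_create 64 512 -b Malloc0                    # 64 MiB RAM-backed bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host, -s: serial number
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # expose Malloc0 as a namespace
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420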
00:14:04.152 [2024-07-12 17:03:03.617140] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:04.152 17:03:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.152 17:03:03 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:14:04.152 17:03:03 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:04.152 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:14:04.152 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:14:04.152 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:04.152 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:04.152 { 00:14:04.152 "params": { 00:14:04.152 "name": "Nvme$subsystem", 00:14:04.152 "trtype": "$TEST_TRANSPORT", 00:14:04.152 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:04.152 "adrfam": "ipv4", 00:14:04.152 "trsvcid": "$NVMF_PORT", 00:14:04.152 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:04.152 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:04.152 "hdgst": ${hdgst:-false}, 00:14:04.152 "ddgst": ${ddgst:-false} 00:14:04.152 }, 00:14:04.152 "method": "bdev_nvme_attach_controller" 00:14:04.152 } 00:14:04.152 EOF 00:14:04.152 )") 00:14:04.152 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:14:04.152 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:14:04.152 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:14:04.152 17:03:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:04.152 "params": { 00:14:04.152 "name": "Nvme1", 00:14:04.152 "trtype": "tcp", 00:14:04.152 "traddr": "10.0.0.2", 00:14:04.152 "adrfam": "ipv4", 00:14:04.152 "trsvcid": "4420", 00:14:04.152 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:04.152 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:04.152 "hdgst": false, 00:14:04.152 "ddgst": false 00:14:04.152 }, 00:14:04.152 "method": "bdev_nvme_attach_controller" 00:14:04.152 }' 00:14:04.152 [2024-07-12 17:03:03.663025] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
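The JSON printed above is what gen_nvmf_target_json feeds to bdevio via --json /dev/fd/62; it boils down to a single bdev_nvme_attach_controller call aimed at the listener just created. A sketch of the equivalent explicit RPC from a host-side SPDK app follows; the short flag spellings are assumed from standard rpc.py usage and do not appear in this trace:

scripts/rpc.py bdev_nvme_attach_controller \
    -b Nvme1 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
# yields the Nvme1n1 bdev that the CUnit suite below exercises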
00:14:04.152 [2024-07-12 17:03:03.663115] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1112008 ] 00:14:04.152 EAL: No free 2048 kB hugepages reported on node 1 00:14:04.152 [2024-07-12 17:03:03.725942] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:04.152 [2024-07-12 17:03:03.841376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:04.152 [2024-07-12 17:03:03.841426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:04.152 [2024-07-12 17:03:03.841429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.409 I/O targets: 00:14:04.409 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:04.409 00:14:04.409 00:14:04.409 CUnit - A unit testing framework for C - Version 2.1-3 00:14:04.409 http://cunit.sourceforge.net/ 00:14:04.409 00:14:04.409 00:14:04.409 Suite: bdevio tests on: Nvme1n1 00:14:04.409 Test: blockdev write read block ...passed 00:14:04.666 Test: blockdev write zeroes read block ...passed 00:14:04.666 Test: blockdev write zeroes read no split ...passed 00:14:04.666 Test: blockdev write zeroes read split ...passed 00:14:04.666 Test: blockdev write zeroes read split partial ...passed 00:14:04.666 Test: blockdev reset ...[2024-07-12 17:03:04.226780] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:04.666 [2024-07-12 17:03:04.226901] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2358bd0 (9): Bad file descriptor 00:14:04.666 [2024-07-12 17:03:04.329334] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:04.666 passed 00:14:04.666 Test: blockdev write read 8 blocks ...passed 00:14:04.666 Test: blockdev write read size > 128k ...passed 00:14:04.666 Test: blockdev write read invalid size ...passed 00:14:04.923 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:04.923 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:04.923 Test: blockdev write read max offset ...passed 00:14:04.923 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:04.923 Test: blockdev writev readv 8 blocks ...passed 00:14:04.923 Test: blockdev writev readv 30 x 1block ...passed 00:14:04.923 Test: blockdev writev readv block ...passed 00:14:04.923 Test: blockdev writev readv size > 128k ...passed 00:14:04.923 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:04.923 Test: blockdev comparev and writev ...[2024-07-12 17:03:04.543158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:04.923 [2024-07-12 17:03:04.543194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:04.923 [2024-07-12 17:03:04.543220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:04.923 [2024-07-12 17:03:04.543237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:04.923 [2024-07-12 17:03:04.543688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:04.923 [2024-07-12 17:03:04.543712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:04.923 [2024-07-12 17:03:04.543734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:04.923 [2024-07-12 17:03:04.543759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:04.923 [2024-07-12 17:03:04.544160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:04.923 [2024-07-12 17:03:04.544184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:04.923 [2024-07-12 17:03:04.544205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:04.923 [2024-07-12 17:03:04.544221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:04.923 [2024-07-12 17:03:04.544583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:04.923 [2024-07-12 17:03:04.544616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:04.923 [2024-07-12 17:03:04.544637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:04.924 [2024-07-12 17:03:04.544654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:04.924 passed 00:14:05.208 Test: blockdev nvme passthru rw ...passed 00:14:05.208 Test: blockdev nvme passthru vendor specific ...[2024-07-12 17:03:04.628029] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:05.208 [2024-07-12 17:03:04.628056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:05.208 [2024-07-12 17:03:04.628212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:05.208 [2024-07-12 17:03:04.628235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:05.208 [2024-07-12 17:03:04.628391] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:05.208 [2024-07-12 17:03:04.628414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:05.209 [2024-07-12 17:03:04.628568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:05.209 [2024-07-12 17:03:04.628591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:05.209 passed 00:14:05.209 Test: blockdev nvme admin passthru ...passed 00:14:05.209 Test: blockdev copy ...passed 00:14:05.209 00:14:05.209 Run Summary: Type Total Ran Passed Failed Inactive 00:14:05.209 suites 1 1 n/a 0 0 00:14:05.209 tests 23 23 23 0 0 00:14:05.209 asserts 152 152 152 0 n/a 00:14:05.209 00:14:05.209 Elapsed time = 1.317 seconds 00:14:05.465 17:03:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:05.465 17:03:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.465 17:03:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:05.465 17:03:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.465 17:03:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:05.465 17:03:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:14:05.465 17:03:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:05.465 17:03:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:14:05.465 17:03:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:05.465 17:03:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:14:05.465 17:03:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:05.465 17:03:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:05.465 rmmod nvme_tcp 00:14:05.465 rmmod nvme_fabrics 00:14:05.465 rmmod nvme_keyring 00:14:05.465 17:03:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:05.465 17:03:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:14:05.465 17:03:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:14:05.465 17:03:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1111982 ']' 00:14:05.465 17:03:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1111982 00:14:05.465 17:03:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
1111982 ']' 00:14:05.465 17:03:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 1111982 00:14:05.465 17:03:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:14:05.465 17:03:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:05.465 17:03:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1111982 00:14:05.465 17:03:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:14:05.465 17:03:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:14:05.465 17:03:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1111982' 00:14:05.465 killing process with pid 1111982 00:14:05.465 17:03:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 1111982 00:14:05.465 17:03:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 1111982 00:14:05.724 17:03:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:05.724 17:03:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:05.724 17:03:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:05.724 17:03:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:05.724 17:03:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:05.724 17:03:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:05.724 17:03:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:05.724 17:03:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.628 17:03:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:07.628 00:14:07.628 real 0m6.386s 00:14:07.628 user 0m10.292s 00:14:07.628 sys 0m2.111s 00:14:07.887 17:03:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:07.887 17:03:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:07.887 ************************************ 00:14:07.887 END TEST nvmf_bdevio 00:14:07.887 ************************************ 00:14:07.887 17:03:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:07.887 17:03:07 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:07.887 17:03:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:07.887 17:03:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:07.887 17:03:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:07.887 ************************************ 00:14:07.887 START TEST nvmf_auth_target 00:14:07.887 ************************************ 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:07.887 * Looking for test storage... 
00:14:07.887 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:14:07.887 17:03:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.785 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:09.785 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:14:09.785 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:09.785 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:09.785 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:09.785 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:09.785 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:09.785 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:14:09.785 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:09.785 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:14:09.785 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:14:09.785 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:14:09.785 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:14:09.785 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:14:09.785 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:14:09.785 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:09.785 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:09.785 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:09.785 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:09.785 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:09.785 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:09.785 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:09.785 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:09.785 17:03:09 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:09.785 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:09.785 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:09.785 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:09.785 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:09.785 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:09.785 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:09.785 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:09.785 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:09.785 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:09.785 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:09.786 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:09.786 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: 
cvl_0_0' 00:14:09.786 Found net devices under 0000:84:00.0: cvl_0_0 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:09.786 Found net devices under 0000:84:00.1: cvl_0_1 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:09.786 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:10.044 17:03:09 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:10.044 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:10.044 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:10.044 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:10.044 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:14:10.044 00:14:10.044 --- 10.0.0.2 ping statistics --- 00:14:10.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.044 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:14:10.044 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:10.044 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:10.044 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:14:10.044 00:14:10.044 --- 10.0.0.1 ping statistics --- 00:14:10.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.044 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:14:10.044 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:10.044 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:14:10.044 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:10.044 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:10.044 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:10.044 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:10.044 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:10.044 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:10.044 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:10.044 17:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:14:10.044 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:10.044 17:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:10.044 17:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.044 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1114597 00:14:10.044 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:14:10.044 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1114597 00:14:10.044 17:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1114597 ']' 00:14:10.044 17:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.044 17:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:10.044 17:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
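The nvmf_tcp_init plumbing traced above (here and in the bdevio run earlier) gives the target and the initiator separate network stacks on one host: the first e810 port moves into the cvl_0_0_ns_spdk namespace as 10.0.0.2, the second stays in the root namespace as 10.0.0.1, and TCP port 4420 is opened for NVMe/TCP. A condensed sketch of those steps, with the interface names from this run:

ip netns add cvl_0_0_ns_spdk                          # namespace that hosts nvmf_tgt
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # admit NVMe/TCP on port 4420
ping -c 1 10.0.0.2                                    # sanity-check both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1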
00:14:10.044 17:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:10.044 17:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1114745 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=615460f14915534e4a2a8ec1b9c8725b2a6398e3e9c532db 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.7yc 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 615460f14915534e4a2a8ec1b9c8725b2a6398e3e9c532db 0 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 615460f14915534e4a2a8ec1b9c8725b2a6398e3e9c532db 0 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=615460f14915534e4a2a8ec1b9c8725b2a6398e3e9c532db 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.7yc 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.7yc 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.7yc 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=33b4004e432bc2813beca58f293f7801e391775ccd4f8367cda1413d71c013bf 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.hcS 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 33b4004e432bc2813beca58f293f7801e391775ccd4f8367cda1413d71c013bf 3 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 33b4004e432bc2813beca58f293f7801e391775ccd4f8367cda1413d71c013bf 3 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=33b4004e432bc2813beca58f293f7801e391775ccd4f8367cda1413d71c013bf 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.hcS 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.hcS 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.hcS 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:14:10.302 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:10.560 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ea45439972e813d754b955def70ca276 00:14:10.561 17:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Oll 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ea45439972e813d754b955def70ca276 1 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ea45439972e813d754b955def70ca276 1 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=ea45439972e813d754b955def70ca276 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Oll 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Oll 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.Oll 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1070d3becbc27b7c65edd3681f3b5cfec7a3973b01f9e936 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Nxp 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1070d3becbc27b7c65edd3681f3b5cfec7a3973b01f9e936 2 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1070d3becbc27b7c65edd3681f3b5cfec7a3973b01f9e936 2 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1070d3becbc27b7c65edd3681f3b5cfec7a3973b01f9e936 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Nxp 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Nxp 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.Nxp 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c0af26cb29bc071165b1070972ec20535aaabcacb72dfc60 00:14:10.561 
17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.tkn 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c0af26cb29bc071165b1070972ec20535aaabcacb72dfc60 2 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c0af26cb29bc071165b1070972ec20535aaabcacb72dfc60 2 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c0af26cb29bc071165b1070972ec20535aaabcacb72dfc60 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.tkn 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.tkn 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.tkn 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1387ef0bc4dbd3827d66788f1b6eb417 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Qjq 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1387ef0bc4dbd3827d66788f1b6eb417 1 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1387ef0bc4dbd3827d66788f1b6eb417 1 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1387ef0bc4dbd3827d66788f1b6eb417 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Qjq 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Qjq 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.Qjq 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e9e7b3ab6143bb6fa2a2094c673ee3d2d8b60e50cd2771bef48de64dfcefaabd 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.LQ1 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e9e7b3ab6143bb6fa2a2094c673ee3d2d8b60e50cd2771bef48de64dfcefaabd 3 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e9e7b3ab6143bb6fa2a2094c673ee3d2d8b60e50cd2771bef48de64dfcefaabd 3 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e9e7b3ab6143bb6fa2a2094c673ee3d2d8b60e50cd2771bef48de64dfcefaabd 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:14:10.561 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:10.820 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.LQ1 00:14:10.820 17:03:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.LQ1 00:14:10.820 17:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.LQ1 00:14:10.820 17:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:14:10.820 17:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1114597 00:14:10.820 17:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1114597 ']' 00:14:10.820 17:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.820 17:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:10.820 17:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
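gen_dhchap_key above builds each secret in two steps: len/2 random bytes are read from /dev/urandom as a hex string, then format_dhchap_key wraps that string in the DH-HMAC-CHAP secret representation via an inline python snippet whose body is not shown in the trace. A sketch of what that wrapping plausibly looks like: base64 of the key material followed by its little-endian CRC-32, prefixed with DHHC-1 and a hash identifier (0-3 for null/sha256/sha384/sha512, matching the digests map above):

key=$(xxd -p -c0 -l 24 /dev/urandom)   # 48 hex characters of key material
digest=1                               # 1 = sha256
python3 - "$key" "$digest" <<'PYEOF'
import base64, sys, zlib
key = sys.argv[1].encode()                      # the hex string itself is the secret bytes
crc = zlib.crc32(key).to_bytes(4, "little")     # trailing CRC-32 required by the DHHC-1 format
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()))
PYEOF

The resulting /tmp/spdk.key-* files are then registered twice in the lines that follow: once in the target over the default RPC socket (rpc_cmd keyring_file_add_key keyN ...) and once in the host app over /var/tmp/host.sock (hostrpc keyring_file_add_key ...), so both ends of the DH-HMAC-CHAP exchange can reference the same named keys.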
00:14:10.820 17:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:10.820 17:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.110 17:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:11.110 17:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:11.110 17:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1114745 /var/tmp/host.sock 00:14:11.110 17:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1114745 ']' 00:14:11.110 17:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:14:11.110 17:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:11.110 17:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:11.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:11.110 17:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:11.110 17:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.395 17:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:11.395 17:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:11.395 17:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:14:11.395 17:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.395 17:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.395 17:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.395 17:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:11.395 17:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.7yc 00:14:11.395 17:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.395 17:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.395 17:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.395 17:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.7yc 00:14:11.396 17:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.7yc 00:14:11.653 17:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.hcS ]] 00:14:11.653 17:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.hcS 00:14:11.653 17:03:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.653 17:03:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.653 17:03:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.653 17:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.hcS 00:14:11.653 17:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.hcS 00:14:11.910 17:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:11.910 17:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Oll 00:14:11.910 17:03:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.910 17:03:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.910 17:03:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.910 17:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Oll 00:14:11.910 17:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Oll 00:14:12.167 17:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.Nxp ]] 00:14:12.167 17:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Nxp 00:14:12.167 17:03:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.167 17:03:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.167 17:03:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.167 17:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Nxp 00:14:12.167 17:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Nxp 00:14:12.167 17:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:12.167 17:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.tkn 00:14:12.167 17:03:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.167 17:03:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.425 17:03:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.425 17:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.tkn 00:14:12.425 17:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.tkn 00:14:12.425 17:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.Qjq ]] 00:14:12.425 17:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Qjq 00:14:12.425 17:03:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.425 17:03:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.682 17:03:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.682 17:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Qjq 00:14:12.682 17:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.Qjq 00:14:12.682 17:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:12.682 17:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.LQ1 00:14:12.682 17:03:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.682 17:03:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.939 17:03:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.939 17:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.LQ1 00:14:12.939 17:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.LQ1 00:14:12.939 17:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:14:12.939 17:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:14:12.939 17:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:12.939 17:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:12.939 17:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:12.939 17:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:13.196 17:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:14:13.196 17:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:13.196 17:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:13.196 17:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:13.196 17:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:13.196 17:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:13.196 17:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:13.196 17:03:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.196 17:03:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.196 17:03:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.196 17:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:13.196 17:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:13.761 00:14:13.761 17:03:13 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:13.761 17:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:13.761 17:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:13.761 17:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:13.761 17:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:13.761 17:03:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.761 17:03:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.761 17:03:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.761 17:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:13.761 { 00:14:13.761 "cntlid": 1, 00:14:13.761 "qid": 0, 00:14:13.761 "state": "enabled", 00:14:13.761 "thread": "nvmf_tgt_poll_group_000", 00:14:13.761 "listen_address": { 00:14:13.761 "trtype": "TCP", 00:14:13.761 "adrfam": "IPv4", 00:14:13.761 "traddr": "10.0.0.2", 00:14:13.761 "trsvcid": "4420" 00:14:13.761 }, 00:14:13.761 "peer_address": { 00:14:13.761 "trtype": "TCP", 00:14:13.761 "adrfam": "IPv4", 00:14:13.761 "traddr": "10.0.0.1", 00:14:13.761 "trsvcid": "42156" 00:14:13.761 }, 00:14:13.761 "auth": { 00:14:13.761 "state": "completed", 00:14:13.761 "digest": "sha256", 00:14:13.761 "dhgroup": "null" 00:14:13.761 } 00:14:13.761 } 00:14:13.761 ]' 00:14:13.761 17:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:14.018 17:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:14.018 17:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:14.018 17:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:14.018 17:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:14.018 17:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:14.018 17:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:14.018 17:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:14.275 17:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:NjE1NDYwZjE0OTE1NTM0ZTRhMmE4ZWMxYjljODcyNWIyYTYzOThlM2U5YzUzMmRik7hf0A==: --dhchap-ctrl-secret DHHC-1:03:MzNiNDAwNGU0MzJiYzI4MTNiZWNhNThmMjkzZjc4MDFlMzkxNzc1Y2NkNGY4MzY3Y2RhMTQxM2Q3MWMwMTNiZmmQmLY=: 00:14:15.206 17:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:15.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:15.206 17:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:15.206 17:03:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.206 17:03:14 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.206 17:03:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.206 17:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:15.206 17:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:15.206 17:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:15.463 17:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:14:15.463 17:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:15.463 17:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:15.463 17:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:15.463 17:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:15.463 17:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:15.463 17:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:15.463 17:03:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.463 17:03:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.463 17:03:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.463 17:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:15.463 17:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:15.721 00:14:15.721 17:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:15.721 17:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:15.721 17:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:15.979 17:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:15.979 17:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:15.979 17:03:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.979 17:03:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.979 17:03:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.979 17:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:15.979 { 00:14:15.979 "cntlid": 3, 00:14:15.979 "qid": 0, 00:14:15.979 
"state": "enabled", 00:14:15.980 "thread": "nvmf_tgt_poll_group_000", 00:14:15.980 "listen_address": { 00:14:15.980 "trtype": "TCP", 00:14:15.980 "adrfam": "IPv4", 00:14:15.980 "traddr": "10.0.0.2", 00:14:15.980 "trsvcid": "4420" 00:14:15.980 }, 00:14:15.980 "peer_address": { 00:14:15.980 "trtype": "TCP", 00:14:15.980 "adrfam": "IPv4", 00:14:15.980 "traddr": "10.0.0.1", 00:14:15.980 "trsvcid": "42174" 00:14:15.980 }, 00:14:15.980 "auth": { 00:14:15.980 "state": "completed", 00:14:15.980 "digest": "sha256", 00:14:15.980 "dhgroup": "null" 00:14:15.980 } 00:14:15.980 } 00:14:15.980 ]' 00:14:15.980 17:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:15.980 17:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:15.980 17:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:15.980 17:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:15.980 17:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:15.980 17:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:15.980 17:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:15.980 17:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:16.237 17:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:ZWE0NTQzOTk3MmU4MTNkNzU0Yjk1NWRlZjcwY2EyNzbhkDpr: --dhchap-ctrl-secret DHHC-1:02:MTA3MGQzYmVjYmMyN2I3YzY1ZWRkMzY4MWYzYjVjZmVjN2EzOTczYjAxZjllOTM2SxmZVQ==: 00:14:17.168 17:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:17.169 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:17.169 17:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:17.169 17:03:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.426 17:03:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.426 17:03:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.426 17:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:17.426 17:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:17.426 17:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:17.683 17:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:14:17.683 17:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:17.683 17:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:17.683 17:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:17.683 17:03:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:17.683 17:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:17.683 17:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:17.683 17:03:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.683 17:03:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.683 17:03:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.683 17:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:17.683 17:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:17.940 00:14:17.940 17:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:17.940 17:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:17.940 17:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:18.197 17:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:18.197 17:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:18.197 17:03:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.197 17:03:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.197 17:03:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.197 17:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:18.197 { 00:14:18.197 "cntlid": 5, 00:14:18.197 "qid": 0, 00:14:18.197 "state": "enabled", 00:14:18.197 "thread": "nvmf_tgt_poll_group_000", 00:14:18.197 "listen_address": { 00:14:18.197 "trtype": "TCP", 00:14:18.197 "adrfam": "IPv4", 00:14:18.197 "traddr": "10.0.0.2", 00:14:18.197 "trsvcid": "4420" 00:14:18.197 }, 00:14:18.198 "peer_address": { 00:14:18.198 "trtype": "TCP", 00:14:18.198 "adrfam": "IPv4", 00:14:18.198 "traddr": "10.0.0.1", 00:14:18.198 "trsvcid": "42196" 00:14:18.198 }, 00:14:18.198 "auth": { 00:14:18.198 "state": "completed", 00:14:18.198 "digest": "sha256", 00:14:18.198 "dhgroup": "null" 00:14:18.198 } 00:14:18.198 } 00:14:18.198 ]' 00:14:18.198 17:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:18.198 17:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:18.198 17:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:18.198 17:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:18.198 17:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:14:18.455 17:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:18.455 17:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:18.455 17:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:18.712 17:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YzBhZjI2Y2IyOWJjMDcxMTY1YjEwNzA5NzJlYzIwNTM1YWFhYmNhY2I3MmRmYzYwPYraQA==: --dhchap-ctrl-secret DHHC-1:01:MTM4N2VmMGJjNGRiZDM4MjdkNjY3ODhmMWI2ZWI0MTfgnYOW: 00:14:19.644 17:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:19.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:19.644 17:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:19.644 17:03:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.644 17:03:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.644 17:03:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.644 17:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:19.644 17:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:19.644 17:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:19.644 17:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:14:19.644 17:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:19.644 17:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:19.644 17:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:19.644 17:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:19.644 17:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:19.644 17:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:14:19.644 17:03:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.644 17:03:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.644 17:03:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.644 17:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:19.644 17:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:19.902 00:14:19.902 17:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:19.902 17:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:19.902 17:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:20.159 17:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:20.159 17:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:20.159 17:03:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.159 17:03:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.159 17:03:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.159 17:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:20.159 { 00:14:20.159 "cntlid": 7, 00:14:20.159 "qid": 0, 00:14:20.159 "state": "enabled", 00:14:20.159 "thread": "nvmf_tgt_poll_group_000", 00:14:20.159 "listen_address": { 00:14:20.159 "trtype": "TCP", 00:14:20.159 "adrfam": "IPv4", 00:14:20.159 "traddr": "10.0.0.2", 00:14:20.159 "trsvcid": "4420" 00:14:20.159 }, 00:14:20.159 "peer_address": { 00:14:20.159 "trtype": "TCP", 00:14:20.159 "adrfam": "IPv4", 00:14:20.159 "traddr": "10.0.0.1", 00:14:20.159 "trsvcid": "42212" 00:14:20.159 }, 00:14:20.159 "auth": { 00:14:20.159 "state": "completed", 00:14:20.159 "digest": "sha256", 00:14:20.159 "dhgroup": "null" 00:14:20.159 } 00:14:20.159 } 00:14:20.159 ]' 00:14:20.159 17:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:20.416 17:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:20.416 17:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:20.416 17:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:20.416 17:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:20.416 17:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:20.416 17:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:20.416 17:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:20.673 17:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTllN2IzYWI2MTQzYmI2ZmEyYTIwOTRjNjczZWUzZDJkOGI2MGU1MGNkMjc3MWJlZjQ4ZGU2NGRmY2VmYWFiZBnUjIE=: 00:14:21.605 17:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:21.605 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:21.605 17:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:21.605 17:03:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.605 17:03:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.605 17:03:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.605 17:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:21.605 17:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:21.605 17:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:21.605 17:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:21.862 17:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:14:21.862 17:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:21.862 17:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:21.862 17:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:21.862 17:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:21.862 17:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:21.862 17:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:21.862 17:03:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.862 17:03:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.862 17:03:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.862 17:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:21.862 17:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:22.119 00:14:22.119 17:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:22.119 17:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:22.119 17:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:22.375 17:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:22.375 17:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:22.375 17:03:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:14:22.375 17:03:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.375 17:03:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.375 17:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:22.375 { 00:14:22.375 "cntlid": 9, 00:14:22.375 "qid": 0, 00:14:22.375 "state": "enabled", 00:14:22.375 "thread": "nvmf_tgt_poll_group_000", 00:14:22.375 "listen_address": { 00:14:22.375 "trtype": "TCP", 00:14:22.375 "adrfam": "IPv4", 00:14:22.375 "traddr": "10.0.0.2", 00:14:22.375 "trsvcid": "4420" 00:14:22.375 }, 00:14:22.375 "peer_address": { 00:14:22.375 "trtype": "TCP", 00:14:22.375 "adrfam": "IPv4", 00:14:22.375 "traddr": "10.0.0.1", 00:14:22.375 "trsvcid": "54400" 00:14:22.375 }, 00:14:22.375 "auth": { 00:14:22.375 "state": "completed", 00:14:22.375 "digest": "sha256", 00:14:22.375 "dhgroup": "ffdhe2048" 00:14:22.375 } 00:14:22.375 } 00:14:22.375 ]' 00:14:22.375 17:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:22.375 17:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:22.375 17:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:22.375 17:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:22.376 17:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:22.376 17:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:22.376 17:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:22.376 17:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:22.633 17:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:NjE1NDYwZjE0OTE1NTM0ZTRhMmE4ZWMxYjljODcyNWIyYTYzOThlM2U5YzUzMmRik7hf0A==: --dhchap-ctrl-secret DHHC-1:03:MzNiNDAwNGU0MzJiYzI4MTNiZWNhNThmMjkzZjc4MDFlMzkxNzc1Y2NkNGY4MzY3Y2RhMTQxM2Q3MWMwMTNiZmmQmLY=: 00:14:23.565 17:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:23.565 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:23.565 17:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:23.565 17:03:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.565 17:03:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.565 17:03:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.565 17:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:23.565 17:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:23.565 17:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:14:23.823 17:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:14:23.823 17:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:23.823 17:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:23.823 17:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:23.823 17:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:23.823 17:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:23.823 17:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:23.823 17:03:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.823 17:03:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.823 17:03:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.823 17:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:23.823 17:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:24.389 00:14:24.389 17:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:24.389 17:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:24.389 17:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:24.389 17:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:24.389 17:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:24.389 17:03:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.389 17:03:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.647 17:03:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.647 17:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:24.647 { 00:14:24.647 "cntlid": 11, 00:14:24.647 "qid": 0, 00:14:24.647 "state": "enabled", 00:14:24.647 "thread": "nvmf_tgt_poll_group_000", 00:14:24.647 "listen_address": { 00:14:24.647 "trtype": "TCP", 00:14:24.647 "adrfam": "IPv4", 00:14:24.647 "traddr": "10.0.0.2", 00:14:24.647 "trsvcid": "4420" 00:14:24.647 }, 00:14:24.647 "peer_address": { 00:14:24.647 "trtype": "TCP", 00:14:24.647 "adrfam": "IPv4", 00:14:24.647 "traddr": "10.0.0.1", 00:14:24.647 "trsvcid": "54426" 00:14:24.647 }, 00:14:24.647 "auth": { 00:14:24.647 "state": "completed", 00:14:24.647 "digest": "sha256", 00:14:24.647 "dhgroup": "ffdhe2048" 00:14:24.647 } 00:14:24.647 } 00:14:24.647 ]' 00:14:24.647 
17:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:24.647 17:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:24.647 17:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:24.647 17:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:24.647 17:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:24.647 17:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:24.647 17:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:24.647 17:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:24.905 17:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:ZWE0NTQzOTk3MmU4MTNkNzU0Yjk1NWRlZjcwY2EyNzbhkDpr: --dhchap-ctrl-secret DHHC-1:02:MTA3MGQzYmVjYmMyN2I3YzY1ZWRkMzY4MWYzYjVjZmVjN2EzOTczYjAxZjllOTM2SxmZVQ==: 00:14:25.838 17:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:25.838 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:25.838 17:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:25.838 17:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.838 17:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.838 17:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.838 17:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:25.838 17:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:25.838 17:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:26.096 17:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:14:26.096 17:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:26.096 17:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:26.096 17:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:26.096 17:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:26.096 17:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:26.096 17:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:26.096 17:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.096 17:03:25 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:26.096 17:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.096 17:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:26.096 17:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:26.353 00:14:26.353 17:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:26.353 17:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:26.353 17:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.611 17:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:26.611 17:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:26.611 17:03:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.611 17:03:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.611 17:03:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.611 17:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:26.611 { 00:14:26.611 "cntlid": 13, 00:14:26.611 "qid": 0, 00:14:26.611 "state": "enabled", 00:14:26.611 "thread": "nvmf_tgt_poll_group_000", 00:14:26.611 "listen_address": { 00:14:26.611 "trtype": "TCP", 00:14:26.611 "adrfam": "IPv4", 00:14:26.611 "traddr": "10.0.0.2", 00:14:26.611 "trsvcid": "4420" 00:14:26.611 }, 00:14:26.611 "peer_address": { 00:14:26.611 "trtype": "TCP", 00:14:26.611 "adrfam": "IPv4", 00:14:26.611 "traddr": "10.0.0.1", 00:14:26.611 "trsvcid": "54448" 00:14:26.611 }, 00:14:26.611 "auth": { 00:14:26.611 "state": "completed", 00:14:26.611 "digest": "sha256", 00:14:26.611 "dhgroup": "ffdhe2048" 00:14:26.611 } 00:14:26.611 } 00:14:26.611 ]' 00:14:26.611 17:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:26.611 17:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:26.611 17:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:26.611 17:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:26.611 17:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:26.611 17:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:26.611 17:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:26.611 17:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:26.869 17:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YzBhZjI2Y2IyOWJjMDcxMTY1YjEwNzA5NzJlYzIwNTM1YWFhYmNhY2I3MmRmYzYwPYraQA==: --dhchap-ctrl-secret DHHC-1:01:MTM4N2VmMGJjNGRiZDM4MjdkNjY3ODhmMWI2ZWI0MTfgnYOW: 00:14:27.800 17:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:27.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:27.800 17:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:27.800 17:03:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.800 17:03:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.800 17:03:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.800 17:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:27.800 17:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:27.800 17:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:28.058 17:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:14:28.058 17:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:28.058 17:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:28.058 17:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:28.058 17:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:28.058 17:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:28.058 17:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:14:28.058 17:03:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.058 17:03:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.058 17:03:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.058 17:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:28.058 17:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:28.316 00:14:28.573 17:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:28.573 17:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:28.573 17:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:28.831 17:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:28.831 17:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:28.831 17:03:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.831 17:03:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.831 17:03:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.831 17:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:28.831 { 00:14:28.831 "cntlid": 15, 00:14:28.831 "qid": 0, 00:14:28.831 "state": "enabled", 00:14:28.831 "thread": "nvmf_tgt_poll_group_000", 00:14:28.831 "listen_address": { 00:14:28.831 "trtype": "TCP", 00:14:28.831 "adrfam": "IPv4", 00:14:28.831 "traddr": "10.0.0.2", 00:14:28.831 "trsvcid": "4420" 00:14:28.831 }, 00:14:28.831 "peer_address": { 00:14:28.831 "trtype": "TCP", 00:14:28.831 "adrfam": "IPv4", 00:14:28.831 "traddr": "10.0.0.1", 00:14:28.831 "trsvcid": "54488" 00:14:28.831 }, 00:14:28.831 "auth": { 00:14:28.831 "state": "completed", 00:14:28.831 "digest": "sha256", 00:14:28.831 "dhgroup": "ffdhe2048" 00:14:28.831 } 00:14:28.831 } 00:14:28.831 ]' 00:14:28.831 17:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:28.831 17:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:28.831 17:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:28.831 17:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:28.831 17:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:28.831 17:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:28.831 17:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:28.831 17:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.089 17:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTllN2IzYWI2MTQzYmI2ZmEyYTIwOTRjNjczZWUzZDJkOGI2MGU1MGNkMjc3MWJlZjQ4ZGU2NGRmY2VmYWFiZBnUjIE=: 00:14:30.021 17:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:30.021 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:30.021 17:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:30.021 17:03:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.021 17:03:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.021 17:03:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.021 17:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:30.021 17:03:29 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:30.021 17:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:30.021 17:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:30.279 17:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:14:30.279 17:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:30.279 17:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:30.279 17:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:30.279 17:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:30.279 17:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:30.279 17:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:30.279 17:03:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.279 17:03:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.279 17:03:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.279 17:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:30.279 17:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:30.537 00:14:30.537 17:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:30.537 17:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:30.537 17:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.795 17:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:30.795 17:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:30.795 17:03:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.795 17:03:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.795 17:03:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.795 17:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:30.795 { 00:14:30.795 "cntlid": 17, 00:14:30.795 "qid": 0, 00:14:30.795 "state": "enabled", 00:14:30.795 "thread": "nvmf_tgt_poll_group_000", 00:14:30.795 "listen_address": { 00:14:30.795 "trtype": "TCP", 00:14:30.795 "adrfam": "IPv4", 00:14:30.795 "traddr": 
"10.0.0.2", 00:14:30.795 "trsvcid": "4420" 00:14:30.795 }, 00:14:30.795 "peer_address": { 00:14:30.795 "trtype": "TCP", 00:14:30.795 "adrfam": "IPv4", 00:14:30.795 "traddr": "10.0.0.1", 00:14:30.795 "trsvcid": "54504" 00:14:30.795 }, 00:14:30.795 "auth": { 00:14:30.795 "state": "completed", 00:14:30.795 "digest": "sha256", 00:14:30.795 "dhgroup": "ffdhe3072" 00:14:30.795 } 00:14:30.795 } 00:14:30.795 ]' 00:14:30.795 17:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:30.795 17:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:30.795 17:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:30.795 17:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:30.795 17:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:31.053 17:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:31.053 17:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:31.053 17:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:31.053 17:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:NjE1NDYwZjE0OTE1NTM0ZTRhMmE4ZWMxYjljODcyNWIyYTYzOThlM2U5YzUzMmRik7hf0A==: --dhchap-ctrl-secret DHHC-1:03:MzNiNDAwNGU0MzJiYzI4MTNiZWNhNThmMjkzZjc4MDFlMzkxNzc1Y2NkNGY4MzY3Y2RhMTQxM2Q3MWMwMTNiZmmQmLY=: 00:14:31.988 17:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:31.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:31.988 17:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:31.988 17:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.988 17:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.988 17:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.988 17:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:31.988 17:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:31.988 17:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:32.318 17:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:14:32.318 17:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:32.318 17:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:32.318 17:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:32.318 17:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:32.318 17:03:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:32.318 17:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.318 17:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.318 17:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.318 17:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.318 17:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.318 17:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.605 00:14:32.605 17:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:32.605 17:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:32.605 17:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:32.862 17:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:32.862 17:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:32.862 17:03:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.862 17:03:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.862 17:03:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.862 17:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:32.862 { 00:14:32.862 "cntlid": 19, 00:14:32.862 "qid": 0, 00:14:32.862 "state": "enabled", 00:14:32.862 "thread": "nvmf_tgt_poll_group_000", 00:14:32.862 "listen_address": { 00:14:32.862 "trtype": "TCP", 00:14:32.862 "adrfam": "IPv4", 00:14:32.863 "traddr": "10.0.0.2", 00:14:32.863 "trsvcid": "4420" 00:14:32.863 }, 00:14:32.863 "peer_address": { 00:14:32.863 "trtype": "TCP", 00:14:32.863 "adrfam": "IPv4", 00:14:32.863 "traddr": "10.0.0.1", 00:14:32.863 "trsvcid": "56152" 00:14:32.863 }, 00:14:32.863 "auth": { 00:14:32.863 "state": "completed", 00:14:32.863 "digest": "sha256", 00:14:32.863 "dhgroup": "ffdhe3072" 00:14:32.863 } 00:14:32.863 } 00:14:32.863 ]' 00:14:32.863 17:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:32.863 17:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:32.863 17:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:33.135 17:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:33.135 17:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:33.135 17:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:33.135 17:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:33.135 17:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:33.401 17:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:ZWE0NTQzOTk3MmU4MTNkNzU0Yjk1NWRlZjcwY2EyNzbhkDpr: --dhchap-ctrl-secret DHHC-1:02:MTA3MGQzYmVjYmMyN2I3YzY1ZWRkMzY4MWYzYjVjZmVjN2EzOTczYjAxZjllOTM2SxmZVQ==: 00:14:34.333 17:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:34.333 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:34.333 17:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:34.333 17:03:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.333 17:03:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.333 17:03:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.333 17:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:34.333 17:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:34.333 17:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:34.333 17:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:14:34.333 17:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:34.333 17:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:34.333 17:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:34.333 17:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:34.333 17:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:34.333 17:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:34.333 17:03:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.333 17:03:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.333 17:03:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.334 17:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:34.334 17:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:34.897 00:14:34.897 17:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:34.897 17:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:34.897 17:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.897 17:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.897 17:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:34.897 17:03:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.897 17:03:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.154 17:03:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.154 17:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:35.154 { 00:14:35.154 "cntlid": 21, 00:14:35.154 "qid": 0, 00:14:35.154 "state": "enabled", 00:14:35.154 "thread": "nvmf_tgt_poll_group_000", 00:14:35.154 "listen_address": { 00:14:35.154 "trtype": "TCP", 00:14:35.154 "adrfam": "IPv4", 00:14:35.154 "traddr": "10.0.0.2", 00:14:35.154 "trsvcid": "4420" 00:14:35.154 }, 00:14:35.154 "peer_address": { 00:14:35.154 "trtype": "TCP", 00:14:35.154 "adrfam": "IPv4", 00:14:35.154 "traddr": "10.0.0.1", 00:14:35.154 "trsvcid": "56186" 00:14:35.154 }, 00:14:35.154 "auth": { 00:14:35.154 "state": "completed", 00:14:35.154 "digest": "sha256", 00:14:35.154 "dhgroup": "ffdhe3072" 00:14:35.154 } 00:14:35.154 } 00:14:35.154 ]' 00:14:35.154 17:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:35.154 17:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:35.154 17:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:35.154 17:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:35.154 17:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:35.154 17:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:35.154 17:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:35.154 17:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:35.411 17:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YzBhZjI2Y2IyOWJjMDcxMTY1YjEwNzA5NzJlYzIwNTM1YWFhYmNhY2I3MmRmYzYwPYraQA==: --dhchap-ctrl-secret DHHC-1:01:MTM4N2VmMGJjNGRiZDM4MjdkNjY3ODhmMWI2ZWI0MTfgnYOW: 00:14:36.347 17:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:36.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
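(Reading aid) The trace above completes one connect_authenticate pass (sha256 / ffdhe3072 / key2): the target authorizes the host with a key pair, the SPDK host attaches a controller and the qpair's negotiated digest, DH group, and auth state are checked, the controller is detached, the kernel initiator repeats the handshake with nvme-cli, and the host entry is removed before the next key is tried. The condensed sketch below restates one such pass as plain shell, using only the RPCs and nvme-cli calls visible in this trace. SUBNQN, HOSTNQN, HOSTID, KEY/CKEY and the DHHC-1 secrets stand in for the values shown above, and it assumes the target app answers on SPDK's default RPC socket (the trace's rpc_cmd) while -s /var/tmp/host.sock addresses the second SPDK app acting as the host.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SUBNQN=nqn.2024-03.io.spdk:cnode0

# Host side: restrict the SPDK initiator to the digest/DH group under test.
$RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

# Target side: authorize the host NQN with the matching key pair (the key objects
# named key0..key3 / ckey0..ckey3 were loaded earlier in the test).
$RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key "$KEY" --dhchap-ctrlr-key "$CKEY"

# Host side: attach a controller so the fabric connect performs DH-HMAC-CHAP.
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key "$KEY" --dhchap-ctrlr-key "$CKEY"

# Target side: the qpair must report the negotiated parameters and a completed auth state.
$RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.digest'   # expect sha256
$RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.dhgroup'  # expect ffdhe3072
$RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'    # expect completed

# Tear down the SPDK initiator, then repeat the handshake with the kernel initiator.
$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" --hostid "$HOSTID" \
    --dhchap-secret "$DHCHAP_SECRET" --dhchap-ctrl-secret "$DHCHAP_CTRL_SECRET"
nvme disconnect -n "$SUBNQN"

# Drop the host authorization before the next key/DH-group combination.
$RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

The remainder of the trace repeats this cycle for the other key indexes and for the larger DH groups (ffdhe4096, ffdhe6144), which is why the same RPC sequence recurs below with only the --dhchap-dhgroups value and key names changing.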
00:14:36.347 17:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:36.347 17:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.347 17:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.347 17:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.347 17:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:36.347 17:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:36.347 17:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:36.604 17:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:14:36.604 17:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:36.604 17:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:36.605 17:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:36.605 17:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:36.605 17:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:36.605 17:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:14:36.605 17:03:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.605 17:03:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.605 17:03:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.605 17:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:36.605 17:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:36.862 00:14:36.862 17:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:36.862 17:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:36.862 17:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:37.120 17:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:37.120 17:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:37.120 17:03:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.120 17:03:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:14:37.120 17:03:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.120 17:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:37.120 { 00:14:37.120 "cntlid": 23, 00:14:37.120 "qid": 0, 00:14:37.120 "state": "enabled", 00:14:37.120 "thread": "nvmf_tgt_poll_group_000", 00:14:37.120 "listen_address": { 00:14:37.120 "trtype": "TCP", 00:14:37.120 "adrfam": "IPv4", 00:14:37.120 "traddr": "10.0.0.2", 00:14:37.120 "trsvcid": "4420" 00:14:37.120 }, 00:14:37.120 "peer_address": { 00:14:37.120 "trtype": "TCP", 00:14:37.120 "adrfam": "IPv4", 00:14:37.120 "traddr": "10.0.0.1", 00:14:37.120 "trsvcid": "56216" 00:14:37.120 }, 00:14:37.120 "auth": { 00:14:37.120 "state": "completed", 00:14:37.120 "digest": "sha256", 00:14:37.120 "dhgroup": "ffdhe3072" 00:14:37.120 } 00:14:37.120 } 00:14:37.120 ]' 00:14:37.120 17:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:37.120 17:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:37.120 17:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:37.120 17:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:37.377 17:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:37.377 17:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:37.377 17:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.377 17:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.634 17:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTllN2IzYWI2MTQzYmI2ZmEyYTIwOTRjNjczZWUzZDJkOGI2MGU1MGNkMjc3MWJlZjQ4ZGU2NGRmY2VmYWFiZBnUjIE=: 00:14:38.567 17:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:38.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:38.567 17:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:38.567 17:03:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.567 17:03:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.567 17:03:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.567 17:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:38.567 17:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:38.567 17:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:38.567 17:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:38.824 17:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:14:38.824 17:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:38.824 17:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:38.824 17:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:38.824 17:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:38.824 17:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:38.824 17:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:38.824 17:03:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.824 17:03:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.824 17:03:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.824 17:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:38.824 17:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:39.083 00:14:39.083 17:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:39.083 17:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:39.083 17:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.341 17:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.341 17:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:39.341 17:03:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.341 17:03:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.341 17:03:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.341 17:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:39.341 { 00:14:39.341 "cntlid": 25, 00:14:39.341 "qid": 0, 00:14:39.341 "state": "enabled", 00:14:39.341 "thread": "nvmf_tgt_poll_group_000", 00:14:39.341 "listen_address": { 00:14:39.341 "trtype": "TCP", 00:14:39.341 "adrfam": "IPv4", 00:14:39.341 "traddr": "10.0.0.2", 00:14:39.341 "trsvcid": "4420" 00:14:39.341 }, 00:14:39.341 "peer_address": { 00:14:39.341 "trtype": "TCP", 00:14:39.341 "adrfam": "IPv4", 00:14:39.341 "traddr": "10.0.0.1", 00:14:39.341 "trsvcid": "56244" 00:14:39.341 }, 00:14:39.341 "auth": { 00:14:39.341 "state": "completed", 00:14:39.341 "digest": "sha256", 00:14:39.341 "dhgroup": "ffdhe4096" 00:14:39.341 } 00:14:39.341 } 00:14:39.341 ]' 00:14:39.341 17:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:39.341 17:03:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:39.341 17:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:39.341 17:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:39.341 17:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:39.599 17:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:39.599 17:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:39.599 17:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:39.856 17:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:NjE1NDYwZjE0OTE1NTM0ZTRhMmE4ZWMxYjljODcyNWIyYTYzOThlM2U5YzUzMmRik7hf0A==: --dhchap-ctrl-secret DHHC-1:03:MzNiNDAwNGU0MzJiYzI4MTNiZWNhNThmMjkzZjc4MDFlMzkxNzc1Y2NkNGY4MzY3Y2RhMTQxM2Q3MWMwMTNiZmmQmLY=: 00:14:40.789 17:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:40.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:40.789 17:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:40.789 17:03:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.789 17:03:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.789 17:03:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.789 17:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:40.789 17:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:40.789 17:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:41.047 17:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:14:41.047 17:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:41.047 17:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:41.047 17:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:41.047 17:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:41.047 17:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.047 17:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:41.047 17:03:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.047 17:03:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.047 17:03:40 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.047 17:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:41.047 17:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:41.304 00:14:41.304 17:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:41.304 17:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:41.304 17:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:41.563 17:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:41.563 17:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:41.563 17:03:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.563 17:03:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.563 17:03:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.563 17:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:41.563 { 00:14:41.563 "cntlid": 27, 00:14:41.563 "qid": 0, 00:14:41.563 "state": "enabled", 00:14:41.563 "thread": "nvmf_tgt_poll_group_000", 00:14:41.563 "listen_address": { 00:14:41.563 "trtype": "TCP", 00:14:41.563 "adrfam": "IPv4", 00:14:41.563 "traddr": "10.0.0.2", 00:14:41.563 "trsvcid": "4420" 00:14:41.563 }, 00:14:41.563 "peer_address": { 00:14:41.563 "trtype": "TCP", 00:14:41.563 "adrfam": "IPv4", 00:14:41.563 "traddr": "10.0.0.1", 00:14:41.563 "trsvcid": "55068" 00:14:41.563 }, 00:14:41.563 "auth": { 00:14:41.563 "state": "completed", 00:14:41.563 "digest": "sha256", 00:14:41.563 "dhgroup": "ffdhe4096" 00:14:41.563 } 00:14:41.563 } 00:14:41.563 ]' 00:14:41.563 17:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:41.563 17:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:41.563 17:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:41.563 17:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:41.563 17:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:41.820 17:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:41.820 17:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:41.821 17:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.078 17:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:ZWE0NTQzOTk3MmU4MTNkNzU0Yjk1NWRlZjcwY2EyNzbhkDpr: --dhchap-ctrl-secret DHHC-1:02:MTA3MGQzYmVjYmMyN2I3YzY1ZWRkMzY4MWYzYjVjZmVjN2EzOTczYjAxZjllOTM2SxmZVQ==: 00:14:43.012 17:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.012 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.012 17:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:43.012 17:03:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.012 17:03:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.012 17:03:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.012 17:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:43.012 17:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:43.012 17:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:43.270 17:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:14:43.270 17:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:43.270 17:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:43.270 17:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:43.270 17:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:43.270 17:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.270 17:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:43.270 17:03:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.270 17:03:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.270 17:03:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.270 17:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:43.270 17:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:43.526 00:14:43.526 17:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:43.526 17:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:43.526 17:03:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:43.783 17:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:43.783 17:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:43.783 17:03:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.783 17:03:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.783 17:03:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.783 17:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:43.783 { 00:14:43.783 "cntlid": 29, 00:14:43.783 "qid": 0, 00:14:43.783 "state": "enabled", 00:14:43.783 "thread": "nvmf_tgt_poll_group_000", 00:14:43.783 "listen_address": { 00:14:43.783 "trtype": "TCP", 00:14:43.783 "adrfam": "IPv4", 00:14:43.783 "traddr": "10.0.0.2", 00:14:43.783 "trsvcid": "4420" 00:14:43.783 }, 00:14:43.783 "peer_address": { 00:14:43.783 "trtype": "TCP", 00:14:43.783 "adrfam": "IPv4", 00:14:43.783 "traddr": "10.0.0.1", 00:14:43.783 "trsvcid": "55100" 00:14:43.783 }, 00:14:43.783 "auth": { 00:14:43.783 "state": "completed", 00:14:43.783 "digest": "sha256", 00:14:43.783 "dhgroup": "ffdhe4096" 00:14:43.783 } 00:14:43.783 } 00:14:43.783 ]' 00:14:43.783 17:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:43.783 17:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:43.783 17:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:44.040 17:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:44.040 17:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:44.040 17:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:44.040 17:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:44.040 17:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:44.298 17:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YzBhZjI2Y2IyOWJjMDcxMTY1YjEwNzA5NzJlYzIwNTM1YWFhYmNhY2I3MmRmYzYwPYraQA==: --dhchap-ctrl-secret DHHC-1:01:MTM4N2VmMGJjNGRiZDM4MjdkNjY3ODhmMWI2ZWI0MTfgnYOW: 00:14:45.229 17:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:45.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:45.229 17:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:45.229 17:03:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.229 17:03:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.229 17:03:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.229 17:03:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:45.229 17:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:45.229 17:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:45.485 17:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:14:45.485 17:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:45.485 17:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:45.485 17:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:45.485 17:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:45.485 17:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:45.485 17:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:14:45.485 17:03:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.485 17:03:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.485 17:03:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.485 17:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:45.485 17:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:45.740 00:14:45.997 17:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:45.997 17:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:45.997 17:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:45.997 17:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:45.997 17:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:45.997 17:03:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.997 17:03:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.254 17:03:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.254 17:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:46.254 { 00:14:46.254 "cntlid": 31, 00:14:46.254 "qid": 0, 00:14:46.254 "state": "enabled", 00:14:46.254 "thread": "nvmf_tgt_poll_group_000", 00:14:46.254 "listen_address": { 00:14:46.254 "trtype": "TCP", 00:14:46.254 "adrfam": "IPv4", 00:14:46.254 "traddr": "10.0.0.2", 00:14:46.254 "trsvcid": "4420" 00:14:46.254 }, 
00:14:46.254 "peer_address": { 00:14:46.254 "trtype": "TCP", 00:14:46.254 "adrfam": "IPv4", 00:14:46.254 "traddr": "10.0.0.1", 00:14:46.254 "trsvcid": "55116" 00:14:46.254 }, 00:14:46.254 "auth": { 00:14:46.254 "state": "completed", 00:14:46.254 "digest": "sha256", 00:14:46.254 "dhgroup": "ffdhe4096" 00:14:46.254 } 00:14:46.254 } 00:14:46.254 ]' 00:14:46.254 17:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:46.254 17:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:46.254 17:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:46.254 17:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:46.254 17:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:46.254 17:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:46.254 17:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:46.254 17:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:46.511 17:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTllN2IzYWI2MTQzYmI2ZmEyYTIwOTRjNjczZWUzZDJkOGI2MGU1MGNkMjc3MWJlZjQ4ZGU2NGRmY2VmYWFiZBnUjIE=: 00:14:47.441 17:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:47.441 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:47.441 17:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:47.441 17:03:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.441 17:03:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.441 17:03:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.441 17:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:47.441 17:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:47.441 17:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:47.441 17:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:47.698 17:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:14:47.698 17:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:47.698 17:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:47.698 17:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:47.698 17:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:47.698 17:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:14:47.698 17:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:47.698 17:03:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.698 17:03:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.698 17:03:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.698 17:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:47.698 17:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:48.263 00:14:48.263 17:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:48.263 17:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:48.263 17:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:48.520 17:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:48.520 17:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:48.520 17:03:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.520 17:03:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.520 17:03:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.520 17:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:48.520 { 00:14:48.520 "cntlid": 33, 00:14:48.520 "qid": 0, 00:14:48.520 "state": "enabled", 00:14:48.520 "thread": "nvmf_tgt_poll_group_000", 00:14:48.520 "listen_address": { 00:14:48.520 "trtype": "TCP", 00:14:48.520 "adrfam": "IPv4", 00:14:48.520 "traddr": "10.0.0.2", 00:14:48.520 "trsvcid": "4420" 00:14:48.520 }, 00:14:48.520 "peer_address": { 00:14:48.520 "trtype": "TCP", 00:14:48.520 "adrfam": "IPv4", 00:14:48.520 "traddr": "10.0.0.1", 00:14:48.520 "trsvcid": "55140" 00:14:48.520 }, 00:14:48.520 "auth": { 00:14:48.520 "state": "completed", 00:14:48.520 "digest": "sha256", 00:14:48.520 "dhgroup": "ffdhe6144" 00:14:48.520 } 00:14:48.520 } 00:14:48.520 ]' 00:14:48.520 17:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:48.520 17:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:48.520 17:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:48.520 17:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:48.520 17:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:48.520 17:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:48.520 17:03:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:48.520 17:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:48.777 17:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:NjE1NDYwZjE0OTE1NTM0ZTRhMmE4ZWMxYjljODcyNWIyYTYzOThlM2U5YzUzMmRik7hf0A==: --dhchap-ctrl-secret DHHC-1:03:MzNiNDAwNGU0MzJiYzI4MTNiZWNhNThmMjkzZjc4MDFlMzkxNzc1Y2NkNGY4MzY3Y2RhMTQxM2Q3MWMwMTNiZmmQmLY=: 00:14:49.710 17:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:49.710 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:49.710 17:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:49.710 17:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.710 17:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.710 17:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.710 17:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:49.710 17:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:49.710 17:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:49.967 17:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:14:49.967 17:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:49.967 17:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:49.967 17:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:49.967 17:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:49.967 17:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:49.967 17:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:49.967 17:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.967 17:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.967 17:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.967 17:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:49.967 17:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:50.532 00:14:50.532 17:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:50.532 17:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.532 17:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:50.532 17:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.532 17:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.532 17:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.532 17:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.790 17:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.790 17:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:50.790 { 00:14:50.790 "cntlid": 35, 00:14:50.790 "qid": 0, 00:14:50.790 "state": "enabled", 00:14:50.790 "thread": "nvmf_tgt_poll_group_000", 00:14:50.790 "listen_address": { 00:14:50.790 "trtype": "TCP", 00:14:50.790 "adrfam": "IPv4", 00:14:50.790 "traddr": "10.0.0.2", 00:14:50.790 "trsvcid": "4420" 00:14:50.790 }, 00:14:50.790 "peer_address": { 00:14:50.790 "trtype": "TCP", 00:14:50.790 "adrfam": "IPv4", 00:14:50.790 "traddr": "10.0.0.1", 00:14:50.790 "trsvcid": "55166" 00:14:50.790 }, 00:14:50.790 "auth": { 00:14:50.790 "state": "completed", 00:14:50.790 "digest": "sha256", 00:14:50.790 "dhgroup": "ffdhe6144" 00:14:50.790 } 00:14:50.790 } 00:14:50.790 ]' 00:14:50.790 17:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:50.790 17:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:50.790 17:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:50.790 17:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:50.790 17:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:50.790 17:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:50.790 17:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:50.790 17:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:51.048 17:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:ZWE0NTQzOTk3MmU4MTNkNzU0Yjk1NWRlZjcwY2EyNzbhkDpr: --dhchap-ctrl-secret DHHC-1:02:MTA3MGQzYmVjYmMyN2I3YzY1ZWRkMzY4MWYzYjVjZmVjN2EzOTczYjAxZjllOTM2SxmZVQ==: 00:14:51.981 17:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:51.981 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:51.981 17:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:51.981 17:03:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.981 17:03:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.981 17:03:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.981 17:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:51.981 17:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:51.981 17:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:52.239 17:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:14:52.239 17:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:52.239 17:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:52.239 17:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:52.239 17:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:52.239 17:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.239 17:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:52.239 17:03:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.239 17:03:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.239 17:03:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.239 17:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:52.239 17:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:52.805 00:14:52.805 17:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:52.805 17:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:52.805 17:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:53.063 17:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:53.063 17:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:53.063 17:03:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.063 17:03:52 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:53.063 17:03:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.063 17:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:53.063 { 00:14:53.063 "cntlid": 37, 00:14:53.063 "qid": 0, 00:14:53.063 "state": "enabled", 00:14:53.063 "thread": "nvmf_tgt_poll_group_000", 00:14:53.063 "listen_address": { 00:14:53.063 "trtype": "TCP", 00:14:53.063 "adrfam": "IPv4", 00:14:53.063 "traddr": "10.0.0.2", 00:14:53.063 "trsvcid": "4420" 00:14:53.063 }, 00:14:53.063 "peer_address": { 00:14:53.063 "trtype": "TCP", 00:14:53.063 "adrfam": "IPv4", 00:14:53.063 "traddr": "10.0.0.1", 00:14:53.063 "trsvcid": "44230" 00:14:53.063 }, 00:14:53.063 "auth": { 00:14:53.063 "state": "completed", 00:14:53.063 "digest": "sha256", 00:14:53.063 "dhgroup": "ffdhe6144" 00:14:53.063 } 00:14:53.063 } 00:14:53.063 ]' 00:14:53.063 17:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:53.063 17:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:53.063 17:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:53.063 17:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:53.063 17:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:53.063 17:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:53.063 17:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:53.063 17:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.425 17:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YzBhZjI2Y2IyOWJjMDcxMTY1YjEwNzA5NzJlYzIwNTM1YWFhYmNhY2I3MmRmYzYwPYraQA==: --dhchap-ctrl-secret DHHC-1:01:MTM4N2VmMGJjNGRiZDM4MjdkNjY3ODhmMWI2ZWI0MTfgnYOW: 00:14:54.375 17:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:54.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:54.375 17:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:54.375 17:03:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.375 17:03:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.375 17:03:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.375 17:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:54.375 17:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:54.375 17:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:54.632 17:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:14:54.632 17:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:54.632 17:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:54.632 17:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:54.632 17:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:54.632 17:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:54.632 17:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:14:54.632 17:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.632 17:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.632 17:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.632 17:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:54.632 17:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:55.197 00:14:55.197 17:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:55.197 17:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:55.197 17:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:55.197 17:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:55.197 17:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:55.197 17:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.197 17:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.197 17:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.197 17:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:55.197 { 00:14:55.197 "cntlid": 39, 00:14:55.197 "qid": 0, 00:14:55.197 "state": "enabled", 00:14:55.197 "thread": "nvmf_tgt_poll_group_000", 00:14:55.197 "listen_address": { 00:14:55.197 "trtype": "TCP", 00:14:55.197 "adrfam": "IPv4", 00:14:55.197 "traddr": "10.0.0.2", 00:14:55.197 "trsvcid": "4420" 00:14:55.197 }, 00:14:55.197 "peer_address": { 00:14:55.197 "trtype": "TCP", 00:14:55.197 "adrfam": "IPv4", 00:14:55.197 "traddr": "10.0.0.1", 00:14:55.197 "trsvcid": "44252" 00:14:55.197 }, 00:14:55.197 "auth": { 00:14:55.197 "state": "completed", 00:14:55.197 "digest": "sha256", 00:14:55.197 "dhgroup": "ffdhe6144" 00:14:55.197 } 00:14:55.197 } 00:14:55.197 ]' 00:14:55.197 17:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:55.455 17:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:55.455 17:03:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:55.455 17:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:55.455 17:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:55.455 17:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:55.455 17:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:55.455 17:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:55.712 17:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTllN2IzYWI2MTQzYmI2ZmEyYTIwOTRjNjczZWUzZDJkOGI2MGU1MGNkMjc3MWJlZjQ4ZGU2NGRmY2VmYWFiZBnUjIE=: 00:14:56.645 17:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:56.645 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:56.645 17:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:56.645 17:03:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.645 17:03:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.645 17:03:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.645 17:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:56.645 17:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:56.645 17:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:56.645 17:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:56.902 17:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:14:56.902 17:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:56.902 17:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:56.902 17:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:56.902 17:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:56.902 17:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:56.902 17:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:56.902 17:03:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.902 17:03:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.902 17:03:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.902 17:03:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:56.902 17:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:57.833 00:14:57.833 17:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:57.833 17:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:57.833 17:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:57.833 17:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:57.833 17:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:57.833 17:03:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.833 17:03:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.833 17:03:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.833 17:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:57.833 { 00:14:57.833 "cntlid": 41, 00:14:57.833 "qid": 0, 00:14:57.833 "state": "enabled", 00:14:57.833 "thread": "nvmf_tgt_poll_group_000", 00:14:57.833 "listen_address": { 00:14:57.833 "trtype": "TCP", 00:14:57.833 "adrfam": "IPv4", 00:14:57.833 "traddr": "10.0.0.2", 00:14:57.833 "trsvcid": "4420" 00:14:57.833 }, 00:14:57.833 "peer_address": { 00:14:57.833 "trtype": "TCP", 00:14:57.833 "adrfam": "IPv4", 00:14:57.833 "traddr": "10.0.0.1", 00:14:57.833 "trsvcid": "44292" 00:14:57.833 }, 00:14:57.833 "auth": { 00:14:57.833 "state": "completed", 00:14:57.833 "digest": "sha256", 00:14:57.833 "dhgroup": "ffdhe8192" 00:14:57.833 } 00:14:57.833 } 00:14:57.833 ]' 00:14:57.833 17:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:57.833 17:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:57.833 17:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:58.090 17:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:58.090 17:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:58.090 17:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.090 17:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.090 17:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.348 17:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret 
DHHC-1:00:NjE1NDYwZjE0OTE1NTM0ZTRhMmE4ZWMxYjljODcyNWIyYTYzOThlM2U5YzUzMmRik7hf0A==: --dhchap-ctrl-secret DHHC-1:03:MzNiNDAwNGU0MzJiYzI4MTNiZWNhNThmMjkzZjc4MDFlMzkxNzc1Y2NkNGY4MzY3Y2RhMTQxM2Q3MWMwMTNiZmmQmLY=: 00:14:59.280 17:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.280 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.280 17:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:59.280 17:03:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.280 17:03:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.280 17:03:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.280 17:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:59.280 17:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:59.280 17:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:59.538 17:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:14:59.538 17:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:59.538 17:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:59.538 17:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:59.538 17:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:59.538 17:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:59.538 17:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:59.538 17:03:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.538 17:03:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.538 17:03:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.538 17:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:59.538 17:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:00.472 00:15:00.472 17:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:00.472 17:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:00.472 17:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.472 17:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:00.472 17:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:00.472 17:04:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.472 17:04:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.472 17:04:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.472 17:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:00.472 { 00:15:00.472 "cntlid": 43, 00:15:00.472 "qid": 0, 00:15:00.472 "state": "enabled", 00:15:00.472 "thread": "nvmf_tgt_poll_group_000", 00:15:00.472 "listen_address": { 00:15:00.472 "trtype": "TCP", 00:15:00.472 "adrfam": "IPv4", 00:15:00.472 "traddr": "10.0.0.2", 00:15:00.472 "trsvcid": "4420" 00:15:00.472 }, 00:15:00.472 "peer_address": { 00:15:00.472 "trtype": "TCP", 00:15:00.472 "adrfam": "IPv4", 00:15:00.472 "traddr": "10.0.0.1", 00:15:00.472 "trsvcid": "44326" 00:15:00.472 }, 00:15:00.472 "auth": { 00:15:00.472 "state": "completed", 00:15:00.472 "digest": "sha256", 00:15:00.472 "dhgroup": "ffdhe8192" 00:15:00.472 } 00:15:00.472 } 00:15:00.472 ]' 00:15:00.472 17:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:00.730 17:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:00.730 17:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:00.730 17:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:00.730 17:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:00.730 17:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:00.730 17:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.730 17:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.988 17:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:ZWE0NTQzOTk3MmU4MTNkNzU0Yjk1NWRlZjcwY2EyNzbhkDpr: --dhchap-ctrl-secret DHHC-1:02:MTA3MGQzYmVjYmMyN2I3YzY1ZWRkMzY4MWYzYjVjZmVjN2EzOTczYjAxZjllOTM2SxmZVQ==: 00:15:01.928 17:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:01.928 17:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:01.928 17:04:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.928 17:04:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.928 17:04:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.928 17:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:15:01.928 17:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:01.928 17:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:02.184 17:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:15:02.185 17:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:02.185 17:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:02.185 17:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:02.185 17:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:02.185 17:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:02.185 17:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:02.185 17:04:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.185 17:04:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.185 17:04:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.185 17:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:02.185 17:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:03.115 00:15:03.115 17:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:03.115 17:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:03.115 17:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.115 17:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.115 17:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:03.115 17:04:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.115 17:04:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.115 17:04:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.115 17:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:03.115 { 00:15:03.115 "cntlid": 45, 00:15:03.115 "qid": 0, 00:15:03.115 "state": "enabled", 00:15:03.115 "thread": "nvmf_tgt_poll_group_000", 00:15:03.115 "listen_address": { 00:15:03.115 "trtype": "TCP", 00:15:03.115 "adrfam": "IPv4", 00:15:03.115 "traddr": "10.0.0.2", 00:15:03.115 "trsvcid": "4420" 
00:15:03.115 }, 00:15:03.115 "peer_address": { 00:15:03.115 "trtype": "TCP", 00:15:03.115 "adrfam": "IPv4", 00:15:03.115 "traddr": "10.0.0.1", 00:15:03.115 "trsvcid": "37812" 00:15:03.115 }, 00:15:03.115 "auth": { 00:15:03.115 "state": "completed", 00:15:03.115 "digest": "sha256", 00:15:03.115 "dhgroup": "ffdhe8192" 00:15:03.115 } 00:15:03.115 } 00:15:03.115 ]' 00:15:03.115 17:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:03.372 17:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:03.372 17:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:03.372 17:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:03.372 17:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:03.372 17:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:03.372 17:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:03.372 17:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.653 17:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YzBhZjI2Y2IyOWJjMDcxMTY1YjEwNzA5NzJlYzIwNTM1YWFhYmNhY2I3MmRmYzYwPYraQA==: --dhchap-ctrl-secret DHHC-1:01:MTM4N2VmMGJjNGRiZDM4MjdkNjY3ODhmMWI2ZWI0MTfgnYOW: 00:15:04.582 17:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:04.582 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:04.582 17:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:04.582 17:04:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.582 17:04:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.582 17:04:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.582 17:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:04.582 17:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:04.582 17:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:04.839 17:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:15:04.839 17:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:04.839 17:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:04.839 17:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:04.839 17:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:04.839 17:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.839 17:04:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:15:04.839 17:04:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.839 17:04:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.839 17:04:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.839 17:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:04.839 17:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:05.771 00:15:05.771 17:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:05.771 17:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:05.771 17:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.771 17:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.771 17:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:05.771 17:04:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.771 17:04:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.771 17:04:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.771 17:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:05.771 { 00:15:05.771 "cntlid": 47, 00:15:05.771 "qid": 0, 00:15:05.771 "state": "enabled", 00:15:05.771 "thread": "nvmf_tgt_poll_group_000", 00:15:05.771 "listen_address": { 00:15:05.771 "trtype": "TCP", 00:15:05.771 "adrfam": "IPv4", 00:15:05.771 "traddr": "10.0.0.2", 00:15:05.771 "trsvcid": "4420" 00:15:05.771 }, 00:15:05.771 "peer_address": { 00:15:05.771 "trtype": "TCP", 00:15:05.771 "adrfam": "IPv4", 00:15:05.771 "traddr": "10.0.0.1", 00:15:05.771 "trsvcid": "37836" 00:15:05.771 }, 00:15:05.771 "auth": { 00:15:05.771 "state": "completed", 00:15:05.771 "digest": "sha256", 00:15:05.771 "dhgroup": "ffdhe8192" 00:15:05.771 } 00:15:05.771 } 00:15:05.771 ]' 00:15:05.771 17:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:05.771 17:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:05.771 17:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:06.029 17:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:06.029 17:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:06.029 17:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.029 17:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.029 
17:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:06.286 17:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTllN2IzYWI2MTQzYmI2ZmEyYTIwOTRjNjczZWUzZDJkOGI2MGU1MGNkMjc3MWJlZjQ4ZGU2NGRmY2VmYWFiZBnUjIE=: 00:15:07.220 17:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:07.220 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:07.220 17:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:07.220 17:04:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.220 17:04:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.220 17:04:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.220 17:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:07.220 17:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:07.220 17:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:07.220 17:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:07.220 17:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:07.478 17:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:15:07.478 17:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:07.478 17:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:07.478 17:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:07.478 17:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:07.478 17:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:07.478 17:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:07.478 17:04:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.478 17:04:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.478 17:04:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.478 17:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:07.478 17:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:07.736 00:15:07.736 17:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:07.736 17:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:07.736 17:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.994 17:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.994 17:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.994 17:04:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.994 17:04:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.994 17:04:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.994 17:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:07.994 { 00:15:07.994 "cntlid": 49, 00:15:07.994 "qid": 0, 00:15:07.994 "state": "enabled", 00:15:07.994 "thread": "nvmf_tgt_poll_group_000", 00:15:07.994 "listen_address": { 00:15:07.994 "trtype": "TCP", 00:15:07.994 "adrfam": "IPv4", 00:15:07.994 "traddr": "10.0.0.2", 00:15:07.994 "trsvcid": "4420" 00:15:07.994 }, 00:15:07.994 "peer_address": { 00:15:07.994 "trtype": "TCP", 00:15:07.994 "adrfam": "IPv4", 00:15:07.994 "traddr": "10.0.0.1", 00:15:07.994 "trsvcid": "37856" 00:15:07.994 }, 00:15:07.994 "auth": { 00:15:07.994 "state": "completed", 00:15:07.994 "digest": "sha384", 00:15:07.994 "dhgroup": "null" 00:15:07.994 } 00:15:07.994 } 00:15:07.994 ]' 00:15:07.994 17:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:07.994 17:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:07.994 17:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:07.994 17:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:07.994 17:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:07.994 17:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:07.994 17:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:07.994 17:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.559 17:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:NjE1NDYwZjE0OTE1NTM0ZTRhMmE4ZWMxYjljODcyNWIyYTYzOThlM2U5YzUzMmRik7hf0A==: --dhchap-ctrl-secret DHHC-1:03:MzNiNDAwNGU0MzJiYzI4MTNiZWNhNThmMjkzZjc4MDFlMzkxNzc1Y2NkNGY4MzY3Y2RhMTQxM2Q3MWMwMTNiZmmQmLY=: 00:15:09.492 17:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:09.492 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:09.492 17:04:08 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:09.492 17:04:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.492 17:04:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.492 17:04:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.492 17:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:09.492 17:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:09.492 17:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:09.492 17:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:15:09.492 17:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:09.492 17:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:09.492 17:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:09.492 17:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:09.492 17:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:09.492 17:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.492 17:04:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.492 17:04:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.492 17:04:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.492 17:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.492 17:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.057 00:15:10.057 17:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:10.057 17:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:10.057 17:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.057 17:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.057 17:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.057 17:04:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.057 17:04:09 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:10.057 17:04:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.057 17:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:10.057 { 00:15:10.057 "cntlid": 51, 00:15:10.057 "qid": 0, 00:15:10.057 "state": "enabled", 00:15:10.057 "thread": "nvmf_tgt_poll_group_000", 00:15:10.057 "listen_address": { 00:15:10.058 "trtype": "TCP", 00:15:10.058 "adrfam": "IPv4", 00:15:10.058 "traddr": "10.0.0.2", 00:15:10.058 "trsvcid": "4420" 00:15:10.058 }, 00:15:10.058 "peer_address": { 00:15:10.058 "trtype": "TCP", 00:15:10.058 "adrfam": "IPv4", 00:15:10.058 "traddr": "10.0.0.1", 00:15:10.058 "trsvcid": "37894" 00:15:10.058 }, 00:15:10.058 "auth": { 00:15:10.058 "state": "completed", 00:15:10.058 "digest": "sha384", 00:15:10.058 "dhgroup": "null" 00:15:10.058 } 00:15:10.058 } 00:15:10.058 ]' 00:15:10.058 17:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:10.315 17:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:10.315 17:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:10.315 17:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:10.315 17:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:10.315 17:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:10.315 17:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:10.315 17:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.572 17:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:ZWE0NTQzOTk3MmU4MTNkNzU0Yjk1NWRlZjcwY2EyNzbhkDpr: --dhchap-ctrl-secret DHHC-1:02:MTA3MGQzYmVjYmMyN2I3YzY1ZWRkMzY4MWYzYjVjZmVjN2EzOTczYjAxZjllOTM2SxmZVQ==: 00:15:11.503 17:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:11.503 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:11.503 17:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:11.503 17:04:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.503 17:04:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.503 17:04:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.503 17:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:11.503 17:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:11.503 17:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:11.760 17:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:15:11.760 17:04:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:11.760 17:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:11.760 17:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:11.760 17:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:11.760 17:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:11.760 17:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.760 17:04:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.760 17:04:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.760 17:04:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.760 17:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.760 17:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.017 00:15:12.017 17:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:12.017 17:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:12.017 17:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.274 17:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.274 17:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.274 17:04:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.274 17:04:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.274 17:04:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.274 17:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:12.274 { 00:15:12.274 "cntlid": 53, 00:15:12.274 "qid": 0, 00:15:12.274 "state": "enabled", 00:15:12.274 "thread": "nvmf_tgt_poll_group_000", 00:15:12.274 "listen_address": { 00:15:12.274 "trtype": "TCP", 00:15:12.274 "adrfam": "IPv4", 00:15:12.274 "traddr": "10.0.0.2", 00:15:12.274 "trsvcid": "4420" 00:15:12.274 }, 00:15:12.274 "peer_address": { 00:15:12.274 "trtype": "TCP", 00:15:12.274 "adrfam": "IPv4", 00:15:12.274 "traddr": "10.0.0.1", 00:15:12.274 "trsvcid": "50530" 00:15:12.274 }, 00:15:12.274 "auth": { 00:15:12.274 "state": "completed", 00:15:12.274 "digest": "sha384", 00:15:12.274 "dhgroup": "null" 00:15:12.274 } 00:15:12.274 } 00:15:12.274 ]' 00:15:12.275 17:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:12.275 17:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:15:12.275 17:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:12.275 17:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:12.275 17:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:12.275 17:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.275 17:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.275 17:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.532 17:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YzBhZjI2Y2IyOWJjMDcxMTY1YjEwNzA5NzJlYzIwNTM1YWFhYmNhY2I3MmRmYzYwPYraQA==: --dhchap-ctrl-secret DHHC-1:01:MTM4N2VmMGJjNGRiZDM4MjdkNjY3ODhmMWI2ZWI0MTfgnYOW: 00:15:13.463 17:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.463 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.463 17:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:13.463 17:04:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.463 17:04:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.463 17:04:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.463 17:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:13.463 17:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:13.463 17:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:13.719 17:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:15:13.719 17:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:13.719 17:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:13.719 17:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:13.719 17:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:13.719 17:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.719 17:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:15:13.719 17:04:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.719 17:04:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.719 17:04:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.719 17:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:13.719 17:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:13.976 00:15:13.976 17:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:13.976 17:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:13.976 17:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.233 17:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.234 17:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.234 17:04:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.234 17:04:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.234 17:04:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.234 17:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:14.234 { 00:15:14.234 "cntlid": 55, 00:15:14.234 "qid": 0, 00:15:14.234 "state": "enabled", 00:15:14.234 "thread": "nvmf_tgt_poll_group_000", 00:15:14.234 "listen_address": { 00:15:14.234 "trtype": "TCP", 00:15:14.234 "adrfam": "IPv4", 00:15:14.234 "traddr": "10.0.0.2", 00:15:14.234 "trsvcid": "4420" 00:15:14.234 }, 00:15:14.234 "peer_address": { 00:15:14.234 "trtype": "TCP", 00:15:14.234 "adrfam": "IPv4", 00:15:14.234 "traddr": "10.0.0.1", 00:15:14.234 "trsvcid": "50544" 00:15:14.234 }, 00:15:14.234 "auth": { 00:15:14.234 "state": "completed", 00:15:14.234 "digest": "sha384", 00:15:14.234 "dhgroup": "null" 00:15:14.234 } 00:15:14.234 } 00:15:14.234 ]' 00:15:14.234 17:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:14.491 17:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:14.491 17:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:14.491 17:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:14.491 17:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:14.491 17:04:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.491 17:04:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.491 17:04:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.749 17:04:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTllN2IzYWI2MTQzYmI2ZmEyYTIwOTRjNjczZWUzZDJkOGI2MGU1MGNkMjc3MWJlZjQ4ZGU2NGRmY2VmYWFiZBnUjIE=: 00:15:15.738 17:04:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.738 17:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:15.738 17:04:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.738 17:04:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.738 17:04:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.738 17:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:15.738 17:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:15.738 17:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:15.738 17:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:15.996 17:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:15:15.996 17:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:15.996 17:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:15.996 17:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:15.996 17:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:15.996 17:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.996 17:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:15.996 17:04:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.996 17:04:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.996 17:04:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.996 17:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:15.996 17:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.255 00:15:16.255 17:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:16.255 17:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:16.255 17:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.512 17:04:16 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.512 17:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.512 17:04:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.512 17:04:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.512 17:04:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.512 17:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:16.512 { 00:15:16.512 "cntlid": 57, 00:15:16.512 "qid": 0, 00:15:16.512 "state": "enabled", 00:15:16.512 "thread": "nvmf_tgt_poll_group_000", 00:15:16.512 "listen_address": { 00:15:16.512 "trtype": "TCP", 00:15:16.512 "adrfam": "IPv4", 00:15:16.512 "traddr": "10.0.0.2", 00:15:16.512 "trsvcid": "4420" 00:15:16.512 }, 00:15:16.512 "peer_address": { 00:15:16.512 "trtype": "TCP", 00:15:16.512 "adrfam": "IPv4", 00:15:16.512 "traddr": "10.0.0.1", 00:15:16.512 "trsvcid": "50554" 00:15:16.512 }, 00:15:16.512 "auth": { 00:15:16.512 "state": "completed", 00:15:16.512 "digest": "sha384", 00:15:16.512 "dhgroup": "ffdhe2048" 00:15:16.512 } 00:15:16.512 } 00:15:16.512 ]' 00:15:16.512 17:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:16.512 17:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:16.512 17:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:16.512 17:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:16.512 17:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:16.512 17:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.512 17:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.512 17:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.768 17:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:NjE1NDYwZjE0OTE1NTM0ZTRhMmE4ZWMxYjljODcyNWIyYTYzOThlM2U5YzUzMmRik7hf0A==: --dhchap-ctrl-secret DHHC-1:03:MzNiNDAwNGU0MzJiYzI4MTNiZWNhNThmMjkzZjc4MDFlMzkxNzc1Y2NkNGY4MzY3Y2RhMTQxM2Q3MWMwMTNiZmmQmLY=: 00:15:17.697 17:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.697 17:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:17.697 17:04:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.697 17:04:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.697 17:04:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.697 17:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:17.697 17:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:17.697 17:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:17.954 17:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:15:17.954 17:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:17.954 17:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:17.954 17:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:17.954 17:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:17.954 17:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.954 17:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:17.954 17:04:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.954 17:04:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.954 17:04:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.954 17:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:17.954 17:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.211 00:15:18.211 17:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:18.211 17:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:18.211 17:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.468 17:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.468 17:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.468 17:04:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.468 17:04:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.468 17:04:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.468 17:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:18.468 { 00:15:18.468 "cntlid": 59, 00:15:18.468 "qid": 0, 00:15:18.468 "state": "enabled", 00:15:18.468 "thread": "nvmf_tgt_poll_group_000", 00:15:18.468 "listen_address": { 00:15:18.468 "trtype": "TCP", 00:15:18.468 "adrfam": "IPv4", 00:15:18.468 "traddr": "10.0.0.2", 00:15:18.468 "trsvcid": "4420" 00:15:18.468 }, 00:15:18.468 "peer_address": { 00:15:18.468 "trtype": "TCP", 00:15:18.468 "adrfam": "IPv4", 00:15:18.468 
"traddr": "10.0.0.1", 00:15:18.468 "trsvcid": "50582" 00:15:18.468 }, 00:15:18.468 "auth": { 00:15:18.468 "state": "completed", 00:15:18.468 "digest": "sha384", 00:15:18.468 "dhgroup": "ffdhe2048" 00:15:18.468 } 00:15:18.468 } 00:15:18.468 ]' 00:15:18.468 17:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:18.724 17:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:18.724 17:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:18.724 17:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:18.724 17:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:18.724 17:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.724 17:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.724 17:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.981 17:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:ZWE0NTQzOTk3MmU4MTNkNzU0Yjk1NWRlZjcwY2EyNzbhkDpr: --dhchap-ctrl-secret DHHC-1:02:MTA3MGQzYmVjYmMyN2I3YzY1ZWRkMzY4MWYzYjVjZmVjN2EzOTczYjAxZjllOTM2SxmZVQ==: 00:15:19.912 17:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.912 17:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:19.912 17:04:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.912 17:04:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.912 17:04:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.912 17:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:19.912 17:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:19.912 17:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:19.912 17:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:15:19.912 17:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:19.912 17:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:19.912 17:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:19.912 17:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:19.912 17:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.912 17:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:19.912 17:04:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.912 17:04:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.169 17:04:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.169 17:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:20.169 17:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:20.426 00:15:20.426 17:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:20.426 17:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:20.426 17:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.683 17:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.683 17:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.683 17:04:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.683 17:04:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.683 17:04:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.683 17:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:20.683 { 00:15:20.683 "cntlid": 61, 00:15:20.683 "qid": 0, 00:15:20.683 "state": "enabled", 00:15:20.683 "thread": "nvmf_tgt_poll_group_000", 00:15:20.683 "listen_address": { 00:15:20.683 "trtype": "TCP", 00:15:20.683 "adrfam": "IPv4", 00:15:20.683 "traddr": "10.0.0.2", 00:15:20.683 "trsvcid": "4420" 00:15:20.683 }, 00:15:20.683 "peer_address": { 00:15:20.683 "trtype": "TCP", 00:15:20.683 "adrfam": "IPv4", 00:15:20.683 "traddr": "10.0.0.1", 00:15:20.683 "trsvcid": "50610" 00:15:20.683 }, 00:15:20.683 "auth": { 00:15:20.683 "state": "completed", 00:15:20.683 "digest": "sha384", 00:15:20.683 "dhgroup": "ffdhe2048" 00:15:20.683 } 00:15:20.683 } 00:15:20.683 ]' 00:15:20.683 17:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:20.683 17:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:20.683 17:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:20.683 17:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:20.683 17:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:20.683 17:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.683 17:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.683 17:04:20 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:20.940 17:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YzBhZjI2Y2IyOWJjMDcxMTY1YjEwNzA5NzJlYzIwNTM1YWFhYmNhY2I3MmRmYzYwPYraQA==: --dhchap-ctrl-secret DHHC-1:01:MTM4N2VmMGJjNGRiZDM4MjdkNjY3ODhmMWI2ZWI0MTfgnYOW: 00:15:21.869 17:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.869 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.869 17:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:21.869 17:04:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.869 17:04:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.869 17:04:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.869 17:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:21.869 17:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:21.869 17:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:22.126 17:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:15:22.126 17:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:22.126 17:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:22.126 17:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:22.126 17:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:22.126 17:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.126 17:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:15:22.126 17:04:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.126 17:04:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.126 17:04:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.126 17:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:22.126 17:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:22.384 00:15:22.384 17:04:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:22.384 17:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:22.384 17:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.641 17:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.641 17:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.641 17:04:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.641 17:04:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.641 17:04:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.641 17:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:22.641 { 00:15:22.641 "cntlid": 63, 00:15:22.641 "qid": 0, 00:15:22.641 "state": "enabled", 00:15:22.641 "thread": "nvmf_tgt_poll_group_000", 00:15:22.641 "listen_address": { 00:15:22.641 "trtype": "TCP", 00:15:22.641 "adrfam": "IPv4", 00:15:22.641 "traddr": "10.0.0.2", 00:15:22.641 "trsvcid": "4420" 00:15:22.641 }, 00:15:22.641 "peer_address": { 00:15:22.641 "trtype": "TCP", 00:15:22.641 "adrfam": "IPv4", 00:15:22.641 "traddr": "10.0.0.1", 00:15:22.641 "trsvcid": "59092" 00:15:22.641 }, 00:15:22.641 "auth": { 00:15:22.641 "state": "completed", 00:15:22.641 "digest": "sha384", 00:15:22.641 "dhgroup": "ffdhe2048" 00:15:22.641 } 00:15:22.641 } 00:15:22.641 ]' 00:15:22.641 17:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:22.641 17:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:22.642 17:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:22.899 17:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:22.899 17:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:22.899 17:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.899 17:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.899 17:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:23.157 17:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTllN2IzYWI2MTQzYmI2ZmEyYTIwOTRjNjczZWUzZDJkOGI2MGU1MGNkMjc3MWJlZjQ4ZGU2NGRmY2VmYWFiZBnUjIE=: 00:15:24.090 17:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:24.090 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:24.090 17:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:24.090 17:04:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.090 17:04:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
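Each dhgroup pass in this trace repeats the same host/target RPC sequence for keys 0 through 3. For reference, a minimal sketch of one such pass run by hand against the same setup might look like the following; it reuses only the socket path, addresses, NQNs, key names, and flags that appear verbatim in the trace, and assumes the target side answers on its default RPC socket (everything else in the log, including the generated DHHC-1 secrets, is produced by the test itself):

  # host side: restrict DH-HMAC-CHAP negotiation to the digest/group under test
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

  # target side: allow the host NQN on the subsystem with the key pair being tested
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # host side: attach a controller over TCP, authenticating with that key pair
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # target side: confirm the qpair finished authentication with the expected digest/dhgroup
  scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'

  # tear down before the next key/dhgroup combination
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02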
00:15:24.090 17:04:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.090 17:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:24.090 17:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:24.090 17:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:24.090 17:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:24.347 17:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:15:24.347 17:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:24.347 17:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:24.347 17:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:24.347 17:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:24.347 17:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:24.347 17:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.347 17:04:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.347 17:04:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.347 17:04:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.347 17:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.347 17:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.604 00:15:24.604 17:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:24.604 17:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:24.604 17:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.861 17:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.861 17:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.861 17:04:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.861 17:04:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.861 17:04:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.861 17:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:24.861 { 
00:15:24.861 "cntlid": 65, 00:15:24.861 "qid": 0, 00:15:24.861 "state": "enabled", 00:15:24.861 "thread": "nvmf_tgt_poll_group_000", 00:15:24.861 "listen_address": { 00:15:24.861 "trtype": "TCP", 00:15:24.861 "adrfam": "IPv4", 00:15:24.861 "traddr": "10.0.0.2", 00:15:24.861 "trsvcid": "4420" 00:15:24.861 }, 00:15:24.861 "peer_address": { 00:15:24.861 "trtype": "TCP", 00:15:24.861 "adrfam": "IPv4", 00:15:24.861 "traddr": "10.0.0.1", 00:15:24.861 "trsvcid": "59108" 00:15:24.861 }, 00:15:24.861 "auth": { 00:15:24.861 "state": "completed", 00:15:24.861 "digest": "sha384", 00:15:24.861 "dhgroup": "ffdhe3072" 00:15:24.861 } 00:15:24.861 } 00:15:24.861 ]' 00:15:24.861 17:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:24.861 17:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:24.861 17:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:24.861 17:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:24.861 17:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:25.118 17:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:25.118 17:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:25.118 17:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:25.375 17:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:NjE1NDYwZjE0OTE1NTM0ZTRhMmE4ZWMxYjljODcyNWIyYTYzOThlM2U5YzUzMmRik7hf0A==: --dhchap-ctrl-secret DHHC-1:03:MzNiNDAwNGU0MzJiYzI4MTNiZWNhNThmMjkzZjc4MDFlMzkxNzc1Y2NkNGY4MzY3Y2RhMTQxM2Q3MWMwMTNiZmmQmLY=: 00:15:26.304 17:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:26.304 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:26.304 17:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:26.304 17:04:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.304 17:04:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.304 17:04:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.304 17:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:26.305 17:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:26.305 17:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:26.305 17:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:15:26.305 17:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:26.305 17:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:15:26.305 17:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:26.305 17:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:26.305 17:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:26.305 17:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.305 17:04:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.305 17:04:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.305 17:04:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.305 17:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.305 17:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.872 00:15:26.872 17:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:26.872 17:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:26.872 17:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.872 17:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.872 17:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.872 17:04:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.872 17:04:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.130 17:04:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.130 17:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:27.130 { 00:15:27.130 "cntlid": 67, 00:15:27.130 "qid": 0, 00:15:27.130 "state": "enabled", 00:15:27.130 "thread": "nvmf_tgt_poll_group_000", 00:15:27.130 "listen_address": { 00:15:27.130 "trtype": "TCP", 00:15:27.130 "adrfam": "IPv4", 00:15:27.130 "traddr": "10.0.0.2", 00:15:27.130 "trsvcid": "4420" 00:15:27.130 }, 00:15:27.130 "peer_address": { 00:15:27.130 "trtype": "TCP", 00:15:27.130 "adrfam": "IPv4", 00:15:27.130 "traddr": "10.0.0.1", 00:15:27.130 "trsvcid": "59146" 00:15:27.130 }, 00:15:27.130 "auth": { 00:15:27.130 "state": "completed", 00:15:27.130 "digest": "sha384", 00:15:27.130 "dhgroup": "ffdhe3072" 00:15:27.130 } 00:15:27.130 } 00:15:27.130 ]' 00:15:27.130 17:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:27.130 17:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:27.130 17:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:27.130 17:04:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:27.130 17:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:27.130 17:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:27.130 17:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.130 17:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.387 17:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:ZWE0NTQzOTk3MmU4MTNkNzU0Yjk1NWRlZjcwY2EyNzbhkDpr: --dhchap-ctrl-secret DHHC-1:02:MTA3MGQzYmVjYmMyN2I3YzY1ZWRkMzY4MWYzYjVjZmVjN2EzOTczYjAxZjllOTM2SxmZVQ==: 00:15:28.319 17:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.319 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.319 17:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:28.319 17:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.319 17:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.319 17:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.319 17:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:28.319 17:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:28.319 17:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:28.576 17:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:15:28.576 17:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:28.576 17:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:28.576 17:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:28.576 17:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:28.576 17:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.576 17:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:28.576 17:04:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.576 17:04:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.576 17:04:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.576 17:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:28.576 17:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.141 00:15:29.141 17:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:29.141 17:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:29.141 17:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.141 17:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.141 17:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.141 17:04:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.141 17:04:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.141 17:04:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.141 17:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:29.141 { 00:15:29.141 "cntlid": 69, 00:15:29.141 "qid": 0, 00:15:29.141 "state": "enabled", 00:15:29.141 "thread": "nvmf_tgt_poll_group_000", 00:15:29.141 "listen_address": { 00:15:29.141 "trtype": "TCP", 00:15:29.141 "adrfam": "IPv4", 00:15:29.141 "traddr": "10.0.0.2", 00:15:29.141 "trsvcid": "4420" 00:15:29.141 }, 00:15:29.141 "peer_address": { 00:15:29.141 "trtype": "TCP", 00:15:29.141 "adrfam": "IPv4", 00:15:29.141 "traddr": "10.0.0.1", 00:15:29.141 "trsvcid": "59162" 00:15:29.141 }, 00:15:29.141 "auth": { 00:15:29.141 "state": "completed", 00:15:29.141 "digest": "sha384", 00:15:29.141 "dhgroup": "ffdhe3072" 00:15:29.141 } 00:15:29.141 } 00:15:29.141 ]' 00:15:29.141 17:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:29.399 17:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:29.399 17:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:29.399 17:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:29.399 17:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:29.399 17:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:29.399 17:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.399 17:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.656 17:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YzBhZjI2Y2IyOWJjMDcxMTY1YjEwNzA5NzJlYzIwNTM1YWFhYmNhY2I3MmRmYzYwPYraQA==: --dhchap-ctrl-secret 
DHHC-1:01:MTM4N2VmMGJjNGRiZDM4MjdkNjY3ODhmMWI2ZWI0MTfgnYOW: 00:15:30.589 17:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.589 17:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:30.589 17:04:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.589 17:04:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.589 17:04:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.589 17:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:30.589 17:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:30.589 17:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:30.846 17:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:15:30.846 17:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:30.846 17:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:30.846 17:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:30.846 17:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:30.846 17:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:30.846 17:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:15:30.846 17:04:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.847 17:04:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.847 17:04:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.847 17:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:30.847 17:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:31.103 00:15:31.103 17:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:31.103 17:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:31.103 17:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.359 17:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.359 17:04:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:31.359 17:04:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.359 17:04:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.359 17:04:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.359 17:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:31.359 { 00:15:31.359 "cntlid": 71, 00:15:31.359 "qid": 0, 00:15:31.359 "state": "enabled", 00:15:31.359 "thread": "nvmf_tgt_poll_group_000", 00:15:31.359 "listen_address": { 00:15:31.359 "trtype": "TCP", 00:15:31.359 "adrfam": "IPv4", 00:15:31.359 "traddr": "10.0.0.2", 00:15:31.359 "trsvcid": "4420" 00:15:31.359 }, 00:15:31.359 "peer_address": { 00:15:31.359 "trtype": "TCP", 00:15:31.359 "adrfam": "IPv4", 00:15:31.359 "traddr": "10.0.0.1", 00:15:31.359 "trsvcid": "57494" 00:15:31.359 }, 00:15:31.359 "auth": { 00:15:31.359 "state": "completed", 00:15:31.359 "digest": "sha384", 00:15:31.359 "dhgroup": "ffdhe3072" 00:15:31.359 } 00:15:31.359 } 00:15:31.359 ]' 00:15:31.359 17:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:31.615 17:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:31.615 17:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:31.615 17:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:31.615 17:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:31.615 17:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:31.615 17:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:31.615 17:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.872 17:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTllN2IzYWI2MTQzYmI2ZmEyYTIwOTRjNjczZWUzZDJkOGI2MGU1MGNkMjc3MWJlZjQ4ZGU2NGRmY2VmYWFiZBnUjIE=: 00:15:32.801 17:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.801 17:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:32.801 17:04:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.801 17:04:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.801 17:04:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.801 17:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:32.801 17:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:32.801 17:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:32.801 17:04:32 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:33.058 17:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:15:33.058 17:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:33.058 17:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:33.058 17:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:33.058 17:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:33.058 17:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:33.058 17:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.058 17:04:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.058 17:04:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.058 17:04:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.058 17:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.058 17:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.622 00:15:33.622 17:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:33.622 17:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:33.622 17:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.879 17:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.879 17:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:33.879 17:04:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.879 17:04:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.879 17:04:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.879 17:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:33.879 { 00:15:33.879 "cntlid": 73, 00:15:33.879 "qid": 0, 00:15:33.879 "state": "enabled", 00:15:33.879 "thread": "nvmf_tgt_poll_group_000", 00:15:33.879 "listen_address": { 00:15:33.879 "trtype": "TCP", 00:15:33.879 "adrfam": "IPv4", 00:15:33.879 "traddr": "10.0.0.2", 00:15:33.879 "trsvcid": "4420" 00:15:33.879 }, 00:15:33.879 "peer_address": { 00:15:33.879 "trtype": "TCP", 00:15:33.879 "adrfam": "IPv4", 00:15:33.879 "traddr": "10.0.0.1", 00:15:33.879 "trsvcid": "57526" 00:15:33.879 }, 00:15:33.879 "auth": { 00:15:33.879 
"state": "completed", 00:15:33.879 "digest": "sha384", 00:15:33.879 "dhgroup": "ffdhe4096" 00:15:33.879 } 00:15:33.879 } 00:15:33.879 ]' 00:15:33.879 17:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:33.879 17:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:33.879 17:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:33.879 17:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:33.879 17:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:33.879 17:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:33.879 17:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:33.879 17:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.136 17:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:NjE1NDYwZjE0OTE1NTM0ZTRhMmE4ZWMxYjljODcyNWIyYTYzOThlM2U5YzUzMmRik7hf0A==: --dhchap-ctrl-secret DHHC-1:03:MzNiNDAwNGU0MzJiYzI4MTNiZWNhNThmMjkzZjc4MDFlMzkxNzc1Y2NkNGY4MzY3Y2RhMTQxM2Q3MWMwMTNiZmmQmLY=: 00:15:35.067 17:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.067 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.067 17:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:35.067 17:04:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.067 17:04:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.067 17:04:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.067 17:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:35.067 17:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:35.067 17:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:35.325 17:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:15:35.325 17:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:35.325 17:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:35.325 17:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:35.325 17:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:35.325 17:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.325 17:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.325 17:04:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.325 17:04:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.325 17:04:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.325 17:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.325 17:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.582 00:15:35.839 17:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:35.839 17:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:35.839 17:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.096 17:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.096 17:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.096 17:04:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.096 17:04:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.096 17:04:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.096 17:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:36.096 { 00:15:36.096 "cntlid": 75, 00:15:36.096 "qid": 0, 00:15:36.097 "state": "enabled", 00:15:36.097 "thread": "nvmf_tgt_poll_group_000", 00:15:36.097 "listen_address": { 00:15:36.097 "trtype": "TCP", 00:15:36.097 "adrfam": "IPv4", 00:15:36.097 "traddr": "10.0.0.2", 00:15:36.097 "trsvcid": "4420" 00:15:36.097 }, 00:15:36.097 "peer_address": { 00:15:36.097 "trtype": "TCP", 00:15:36.097 "adrfam": "IPv4", 00:15:36.097 "traddr": "10.0.0.1", 00:15:36.097 "trsvcid": "57570" 00:15:36.097 }, 00:15:36.097 "auth": { 00:15:36.097 "state": "completed", 00:15:36.097 "digest": "sha384", 00:15:36.097 "dhgroup": "ffdhe4096" 00:15:36.097 } 00:15:36.097 } 00:15:36.097 ]' 00:15:36.097 17:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:36.097 17:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:36.097 17:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:36.097 17:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:36.097 17:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:36.097 17:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.097 17:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.097 17:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.354 17:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:ZWE0NTQzOTk3MmU4MTNkNzU0Yjk1NWRlZjcwY2EyNzbhkDpr: --dhchap-ctrl-secret DHHC-1:02:MTA3MGQzYmVjYmMyN2I3YzY1ZWRkMzY4MWYzYjVjZmVjN2EzOTczYjAxZjllOTM2SxmZVQ==: 00:15:37.317 17:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:37.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:37.317 17:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:37.317 17:04:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.317 17:04:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.317 17:04:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.317 17:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:37.317 17:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:37.317 17:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:37.574 17:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:15:37.574 17:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:37.574 17:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:37.575 17:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:37.575 17:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:37.575 17:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.575 17:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.575 17:04:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.575 17:04:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.575 17:04:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.575 17:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.575 17:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:15:37.833 00:15:37.833 17:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:37.833 17:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:37.833 17:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:38.091 17:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.091 17:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:38.091 17:04:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.091 17:04:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.091 17:04:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.091 17:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:38.091 { 00:15:38.091 "cntlid": 77, 00:15:38.091 "qid": 0, 00:15:38.091 "state": "enabled", 00:15:38.091 "thread": "nvmf_tgt_poll_group_000", 00:15:38.091 "listen_address": { 00:15:38.091 "trtype": "TCP", 00:15:38.091 "adrfam": "IPv4", 00:15:38.091 "traddr": "10.0.0.2", 00:15:38.091 "trsvcid": "4420" 00:15:38.091 }, 00:15:38.091 "peer_address": { 00:15:38.091 "trtype": "TCP", 00:15:38.091 "adrfam": "IPv4", 00:15:38.091 "traddr": "10.0.0.1", 00:15:38.091 "trsvcid": "57608" 00:15:38.091 }, 00:15:38.091 "auth": { 00:15:38.091 "state": "completed", 00:15:38.091 "digest": "sha384", 00:15:38.091 "dhgroup": "ffdhe4096" 00:15:38.091 } 00:15:38.091 } 00:15:38.091 ]' 00:15:38.091 17:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:38.091 17:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:38.091 17:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:38.348 17:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:38.348 17:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:38.348 17:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:38.349 17:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:38.349 17:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.606 17:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YzBhZjI2Y2IyOWJjMDcxMTY1YjEwNzA5NzJlYzIwNTM1YWFhYmNhY2I3MmRmYzYwPYraQA==: --dhchap-ctrl-secret DHHC-1:01:MTM4N2VmMGJjNGRiZDM4MjdkNjY3ODhmMWI2ZWI0MTfgnYOW: 00:15:39.540 17:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.540 17:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:39.540 17:04:39 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.540 17:04:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.540 17:04:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.540 17:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:39.540 17:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:39.540 17:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:39.798 17:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:15:39.798 17:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:39.798 17:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:39.798 17:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:39.798 17:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:39.798 17:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.798 17:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:15:39.798 17:04:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.798 17:04:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.798 17:04:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.798 17:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:39.798 17:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:40.364 00:15:40.364 17:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:40.364 17:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:40.364 17:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.364 17:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.364 17:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.364 17:04:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.364 17:04:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.364 17:04:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.364 17:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:40.364 { 00:15:40.364 "cntlid": 79, 00:15:40.364 "qid": 
0, 00:15:40.364 "state": "enabled", 00:15:40.364 "thread": "nvmf_tgt_poll_group_000", 00:15:40.364 "listen_address": { 00:15:40.364 "trtype": "TCP", 00:15:40.364 "adrfam": "IPv4", 00:15:40.364 "traddr": "10.0.0.2", 00:15:40.364 "trsvcid": "4420" 00:15:40.364 }, 00:15:40.364 "peer_address": { 00:15:40.364 "trtype": "TCP", 00:15:40.364 "adrfam": "IPv4", 00:15:40.364 "traddr": "10.0.0.1", 00:15:40.364 "trsvcid": "57632" 00:15:40.364 }, 00:15:40.364 "auth": { 00:15:40.364 "state": "completed", 00:15:40.364 "digest": "sha384", 00:15:40.364 "dhgroup": "ffdhe4096" 00:15:40.364 } 00:15:40.364 } 00:15:40.364 ]' 00:15:40.364 17:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:40.622 17:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:40.622 17:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:40.622 17:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:40.622 17:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:40.622 17:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.622 17:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.622 17:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.879 17:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTllN2IzYWI2MTQzYmI2ZmEyYTIwOTRjNjczZWUzZDJkOGI2MGU1MGNkMjc3MWJlZjQ4ZGU2NGRmY2VmYWFiZBnUjIE=: 00:15:41.811 17:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.811 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.811 17:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:41.812 17:04:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.812 17:04:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.812 17:04:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.812 17:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:41.812 17:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:41.812 17:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:41.812 17:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:42.069 17:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:15:42.069 17:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:42.069 17:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:42.069 17:04:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:42.069 17:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:42.069 17:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.069 17:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.069 17:04:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.069 17:04:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.069 17:04:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.069 17:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.069 17:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.635 00:15:42.635 17:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:42.635 17:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:42.635 17:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.893 17:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.893 17:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.893 17:04:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.893 17:04:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.893 17:04:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.893 17:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:42.893 { 00:15:42.893 "cntlid": 81, 00:15:42.893 "qid": 0, 00:15:42.893 "state": "enabled", 00:15:42.893 "thread": "nvmf_tgt_poll_group_000", 00:15:42.893 "listen_address": { 00:15:42.893 "trtype": "TCP", 00:15:42.893 "adrfam": "IPv4", 00:15:42.893 "traddr": "10.0.0.2", 00:15:42.893 "trsvcid": "4420" 00:15:42.893 }, 00:15:42.893 "peer_address": { 00:15:42.893 "trtype": "TCP", 00:15:42.893 "adrfam": "IPv4", 00:15:42.893 "traddr": "10.0.0.1", 00:15:42.893 "trsvcid": "39154" 00:15:42.893 }, 00:15:42.893 "auth": { 00:15:42.893 "state": "completed", 00:15:42.893 "digest": "sha384", 00:15:42.893 "dhgroup": "ffdhe6144" 00:15:42.893 } 00:15:42.893 } 00:15:42.893 ]' 00:15:42.893 17:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:42.893 17:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:42.893 17:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:42.893 17:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:42.893 17:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:42.893 17:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.893 17:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.893 17:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.151 17:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:NjE1NDYwZjE0OTE1NTM0ZTRhMmE4ZWMxYjljODcyNWIyYTYzOThlM2U5YzUzMmRik7hf0A==: --dhchap-ctrl-secret DHHC-1:03:MzNiNDAwNGU0MzJiYzI4MTNiZWNhNThmMjkzZjc4MDFlMzkxNzc1Y2NkNGY4MzY3Y2RhMTQxM2Q3MWMwMTNiZmmQmLY=: 00:15:44.081 17:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.081 17:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:44.081 17:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.081 17:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.081 17:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.081 17:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:44.081 17:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:44.081 17:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:44.339 17:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:15:44.339 17:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:44.339 17:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:44.339 17:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:44.339 17:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:44.339 17:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:44.339 17:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.339 17:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.339 17:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.339 17:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.339 17:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.339 17:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.904 00:15:44.904 17:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:44.904 17:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:44.904 17:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.163 17:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.163 17:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.163 17:04:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.163 17:04:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.163 17:04:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.163 17:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:45.163 { 00:15:45.163 "cntlid": 83, 00:15:45.163 "qid": 0, 00:15:45.163 "state": "enabled", 00:15:45.163 "thread": "nvmf_tgt_poll_group_000", 00:15:45.163 "listen_address": { 00:15:45.163 "trtype": "TCP", 00:15:45.163 "adrfam": "IPv4", 00:15:45.163 "traddr": "10.0.0.2", 00:15:45.163 "trsvcid": "4420" 00:15:45.163 }, 00:15:45.163 "peer_address": { 00:15:45.163 "trtype": "TCP", 00:15:45.163 "adrfam": "IPv4", 00:15:45.163 "traddr": "10.0.0.1", 00:15:45.163 "trsvcid": "39180" 00:15:45.163 }, 00:15:45.163 "auth": { 00:15:45.163 "state": "completed", 00:15:45.163 "digest": "sha384", 00:15:45.163 "dhgroup": "ffdhe6144" 00:15:45.163 } 00:15:45.163 } 00:15:45.163 ]' 00:15:45.163 17:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:45.163 17:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:45.163 17:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:45.163 17:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:45.163 17:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:45.163 17:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.163 17:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.163 17:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.421 17:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:ZWE0NTQzOTk3MmU4MTNkNzU0Yjk1NWRlZjcwY2EyNzbhkDpr: --dhchap-ctrl-secret 
DHHC-1:02:MTA3MGQzYmVjYmMyN2I3YzY1ZWRkMzY4MWYzYjVjZmVjN2EzOTczYjAxZjllOTM2SxmZVQ==: 00:15:46.354 17:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.354 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.354 17:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:46.354 17:04:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.354 17:04:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.354 17:04:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.354 17:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:46.354 17:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:46.354 17:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:46.612 17:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:15:46.612 17:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:46.612 17:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:46.612 17:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:46.612 17:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:46.612 17:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.612 17:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.612 17:04:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.612 17:04:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.612 17:04:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.612 17:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.612 17:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.177 00:15:47.177 17:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:47.177 17:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:47.177 17:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.434 17:04:46 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.434 17:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.435 17:04:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.435 17:04:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.435 17:04:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.435 17:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:47.435 { 00:15:47.435 "cntlid": 85, 00:15:47.435 "qid": 0, 00:15:47.435 "state": "enabled", 00:15:47.435 "thread": "nvmf_tgt_poll_group_000", 00:15:47.435 "listen_address": { 00:15:47.435 "trtype": "TCP", 00:15:47.435 "adrfam": "IPv4", 00:15:47.435 "traddr": "10.0.0.2", 00:15:47.435 "trsvcid": "4420" 00:15:47.435 }, 00:15:47.435 "peer_address": { 00:15:47.435 "trtype": "TCP", 00:15:47.435 "adrfam": "IPv4", 00:15:47.435 "traddr": "10.0.0.1", 00:15:47.435 "trsvcid": "39198" 00:15:47.435 }, 00:15:47.435 "auth": { 00:15:47.435 "state": "completed", 00:15:47.435 "digest": "sha384", 00:15:47.435 "dhgroup": "ffdhe6144" 00:15:47.435 } 00:15:47.435 } 00:15:47.435 ]' 00:15:47.435 17:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:47.435 17:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:47.435 17:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:47.435 17:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:47.435 17:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:47.435 17:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.435 17:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.435 17:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.692 17:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YzBhZjI2Y2IyOWJjMDcxMTY1YjEwNzA5NzJlYzIwNTM1YWFhYmNhY2I3MmRmYzYwPYraQA==: --dhchap-ctrl-secret DHHC-1:01:MTM4N2VmMGJjNGRiZDM4MjdkNjY3ODhmMWI2ZWI0MTfgnYOW: 00:15:48.625 17:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.625 17:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:48.625 17:04:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.625 17:04:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.625 17:04:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.625 17:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:48.625 17:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
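At this point one authentication round has finished and the trace below begins the next round with the same sha384/ffdhe6144 pair and the next key index. Every round in this loop exercises the same DH-HMAC-CHAP sequence: bdev_nvme_set_options on the host RPC socket pins the digest and FFDHE group the initiator may negotiate, nvmf_subsystem_add_host authorizes the host NQN on the subsystem with a key (plus a controller key when mutual authentication is tested), bdev_nvme_attach_controller runs the handshake from the SPDK host stack, nvmf_subsystem_get_qpairs checks that the qpair reports auth state "completed" with the expected digest and dhgroup, and the kernel initiator then repeats the handshake via nvme connect. A minimal sketch of that sequence, assuming an SPDK target already listening on 10.0.0.2:4420, a host RPC server at /var/tmp/host.sock, and keys named key1/ckey1 registered the way the test prepared them earlier (host NQN left as a placeholder):

# target side: allow the host NQN and bind its DH-HMAC-CHAP keys (ckey1 enables mutual auth)
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <host-nqn> \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
# host side: restrict the negotiable digest/dhgroup, then attach (this performs the handshake)
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q <host-nqn> -n nqn.2024-03.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
# verify what was negotiated on the target side, then detach the host controller
scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq '.[0].auth'
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0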
00:15:48.625 17:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:48.883 17:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:15:48.883 17:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:48.883 17:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:48.883 17:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:48.883 17:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:48.883 17:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.883 17:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:15:48.883 17:04:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.883 17:04:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.883 17:04:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.883 17:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:48.883 17:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:49.450 00:15:49.450 17:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:49.450 17:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:49.450 17:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.708 17:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.708 17:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.708 17:04:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.708 17:04:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.708 17:04:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.708 17:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:49.708 { 00:15:49.708 "cntlid": 87, 00:15:49.708 "qid": 0, 00:15:49.708 "state": "enabled", 00:15:49.708 "thread": "nvmf_tgt_poll_group_000", 00:15:49.708 "listen_address": { 00:15:49.708 "trtype": "TCP", 00:15:49.708 "adrfam": "IPv4", 00:15:49.708 "traddr": "10.0.0.2", 00:15:49.708 "trsvcid": "4420" 00:15:49.708 }, 00:15:49.708 "peer_address": { 00:15:49.708 "trtype": "TCP", 00:15:49.708 "adrfam": "IPv4", 00:15:49.708 "traddr": "10.0.0.1", 00:15:49.708 "trsvcid": "39226" 00:15:49.708 }, 00:15:49.708 "auth": { 00:15:49.708 "state": "completed", 
00:15:49.708 "digest": "sha384", 00:15:49.708 "dhgroup": "ffdhe6144" 00:15:49.708 } 00:15:49.708 } 00:15:49.708 ]' 00:15:49.708 17:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:49.966 17:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:49.966 17:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:49.966 17:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:49.966 17:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:49.966 17:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.966 17:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.966 17:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.223 17:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTllN2IzYWI2MTQzYmI2ZmEyYTIwOTRjNjczZWUzZDJkOGI2MGU1MGNkMjc3MWJlZjQ4ZGU2NGRmY2VmYWFiZBnUjIE=: 00:15:51.156 17:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.156 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.156 17:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:51.156 17:04:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.156 17:04:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.156 17:04:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.157 17:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:51.157 17:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:51.157 17:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:51.157 17:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:51.414 17:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:15:51.414 17:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:51.414 17:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:51.414 17:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:51.414 17:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:51.414 17:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.414 17:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:15:51.414 17:04:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.414 17:04:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.414 17:04:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.414 17:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.414 17:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.342 00:15:52.342 17:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:52.342 17:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:52.342 17:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.342 17:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.342 17:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.342 17:04:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.342 17:04:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.342 17:04:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.342 17:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:52.342 { 00:15:52.342 "cntlid": 89, 00:15:52.342 "qid": 0, 00:15:52.342 "state": "enabled", 00:15:52.342 "thread": "nvmf_tgt_poll_group_000", 00:15:52.342 "listen_address": { 00:15:52.342 "trtype": "TCP", 00:15:52.342 "adrfam": "IPv4", 00:15:52.342 "traddr": "10.0.0.2", 00:15:52.342 "trsvcid": "4420" 00:15:52.342 }, 00:15:52.342 "peer_address": { 00:15:52.342 "trtype": "TCP", 00:15:52.342 "adrfam": "IPv4", 00:15:52.342 "traddr": "10.0.0.1", 00:15:52.342 "trsvcid": "33788" 00:15:52.342 }, 00:15:52.342 "auth": { 00:15:52.342 "state": "completed", 00:15:52.342 "digest": "sha384", 00:15:52.342 "dhgroup": "ffdhe8192" 00:15:52.342 } 00:15:52.342 } 00:15:52.342 ]' 00:15:52.342 17:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:52.598 17:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:52.598 17:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:52.598 17:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:52.598 17:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:52.598 17:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.598 17:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.598 17:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.854 17:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:NjE1NDYwZjE0OTE1NTM0ZTRhMmE4ZWMxYjljODcyNWIyYTYzOThlM2U5YzUzMmRik7hf0A==: --dhchap-ctrl-secret DHHC-1:03:MzNiNDAwNGU0MzJiYzI4MTNiZWNhNThmMjkzZjc4MDFlMzkxNzc1Y2NkNGY4MzY3Y2RhMTQxM2Q3MWMwMTNiZmmQmLY=: 00:15:53.785 17:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.785 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.785 17:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:53.785 17:04:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.785 17:04:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.785 17:04:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.785 17:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:53.785 17:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:53.785 17:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:54.042 17:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:15:54.042 17:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:54.042 17:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:54.042 17:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:54.042 17:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:54.042 17:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.042 17:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.042 17:04:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.042 17:04:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.042 17:04:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.042 17:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.042 17:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
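The attach above is followed below by the same qpair verification and teardown as in earlier rounds, and then by the kernel-initiator leg: the SPDK host controller is detached and the handshake is repeated through the Linux NVMe/TCP initiator, with the shared secrets passed to nvme-cli as DHHC-1 strings. A sketch of that leg, with the secrets and host identity left as placeholders rather than the real key material from this run:

# kernel initiator: connect with DH-HMAC-CHAP secrets (DHHC-1 representation), then tear down
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q <host-nqn> --hostid <host-uuid> \
    --dhchap-secret 'DHHC-1:01:<base64-host-secret>:' \
    --dhchap-ctrl-secret 'DHHC-1:03:<base64-ctrl-secret>:'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
# deauthorize the host again so the next key/dhgroup combination starts clean
scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 <host-nqn>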
00:15:54.973 00:15:54.973 17:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:54.973 17:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:54.973 17:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.973 17:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.973 17:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.973 17:04:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.973 17:04:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.973 17:04:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.973 17:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:54.973 { 00:15:54.973 "cntlid": 91, 00:15:54.973 "qid": 0, 00:15:54.973 "state": "enabled", 00:15:54.973 "thread": "nvmf_tgt_poll_group_000", 00:15:54.973 "listen_address": { 00:15:54.973 "trtype": "TCP", 00:15:54.973 "adrfam": "IPv4", 00:15:54.973 "traddr": "10.0.0.2", 00:15:54.973 "trsvcid": "4420" 00:15:54.973 }, 00:15:54.973 "peer_address": { 00:15:54.973 "trtype": "TCP", 00:15:54.973 "adrfam": "IPv4", 00:15:54.974 "traddr": "10.0.0.1", 00:15:54.974 "trsvcid": "33808" 00:15:54.974 }, 00:15:54.974 "auth": { 00:15:54.974 "state": "completed", 00:15:54.974 "digest": "sha384", 00:15:54.974 "dhgroup": "ffdhe8192" 00:15:54.974 } 00:15:54.974 } 00:15:54.974 ]' 00:15:54.974 17:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:55.231 17:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:55.231 17:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:55.231 17:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:55.231 17:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:55.231 17:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.231 17:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.231 17:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.488 17:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:ZWE0NTQzOTk3MmU4MTNkNzU0Yjk1NWRlZjcwY2EyNzbhkDpr: --dhchap-ctrl-secret DHHC-1:02:MTA3MGQzYmVjYmMyN2I3YzY1ZWRkMzY4MWYzYjVjZmVjN2EzOTczYjAxZjllOTM2SxmZVQ==: 00:15:56.419 17:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.419 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.419 17:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:56.419 17:04:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:15:56.419 17:04:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.419 17:04:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.419 17:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:56.419 17:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:56.419 17:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:56.675 17:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:15:56.675 17:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:56.675 17:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:56.675 17:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:56.675 17:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:56.675 17:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.675 17:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.675 17:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.675 17:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.675 17:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.675 17:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.675 17:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.605 00:15:57.605 17:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:57.605 17:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:57.605 17:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.863 17:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.863 17:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.863 17:04:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.863 17:04:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.863 17:04:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.863 17:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:57.863 { 
00:15:57.863 "cntlid": 93, 00:15:57.863 "qid": 0, 00:15:57.863 "state": "enabled", 00:15:57.863 "thread": "nvmf_tgt_poll_group_000", 00:15:57.863 "listen_address": { 00:15:57.863 "trtype": "TCP", 00:15:57.863 "adrfam": "IPv4", 00:15:57.863 "traddr": "10.0.0.2", 00:15:57.863 "trsvcid": "4420" 00:15:57.863 }, 00:15:57.863 "peer_address": { 00:15:57.863 "trtype": "TCP", 00:15:57.863 "adrfam": "IPv4", 00:15:57.863 "traddr": "10.0.0.1", 00:15:57.863 "trsvcid": "33838" 00:15:57.863 }, 00:15:57.863 "auth": { 00:15:57.863 "state": "completed", 00:15:57.863 "digest": "sha384", 00:15:57.863 "dhgroup": "ffdhe8192" 00:15:57.863 } 00:15:57.863 } 00:15:57.863 ]' 00:15:57.863 17:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:57.863 17:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:57.863 17:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:57.863 17:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:57.863 17:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:57.863 17:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.863 17:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.863 17:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.120 17:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YzBhZjI2Y2IyOWJjMDcxMTY1YjEwNzA5NzJlYzIwNTM1YWFhYmNhY2I3MmRmYzYwPYraQA==: --dhchap-ctrl-secret DHHC-1:01:MTM4N2VmMGJjNGRiZDM4MjdkNjY3ODhmMWI2ZWI0MTfgnYOW: 00:15:59.107 17:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.107 17:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:59.107 17:04:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.107 17:04:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.107 17:04:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.107 17:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:59.107 17:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:59.107 17:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:59.364 17:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:15:59.364 17:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:59.364 17:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:59.364 17:04:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:59.364 17:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:59.364 17:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.364 17:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:15:59.364 17:04:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.364 17:04:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.364 17:04:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.364 17:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:59.364 17:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:59.982 00:16:00.239 17:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:00.239 17:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:00.239 17:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.495 17:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.495 17:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.495 17:04:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.495 17:04:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.495 17:04:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.495 17:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:00.495 { 00:16:00.495 "cntlid": 95, 00:16:00.495 "qid": 0, 00:16:00.495 "state": "enabled", 00:16:00.495 "thread": "nvmf_tgt_poll_group_000", 00:16:00.495 "listen_address": { 00:16:00.495 "trtype": "TCP", 00:16:00.495 "adrfam": "IPv4", 00:16:00.495 "traddr": "10.0.0.2", 00:16:00.495 "trsvcid": "4420" 00:16:00.495 }, 00:16:00.495 "peer_address": { 00:16:00.495 "trtype": "TCP", 00:16:00.495 "adrfam": "IPv4", 00:16:00.495 "traddr": "10.0.0.1", 00:16:00.495 "trsvcid": "33868" 00:16:00.495 }, 00:16:00.495 "auth": { 00:16:00.495 "state": "completed", 00:16:00.495 "digest": "sha384", 00:16:00.495 "dhgroup": "ffdhe8192" 00:16:00.495 } 00:16:00.495 } 00:16:00.495 ]' 00:16:00.495 17:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:00.495 17:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:00.495 17:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:00.495 17:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:00.495 17:05:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:00.495 17:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.495 17:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.495 17:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.752 17:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTllN2IzYWI2MTQzYmI2ZmEyYTIwOTRjNjczZWUzZDJkOGI2MGU1MGNkMjc3MWJlZjQ4ZGU2NGRmY2VmYWFiZBnUjIE=: 00:16:01.682 17:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.682 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.682 17:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:01.682 17:05:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.682 17:05:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.682 17:05:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.682 17:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:01.682 17:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:01.682 17:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:01.682 17:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:01.682 17:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:01.940 17:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:16:01.940 17:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:01.940 17:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:01.940 17:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:01.940 17:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:01.940 17:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.940 17:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.940 17:05:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.940 17:05:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.940 17:05:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.940 17:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.940 17:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.197 00:16:02.197 17:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:02.197 17:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:02.197 17:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.454 17:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.454 17:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.454 17:05:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.454 17:05:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.454 17:05:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.712 17:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:02.712 { 00:16:02.712 "cntlid": 97, 00:16:02.712 "qid": 0, 00:16:02.712 "state": "enabled", 00:16:02.712 "thread": "nvmf_tgt_poll_group_000", 00:16:02.712 "listen_address": { 00:16:02.712 "trtype": "TCP", 00:16:02.712 "adrfam": "IPv4", 00:16:02.712 "traddr": "10.0.0.2", 00:16:02.712 "trsvcid": "4420" 00:16:02.712 }, 00:16:02.712 "peer_address": { 00:16:02.712 "trtype": "TCP", 00:16:02.712 "adrfam": "IPv4", 00:16:02.712 "traddr": "10.0.0.1", 00:16:02.712 "trsvcid": "53378" 00:16:02.712 }, 00:16:02.712 "auth": { 00:16:02.712 "state": "completed", 00:16:02.712 "digest": "sha512", 00:16:02.712 "dhgroup": "null" 00:16:02.712 } 00:16:02.712 } 00:16:02.712 ]' 00:16:02.712 17:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:02.712 17:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:02.712 17:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:02.712 17:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:02.712 17:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:02.712 17:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.712 17:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.712 17:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.969 17:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:NjE1NDYwZjE0OTE1NTM0ZTRhMmE4ZWMxYjljODcyNWIyYTYzOThlM2U5YzUzMmRik7hf0A==: --dhchap-ctrl-secret 
DHHC-1:03:MzNiNDAwNGU0MzJiYzI4MTNiZWNhNThmMjkzZjc4MDFlMzkxNzc1Y2NkNGY4MzY3Y2RhMTQxM2Q3MWMwMTNiZmmQmLY=: 00:16:03.901 17:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.901 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.901 17:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:03.901 17:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.901 17:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.901 17:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.901 17:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:03.901 17:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:03.901 17:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:04.159 17:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:16:04.159 17:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:04.159 17:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:04.159 17:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:04.159 17:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:04.159 17:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.159 17:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.159 17:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.159 17:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.159 17:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.159 17:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.159 17:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.416 00:16:04.416 17:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:04.416 17:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:04.416 17:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.676 17:05:04 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.676 17:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.933 17:05:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.933 17:05:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.933 17:05:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.933 17:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:04.933 { 00:16:04.933 "cntlid": 99, 00:16:04.933 "qid": 0, 00:16:04.933 "state": "enabled", 00:16:04.933 "thread": "nvmf_tgt_poll_group_000", 00:16:04.933 "listen_address": { 00:16:04.933 "trtype": "TCP", 00:16:04.934 "adrfam": "IPv4", 00:16:04.934 "traddr": "10.0.0.2", 00:16:04.934 "trsvcid": "4420" 00:16:04.934 }, 00:16:04.934 "peer_address": { 00:16:04.934 "trtype": "TCP", 00:16:04.934 "adrfam": "IPv4", 00:16:04.934 "traddr": "10.0.0.1", 00:16:04.934 "trsvcid": "53398" 00:16:04.934 }, 00:16:04.934 "auth": { 00:16:04.934 "state": "completed", 00:16:04.934 "digest": "sha512", 00:16:04.934 "dhgroup": "null" 00:16:04.934 } 00:16:04.934 } 00:16:04.934 ]' 00:16:04.934 17:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:04.934 17:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:04.934 17:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:04.934 17:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:04.934 17:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:04.934 17:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.934 17:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.934 17:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.191 17:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:ZWE0NTQzOTk3MmU4MTNkNzU0Yjk1NWRlZjcwY2EyNzbhkDpr: --dhchap-ctrl-secret DHHC-1:02:MTA3MGQzYmVjYmMyN2I3YzY1ZWRkMzY4MWYzYjVjZmVjN2EzOTczYjAxZjllOTM2SxmZVQ==: 00:16:06.123 17:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.123 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.123 17:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:06.123 17:05:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.123 17:05:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.123 17:05:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.123 17:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:06.123 17:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:06.123 17:05:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:06.380 17:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:16:06.380 17:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:06.380 17:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:06.380 17:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:06.380 17:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:06.380 17:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.380 17:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.380 17:05:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.380 17:05:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.380 17:05:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.380 17:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.380 17:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.945 00:16:06.945 17:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:06.945 17:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:06.945 17:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.945 17:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.945 17:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.945 17:05:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.945 17:05:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.203 17:05:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.203 17:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:07.203 { 00:16:07.203 "cntlid": 101, 00:16:07.203 "qid": 0, 00:16:07.203 "state": "enabled", 00:16:07.203 "thread": "nvmf_tgt_poll_group_000", 00:16:07.203 "listen_address": { 00:16:07.203 "trtype": "TCP", 00:16:07.203 "adrfam": "IPv4", 00:16:07.203 "traddr": "10.0.0.2", 00:16:07.203 "trsvcid": "4420" 00:16:07.203 }, 00:16:07.203 "peer_address": { 00:16:07.203 "trtype": "TCP", 00:16:07.203 "adrfam": "IPv4", 00:16:07.203 "traddr": "10.0.0.1", 00:16:07.203 "trsvcid": "53422" 00:16:07.203 }, 00:16:07.204 "auth": 
{ 00:16:07.204 "state": "completed", 00:16:07.204 "digest": "sha512", 00:16:07.204 "dhgroup": "null" 00:16:07.204 } 00:16:07.204 } 00:16:07.204 ]' 00:16:07.204 17:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:07.204 17:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:07.204 17:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:07.204 17:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:07.204 17:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:07.204 17:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.204 17:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.204 17:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.461 17:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YzBhZjI2Y2IyOWJjMDcxMTY1YjEwNzA5NzJlYzIwNTM1YWFhYmNhY2I3MmRmYzYwPYraQA==: --dhchap-ctrl-secret DHHC-1:01:MTM4N2VmMGJjNGRiZDM4MjdkNjY3ODhmMWI2ZWI0MTfgnYOW: 00:16:08.392 17:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.392 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.392 17:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:08.393 17:05:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.393 17:05:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.393 17:05:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.393 17:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:08.393 17:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:08.393 17:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:08.650 17:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:16:08.650 17:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:08.650 17:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:08.650 17:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:08.650 17:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:08.650 17:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.650 17:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:08.650 17:05:08 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.650 17:05:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.650 17:05:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.650 17:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:08.650 17:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:08.908 00:16:08.908 17:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:08.908 17:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.908 17:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:09.166 17:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.166 17:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.166 17:05:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.166 17:05:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.166 17:05:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.166 17:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:09.166 { 00:16:09.166 "cntlid": 103, 00:16:09.166 "qid": 0, 00:16:09.166 "state": "enabled", 00:16:09.166 "thread": "nvmf_tgt_poll_group_000", 00:16:09.166 "listen_address": { 00:16:09.166 "trtype": "TCP", 00:16:09.166 "adrfam": "IPv4", 00:16:09.166 "traddr": "10.0.0.2", 00:16:09.166 "trsvcid": "4420" 00:16:09.166 }, 00:16:09.166 "peer_address": { 00:16:09.166 "trtype": "TCP", 00:16:09.166 "adrfam": "IPv4", 00:16:09.166 "traddr": "10.0.0.1", 00:16:09.166 "trsvcid": "53454" 00:16:09.166 }, 00:16:09.166 "auth": { 00:16:09.166 "state": "completed", 00:16:09.166 "digest": "sha512", 00:16:09.166 "dhgroup": "null" 00:16:09.166 } 00:16:09.166 } 00:16:09.166 ]' 00:16:09.166 17:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:09.166 17:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:09.166 17:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:09.166 17:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:09.166 17:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:09.423 17:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.423 17:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.423 17:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.423 17:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTllN2IzYWI2MTQzYmI2ZmEyYTIwOTRjNjczZWUzZDJkOGI2MGU1MGNkMjc3MWJlZjQ4ZGU2NGRmY2VmYWFiZBnUjIE=: 00:16:10.353 17:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.353 17:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:10.353 17:05:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.353 17:05:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.353 17:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.353 17:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:10.353 17:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:10.353 17:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:10.353 17:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:10.611 17:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:16:10.611 17:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:10.611 17:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:10.611 17:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:10.611 17:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:10.611 17:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.611 17:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.611 17:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.611 17:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.611 17:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.611 17:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.611 17:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.177 00:16:11.177 17:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:11.177 17:05:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:11.177 17:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.177 17:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.177 17:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.177 17:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.177 17:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.177 17:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.177 17:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:11.177 { 00:16:11.177 "cntlid": 105, 00:16:11.177 "qid": 0, 00:16:11.177 "state": "enabled", 00:16:11.177 "thread": "nvmf_tgt_poll_group_000", 00:16:11.177 "listen_address": { 00:16:11.177 "trtype": "TCP", 00:16:11.177 "adrfam": "IPv4", 00:16:11.177 "traddr": "10.0.0.2", 00:16:11.177 "trsvcid": "4420" 00:16:11.177 }, 00:16:11.177 "peer_address": { 00:16:11.177 "trtype": "TCP", 00:16:11.177 "adrfam": "IPv4", 00:16:11.177 "traddr": "10.0.0.1", 00:16:11.177 "trsvcid": "43456" 00:16:11.177 }, 00:16:11.177 "auth": { 00:16:11.177 "state": "completed", 00:16:11.177 "digest": "sha512", 00:16:11.177 "dhgroup": "ffdhe2048" 00:16:11.177 } 00:16:11.177 } 00:16:11.177 ]' 00:16:11.177 17:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:11.434 17:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:11.434 17:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:11.434 17:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:11.434 17:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:11.434 17:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.434 17:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.434 17:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.691 17:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:NjE1NDYwZjE0OTE1NTM0ZTRhMmE4ZWMxYjljODcyNWIyYTYzOThlM2U5YzUzMmRik7hf0A==: --dhchap-ctrl-secret DHHC-1:03:MzNiNDAwNGU0MzJiYzI4MTNiZWNhNThmMjkzZjc4MDFlMzkxNzc1Y2NkNGY4MzY3Y2RhMTQxM2Q3MWMwMTNiZmmQmLY=: 00:16:12.623 17:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.624 17:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:12.624 17:05:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.624 17:05:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
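For reference, each iteration of the loop traced above (one digest/dhgroup/key combination in target/auth.sh) boils down to the shell sketch below. The paths, addresses, NQNs and host ID are the ones from this run; the target-side RPC socket is assumed to be SPDK's default (the test's rpc_cmd wrapper does not show it), and $DHCHAP_SECRET stands in for the DHHC-1 secret matching the selected key — both are assumptions for illustration, not part of the logged commands.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
SUBNQN=nqn.2024-03.io.spdk:cnode0
DIGEST=sha512 DHGROUP=ffdhe2048 KEY=key0        # one combination per iteration

# Host (initiator) RPC socket: restrict DH-HMAC-CHAP to one digest/DH group.
"$SPDK"/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests "$DIGEST" --dhchap-dhgroups "$DHGROUP"

# Target side: allow the host on the subsystem with the chosen keys
# (the controller key is omitted for keys without one, e.g. key3 above).
"$SPDK"/scripts/rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key "$KEY" --dhchap-ctrlr-key "c$KEY"

# Host side: attach, confirm the controller exists and the qpair authenticated,
# then detach. The log also checks .auth.digest and .auth.dhgroup the same way.
"$SPDK"/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
        --dhchap-key "$KEY" --dhchap-ctrlr-key "c$KEY"
"$SPDK"/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
"$SPDK"/scripts/rpc.py nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'        # expect "completed"
"$SPDK"/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

# Kernel initiator: repeat the handshake with nvme-cli, then clean up.
# (--dhchap-ctrl-secret is also passed when the key has a controller secret.)
nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
        --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret "$DHCHAP_SECRET"
nvme disconnect -n "$SUBNQN"
"$SPDK"/scripts/rpc.py nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"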
00:16:12.624 17:05:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.624 17:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:12.624 17:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:12.624 17:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:12.881 17:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:16:12.881 17:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:12.881 17:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:12.881 17:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:12.881 17:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:12.881 17:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.881 17:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.881 17:05:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.881 17:05:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.881 17:05:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.881 17:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.881 17:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:13.139 00:16:13.139 17:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:13.139 17:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:13.139 17:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.404 17:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.404 17:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.404 17:05:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.404 17:05:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.404 17:05:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.404 17:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:13.404 { 00:16:13.404 "cntlid": 107, 00:16:13.404 "qid": 0, 00:16:13.404 "state": "enabled", 00:16:13.404 "thread": 
"nvmf_tgt_poll_group_000", 00:16:13.404 "listen_address": { 00:16:13.404 "trtype": "TCP", 00:16:13.404 "adrfam": "IPv4", 00:16:13.404 "traddr": "10.0.0.2", 00:16:13.404 "trsvcid": "4420" 00:16:13.404 }, 00:16:13.404 "peer_address": { 00:16:13.404 "trtype": "TCP", 00:16:13.404 "adrfam": "IPv4", 00:16:13.404 "traddr": "10.0.0.1", 00:16:13.404 "trsvcid": "43480" 00:16:13.404 }, 00:16:13.404 "auth": { 00:16:13.404 "state": "completed", 00:16:13.404 "digest": "sha512", 00:16:13.404 "dhgroup": "ffdhe2048" 00:16:13.404 } 00:16:13.404 } 00:16:13.404 ]' 00:16:13.404 17:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:13.404 17:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:13.404 17:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:13.404 17:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:13.404 17:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:13.404 17:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.404 17:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.404 17:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.662 17:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:ZWE0NTQzOTk3MmU4MTNkNzU0Yjk1NWRlZjcwY2EyNzbhkDpr: --dhchap-ctrl-secret DHHC-1:02:MTA3MGQzYmVjYmMyN2I3YzY1ZWRkMzY4MWYzYjVjZmVjN2EzOTczYjAxZjllOTM2SxmZVQ==: 00:16:14.594 17:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.594 17:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:14.594 17:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.594 17:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.594 17:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.594 17:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:14.594 17:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:14.594 17:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:14.852 17:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:16:14.852 17:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:14.852 17:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:14.852 17:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:14.852 17:05:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:14.852 17:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.852 17:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:14.852 17:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.852 17:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.852 17:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.852 17:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:14.852 17:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.110 00:16:15.110 17:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:15.110 17:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:15.110 17:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.368 17:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.368 17:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.368 17:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.368 17:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.368 17:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.368 17:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:15.368 { 00:16:15.368 "cntlid": 109, 00:16:15.368 "qid": 0, 00:16:15.368 "state": "enabled", 00:16:15.368 "thread": "nvmf_tgt_poll_group_000", 00:16:15.368 "listen_address": { 00:16:15.368 "trtype": "TCP", 00:16:15.368 "adrfam": "IPv4", 00:16:15.368 "traddr": "10.0.0.2", 00:16:15.368 "trsvcid": "4420" 00:16:15.368 }, 00:16:15.368 "peer_address": { 00:16:15.368 "trtype": "TCP", 00:16:15.368 "adrfam": "IPv4", 00:16:15.368 "traddr": "10.0.0.1", 00:16:15.368 "trsvcid": "43510" 00:16:15.368 }, 00:16:15.368 "auth": { 00:16:15.368 "state": "completed", 00:16:15.368 "digest": "sha512", 00:16:15.368 "dhgroup": "ffdhe2048" 00:16:15.368 } 00:16:15.368 } 00:16:15.368 ]' 00:16:15.368 17:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:15.625 17:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:15.625 17:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:15.625 17:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:15.625 17:05:15 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:15.625 17:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.625 17:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.625 17:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.882 17:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YzBhZjI2Y2IyOWJjMDcxMTY1YjEwNzA5NzJlYzIwNTM1YWFhYmNhY2I3MmRmYzYwPYraQA==: --dhchap-ctrl-secret DHHC-1:01:MTM4N2VmMGJjNGRiZDM4MjdkNjY3ODhmMWI2ZWI0MTfgnYOW: 00:16:16.814 17:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.814 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.814 17:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:16.815 17:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.815 17:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.815 17:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.815 17:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:16.815 17:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:16.815 17:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:17.072 17:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:16:17.072 17:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:17.072 17:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:17.072 17:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:17.072 17:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:17.072 17:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.072 17:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:17.072 17:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.072 17:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.072 17:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.072 17:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:17.072 17:05:16 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:17.330 00:16:17.330 17:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:17.330 17:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:17.330 17:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.588 17:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.588 17:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.588 17:05:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.588 17:05:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.588 17:05:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.588 17:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:17.588 { 00:16:17.588 "cntlid": 111, 00:16:17.588 "qid": 0, 00:16:17.588 "state": "enabled", 00:16:17.588 "thread": "nvmf_tgt_poll_group_000", 00:16:17.588 "listen_address": { 00:16:17.588 "trtype": "TCP", 00:16:17.588 "adrfam": "IPv4", 00:16:17.588 "traddr": "10.0.0.2", 00:16:17.588 "trsvcid": "4420" 00:16:17.588 }, 00:16:17.588 "peer_address": { 00:16:17.588 "trtype": "TCP", 00:16:17.588 "adrfam": "IPv4", 00:16:17.588 "traddr": "10.0.0.1", 00:16:17.588 "trsvcid": "43544" 00:16:17.588 }, 00:16:17.588 "auth": { 00:16:17.588 "state": "completed", 00:16:17.588 "digest": "sha512", 00:16:17.588 "dhgroup": "ffdhe2048" 00:16:17.588 } 00:16:17.588 } 00:16:17.588 ]' 00:16:17.588 17:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:17.588 17:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:17.588 17:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:17.588 17:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:17.588 17:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:17.846 17:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.846 17:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.846 17:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.103 17:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTllN2IzYWI2MTQzYmI2ZmEyYTIwOTRjNjczZWUzZDJkOGI2MGU1MGNkMjc3MWJlZjQ4ZGU2NGRmY2VmYWFiZBnUjIE=: 00:16:19.036 17:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.036 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.036 17:05:18 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:19.036 17:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.036 17:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.036 17:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.036 17:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:19.036 17:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:19.036 17:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:19.036 17:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:19.293 17:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:16:19.293 17:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:19.293 17:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:19.293 17:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:19.293 17:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:19.293 17:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.293 17:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.293 17:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.293 17:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.293 17:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.293 17:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.293 17:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.582 00:16:19.582 17:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:19.582 17:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:19.582 17:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.884 17:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.884 17:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.884 17:05:19 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.884 17:05:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.885 17:05:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.885 17:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:19.885 { 00:16:19.885 "cntlid": 113, 00:16:19.885 "qid": 0, 00:16:19.885 "state": "enabled", 00:16:19.885 "thread": "nvmf_tgt_poll_group_000", 00:16:19.885 "listen_address": { 00:16:19.885 "trtype": "TCP", 00:16:19.885 "adrfam": "IPv4", 00:16:19.885 "traddr": "10.0.0.2", 00:16:19.885 "trsvcid": "4420" 00:16:19.885 }, 00:16:19.885 "peer_address": { 00:16:19.885 "trtype": "TCP", 00:16:19.885 "adrfam": "IPv4", 00:16:19.885 "traddr": "10.0.0.1", 00:16:19.885 "trsvcid": "43584" 00:16:19.885 }, 00:16:19.885 "auth": { 00:16:19.885 "state": "completed", 00:16:19.885 "digest": "sha512", 00:16:19.885 "dhgroup": "ffdhe3072" 00:16:19.885 } 00:16:19.885 } 00:16:19.885 ]' 00:16:19.885 17:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:19.885 17:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:19.885 17:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:19.885 17:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:19.885 17:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:19.885 17:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.885 17:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.885 17:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.143 17:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:NjE1NDYwZjE0OTE1NTM0ZTRhMmE4ZWMxYjljODcyNWIyYTYzOThlM2U5YzUzMmRik7hf0A==: --dhchap-ctrl-secret DHHC-1:03:MzNiNDAwNGU0MzJiYzI4MTNiZWNhNThmMjkzZjc4MDFlMzkxNzc1Y2NkNGY4MzY3Y2RhMTQxM2Q3MWMwMTNiZmmQmLY=: 00:16:21.075 17:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.075 17:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:21.075 17:05:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.075 17:05:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.075 17:05:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.075 17:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:21.075 17:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:21.075 17:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:21.333 17:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:16:21.333 17:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:21.333 17:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:21.333 17:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:21.333 17:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:21.333 17:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.333 17:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.333 17:05:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.333 17:05:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.333 17:05:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.333 17:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.333 17:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.898 00:16:21.898 17:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:21.898 17:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:21.898 17:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.898 17:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.898 17:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.898 17:05:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.898 17:05:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.898 17:05:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.898 17:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:21.898 { 00:16:21.898 "cntlid": 115, 00:16:21.898 "qid": 0, 00:16:21.898 "state": "enabled", 00:16:21.898 "thread": "nvmf_tgt_poll_group_000", 00:16:21.898 "listen_address": { 00:16:21.898 "trtype": "TCP", 00:16:21.898 "adrfam": "IPv4", 00:16:21.898 "traddr": "10.0.0.2", 00:16:21.898 "trsvcid": "4420" 00:16:21.898 }, 00:16:21.898 "peer_address": { 00:16:21.898 "trtype": "TCP", 00:16:21.898 "adrfam": "IPv4", 00:16:21.898 "traddr": "10.0.0.1", 00:16:21.898 "trsvcid": "33266" 00:16:21.898 }, 00:16:21.898 "auth": { 00:16:21.898 "state": "completed", 00:16:21.898 "digest": "sha512", 00:16:21.898 "dhgroup": "ffdhe3072" 00:16:21.898 } 00:16:21.898 } 
00:16:21.898 ]' 00:16:21.898 17:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:22.156 17:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:22.156 17:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:22.156 17:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:22.156 17:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:22.156 17:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.156 17:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.156 17:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.413 17:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:ZWE0NTQzOTk3MmU4MTNkNzU0Yjk1NWRlZjcwY2EyNzbhkDpr: --dhchap-ctrl-secret DHHC-1:02:MTA3MGQzYmVjYmMyN2I3YzY1ZWRkMzY4MWYzYjVjZmVjN2EzOTczYjAxZjllOTM2SxmZVQ==: 00:16:23.357 17:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.357 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.357 17:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:23.357 17:05:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.357 17:05:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.357 17:05:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.357 17:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:23.357 17:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:23.357 17:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:23.616 17:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:16:23.616 17:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:23.616 17:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:23.616 17:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:23.616 17:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:23.616 17:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.616 17:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.616 17:05:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.616 17:05:23 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.616 17:05:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.616 17:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.616 17:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.873 00:16:23.873 17:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:23.873 17:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:23.873 17:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.131 17:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.131 17:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.131 17:05:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.131 17:05:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.131 17:05:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.131 17:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:24.131 { 00:16:24.131 "cntlid": 117, 00:16:24.131 "qid": 0, 00:16:24.131 "state": "enabled", 00:16:24.131 "thread": "nvmf_tgt_poll_group_000", 00:16:24.131 "listen_address": { 00:16:24.131 "trtype": "TCP", 00:16:24.131 "adrfam": "IPv4", 00:16:24.131 "traddr": "10.0.0.2", 00:16:24.131 "trsvcid": "4420" 00:16:24.131 }, 00:16:24.131 "peer_address": { 00:16:24.131 "trtype": "TCP", 00:16:24.131 "adrfam": "IPv4", 00:16:24.131 "traddr": "10.0.0.1", 00:16:24.131 "trsvcid": "33306" 00:16:24.131 }, 00:16:24.131 "auth": { 00:16:24.131 "state": "completed", 00:16:24.131 "digest": "sha512", 00:16:24.131 "dhgroup": "ffdhe3072" 00:16:24.131 } 00:16:24.131 } 00:16:24.131 ]' 00:16:24.131 17:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:24.131 17:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:24.131 17:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:24.131 17:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:24.131 17:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:24.131 17:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.131 17:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.131 17:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.389 17:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YzBhZjI2Y2IyOWJjMDcxMTY1YjEwNzA5NzJlYzIwNTM1YWFhYmNhY2I3MmRmYzYwPYraQA==: --dhchap-ctrl-secret DHHC-1:01:MTM4N2VmMGJjNGRiZDM4MjdkNjY3ODhmMWI2ZWI0MTfgnYOW: 00:16:25.321 17:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.321 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.321 17:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:25.321 17:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.321 17:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.321 17:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.321 17:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:25.321 17:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:25.321 17:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:25.579 17:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:16:25.579 17:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:25.579 17:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:25.579 17:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:25.579 17:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:25.579 17:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.579 17:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:25.579 17:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.579 17:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.579 17:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.579 17:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:25.579 17:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:25.836 00:16:25.836 17:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:25.836 17:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.836 17:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:26.092 17:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.092 17:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.092 17:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.092 17:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.350 17:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.350 17:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:26.350 { 00:16:26.350 "cntlid": 119, 00:16:26.350 "qid": 0, 00:16:26.350 "state": "enabled", 00:16:26.350 "thread": "nvmf_tgt_poll_group_000", 00:16:26.350 "listen_address": { 00:16:26.350 "trtype": "TCP", 00:16:26.350 "adrfam": "IPv4", 00:16:26.350 "traddr": "10.0.0.2", 00:16:26.350 "trsvcid": "4420" 00:16:26.350 }, 00:16:26.350 "peer_address": { 00:16:26.350 "trtype": "TCP", 00:16:26.350 "adrfam": "IPv4", 00:16:26.350 "traddr": "10.0.0.1", 00:16:26.350 "trsvcid": "33332" 00:16:26.350 }, 00:16:26.350 "auth": { 00:16:26.350 "state": "completed", 00:16:26.350 "digest": "sha512", 00:16:26.350 "dhgroup": "ffdhe3072" 00:16:26.350 } 00:16:26.350 } 00:16:26.350 ]' 00:16:26.350 17:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:26.350 17:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:26.350 17:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:26.350 17:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:26.350 17:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:26.350 17:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.350 17:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.350 17:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.607 17:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTllN2IzYWI2MTQzYmI2ZmEyYTIwOTRjNjczZWUzZDJkOGI2MGU1MGNkMjc3MWJlZjQ4ZGU2NGRmY2VmYWFiZBnUjIE=: 00:16:27.540 17:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.540 17:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:27.540 17:05:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.540 17:05:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.540 17:05:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.540 17:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:27.540 17:05:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:27.540 17:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:27.540 17:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:27.797 17:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:16:27.797 17:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:27.797 17:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:27.797 17:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:27.797 17:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:27.797 17:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.797 17:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.797 17:05:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.797 17:05:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.797 17:05:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.797 17:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.797 17:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.054 00:16:28.054 17:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:28.054 17:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.054 17:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:28.311 17:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.311 17:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.311 17:05:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.311 17:05:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.311 17:05:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.311 17:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:28.311 { 00:16:28.311 "cntlid": 121, 00:16:28.311 "qid": 0, 00:16:28.311 "state": "enabled", 00:16:28.311 "thread": "nvmf_tgt_poll_group_000", 00:16:28.311 "listen_address": { 00:16:28.311 "trtype": "TCP", 00:16:28.311 "adrfam": "IPv4", 
00:16:28.311 "traddr": "10.0.0.2", 00:16:28.311 "trsvcid": "4420" 00:16:28.311 }, 00:16:28.311 "peer_address": { 00:16:28.311 "trtype": "TCP", 00:16:28.311 "adrfam": "IPv4", 00:16:28.311 "traddr": "10.0.0.1", 00:16:28.311 "trsvcid": "33348" 00:16:28.311 }, 00:16:28.311 "auth": { 00:16:28.311 "state": "completed", 00:16:28.311 "digest": "sha512", 00:16:28.311 "dhgroup": "ffdhe4096" 00:16:28.311 } 00:16:28.311 } 00:16:28.311 ]' 00:16:28.311 17:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:28.312 17:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:28.312 17:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:28.312 17:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:28.312 17:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:28.567 17:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.567 17:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.567 17:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.823 17:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:NjE1NDYwZjE0OTE1NTM0ZTRhMmE4ZWMxYjljODcyNWIyYTYzOThlM2U5YzUzMmRik7hf0A==: --dhchap-ctrl-secret DHHC-1:03:MzNiNDAwNGU0MzJiYzI4MTNiZWNhNThmMjkzZjc4MDFlMzkxNzc1Y2NkNGY4MzY3Y2RhMTQxM2Q3MWMwMTNiZmmQmLY=: 00:16:29.752 17:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.752 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.752 17:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:29.752 17:05:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.752 17:05:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.752 17:05:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.752 17:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:29.752 17:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:29.752 17:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:29.752 17:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:16:29.752 17:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:29.752 17:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:29.752 17:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:29.752 17:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:29.752 17:05:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.752 17:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.752 17:05:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.752 17:05:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.033 17:05:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.033 17:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.033 17:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.289 00:16:30.289 17:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:30.289 17:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:30.289 17:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.547 17:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.547 17:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.547 17:05:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.547 17:05:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.547 17:05:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.547 17:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:30.547 { 00:16:30.547 "cntlid": 123, 00:16:30.547 "qid": 0, 00:16:30.547 "state": "enabled", 00:16:30.547 "thread": "nvmf_tgt_poll_group_000", 00:16:30.547 "listen_address": { 00:16:30.547 "trtype": "TCP", 00:16:30.547 "adrfam": "IPv4", 00:16:30.547 "traddr": "10.0.0.2", 00:16:30.547 "trsvcid": "4420" 00:16:30.547 }, 00:16:30.547 "peer_address": { 00:16:30.547 "trtype": "TCP", 00:16:30.547 "adrfam": "IPv4", 00:16:30.547 "traddr": "10.0.0.1", 00:16:30.547 "trsvcid": "33372" 00:16:30.547 }, 00:16:30.547 "auth": { 00:16:30.547 "state": "completed", 00:16:30.547 "digest": "sha512", 00:16:30.547 "dhgroup": "ffdhe4096" 00:16:30.547 } 00:16:30.547 } 00:16:30.547 ]' 00:16:30.547 17:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:30.547 17:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:30.547 17:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:30.547 17:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:30.547 17:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:30.547 17:05:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.547 17:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.547 17:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.804 17:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:ZWE0NTQzOTk3MmU4MTNkNzU0Yjk1NWRlZjcwY2EyNzbhkDpr: --dhchap-ctrl-secret DHHC-1:02:MTA3MGQzYmVjYmMyN2I3YzY1ZWRkMzY4MWYzYjVjZmVjN2EzOTczYjAxZjllOTM2SxmZVQ==: 00:16:31.767 17:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.767 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.767 17:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:31.767 17:05:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.767 17:05:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.768 17:05:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.768 17:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:31.768 17:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:31.768 17:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:32.025 17:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:16:32.025 17:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:32.025 17:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:32.025 17:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:32.025 17:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:32.025 17:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.025 17:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.025 17:05:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.025 17:05:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.025 17:05:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.025 17:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.025 17:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.282 00:16:32.282 17:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:32.282 17:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:32.282 17:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.554 17:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.554 17:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.554 17:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.554 17:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.554 17:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.554 17:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:32.554 { 00:16:32.554 "cntlid": 125, 00:16:32.554 "qid": 0, 00:16:32.554 "state": "enabled", 00:16:32.554 "thread": "nvmf_tgt_poll_group_000", 00:16:32.554 "listen_address": { 00:16:32.554 "trtype": "TCP", 00:16:32.554 "adrfam": "IPv4", 00:16:32.554 "traddr": "10.0.0.2", 00:16:32.554 "trsvcid": "4420" 00:16:32.554 }, 00:16:32.554 "peer_address": { 00:16:32.554 "trtype": "TCP", 00:16:32.554 "adrfam": "IPv4", 00:16:32.554 "traddr": "10.0.0.1", 00:16:32.554 "trsvcid": "37666" 00:16:32.554 }, 00:16:32.554 "auth": { 00:16:32.554 "state": "completed", 00:16:32.554 "digest": "sha512", 00:16:32.554 "dhgroup": "ffdhe4096" 00:16:32.554 } 00:16:32.554 } 00:16:32.554 ]' 00:16:32.554 17:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:32.810 17:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:32.811 17:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:32.811 17:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:32.811 17:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:32.811 17:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.811 17:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.811 17:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.067 17:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YzBhZjI2Y2IyOWJjMDcxMTY1YjEwNzA5NzJlYzIwNTM1YWFhYmNhY2I3MmRmYzYwPYraQA==: --dhchap-ctrl-secret DHHC-1:01:MTM4N2VmMGJjNGRiZDM4MjdkNjY3ODhmMWI2ZWI0MTfgnYOW: 00:16:33.998 17:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.998 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:16:33.998 17:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:33.998 17:05:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.998 17:05:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.998 17:05:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.998 17:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:33.998 17:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:33.998 17:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:34.255 17:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:16:34.255 17:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:34.255 17:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:34.255 17:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:34.256 17:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:34.256 17:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.256 17:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:34.256 17:05:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.256 17:05:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.256 17:05:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.256 17:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:34.256 17:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:34.513 00:16:34.513 17:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:34.513 17:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:34.513 17:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.771 17:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.771 17:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.771 17:05:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.771 17:05:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:16:34.771 17:05:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.771 17:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:34.771 { 00:16:34.771 "cntlid": 127, 00:16:34.771 "qid": 0, 00:16:34.771 "state": "enabled", 00:16:34.771 "thread": "nvmf_tgt_poll_group_000", 00:16:34.771 "listen_address": { 00:16:34.771 "trtype": "TCP", 00:16:34.771 "adrfam": "IPv4", 00:16:34.771 "traddr": "10.0.0.2", 00:16:34.771 "trsvcid": "4420" 00:16:34.771 }, 00:16:34.771 "peer_address": { 00:16:34.771 "trtype": "TCP", 00:16:34.771 "adrfam": "IPv4", 00:16:34.771 "traddr": "10.0.0.1", 00:16:34.771 "trsvcid": "37696" 00:16:34.771 }, 00:16:34.771 "auth": { 00:16:34.771 "state": "completed", 00:16:34.771 "digest": "sha512", 00:16:34.771 "dhgroup": "ffdhe4096" 00:16:34.771 } 00:16:34.771 } 00:16:34.771 ]' 00:16:34.771 17:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:34.771 17:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:34.771 17:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:34.771 17:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:34.771 17:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:35.028 17:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.028 17:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.028 17:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.286 17:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTllN2IzYWI2MTQzYmI2ZmEyYTIwOTRjNjczZWUzZDJkOGI2MGU1MGNkMjc3MWJlZjQ4ZGU2NGRmY2VmYWFiZBnUjIE=: 00:16:36.218 17:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.218 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.218 17:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:36.218 17:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.218 17:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.218 17:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.218 17:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:36.218 17:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:36.218 17:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:36.218 17:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:36.218 17:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:16:36.218 17:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:36.218 17:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:36.218 17:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:36.218 17:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:36.218 17:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.218 17:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.218 17:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.218 17:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.218 17:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.218 17:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.218 17:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.782 00:16:36.782 17:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:36.782 17:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.782 17:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:37.039 17:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.039 17:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.039 17:05:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.039 17:05:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.039 17:05:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.039 17:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:37.039 { 00:16:37.039 "cntlid": 129, 00:16:37.039 "qid": 0, 00:16:37.039 "state": "enabled", 00:16:37.040 "thread": "nvmf_tgt_poll_group_000", 00:16:37.040 "listen_address": { 00:16:37.040 "trtype": "TCP", 00:16:37.040 "adrfam": "IPv4", 00:16:37.040 "traddr": "10.0.0.2", 00:16:37.040 "trsvcid": "4420" 00:16:37.040 }, 00:16:37.040 "peer_address": { 00:16:37.040 "trtype": "TCP", 00:16:37.040 "adrfam": "IPv4", 00:16:37.040 "traddr": "10.0.0.1", 00:16:37.040 "trsvcid": "37714" 00:16:37.040 }, 00:16:37.040 "auth": { 00:16:37.040 "state": "completed", 00:16:37.040 "digest": "sha512", 00:16:37.040 "dhgroup": "ffdhe6144" 00:16:37.040 } 00:16:37.040 } 00:16:37.040 ]' 00:16:37.040 17:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:37.040 17:05:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:37.040 17:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:37.297 17:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:37.297 17:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:37.297 17:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.297 17:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.297 17:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.553 17:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:NjE1NDYwZjE0OTE1NTM0ZTRhMmE4ZWMxYjljODcyNWIyYTYzOThlM2U5YzUzMmRik7hf0A==: --dhchap-ctrl-secret DHHC-1:03:MzNiNDAwNGU0MzJiYzI4MTNiZWNhNThmMjkzZjc4MDFlMzkxNzc1Y2NkNGY4MzY3Y2RhMTQxM2Q3MWMwMTNiZmmQmLY=: 00:16:38.485 17:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.485 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.485 17:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:38.485 17:05:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.485 17:05:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.485 17:05:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.485 17:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:38.485 17:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:38.485 17:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:38.742 17:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:16:38.742 17:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:38.742 17:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:38.742 17:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:38.742 17:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:38.742 17:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.742 17:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.742 17:05:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.742 17:05:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.742 17:05:38 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.742 17:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.742 17:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.306 00:16:39.306 17:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:39.306 17:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:39.306 17:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.306 17:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.306 17:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.306 17:05:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.306 17:05:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.306 17:05:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.307 17:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:39.307 { 00:16:39.307 "cntlid": 131, 00:16:39.307 "qid": 0, 00:16:39.307 "state": "enabled", 00:16:39.307 "thread": "nvmf_tgt_poll_group_000", 00:16:39.307 "listen_address": { 00:16:39.307 "trtype": "TCP", 00:16:39.307 "adrfam": "IPv4", 00:16:39.307 "traddr": "10.0.0.2", 00:16:39.307 "trsvcid": "4420" 00:16:39.307 }, 00:16:39.307 "peer_address": { 00:16:39.307 "trtype": "TCP", 00:16:39.307 "adrfam": "IPv4", 00:16:39.307 "traddr": "10.0.0.1", 00:16:39.307 "trsvcid": "37740" 00:16:39.307 }, 00:16:39.307 "auth": { 00:16:39.307 "state": "completed", 00:16:39.307 "digest": "sha512", 00:16:39.307 "dhgroup": "ffdhe6144" 00:16:39.307 } 00:16:39.307 } 00:16:39.307 ]' 00:16:39.564 17:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:39.564 17:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:39.564 17:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:39.564 17:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:39.564 17:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:39.564 17:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.564 17:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.564 17:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.821 17:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:ZWE0NTQzOTk3MmU4MTNkNzU0Yjk1NWRlZjcwY2EyNzbhkDpr: --dhchap-ctrl-secret DHHC-1:02:MTA3MGQzYmVjYmMyN2I3YzY1ZWRkMzY4MWYzYjVjZmVjN2EzOTczYjAxZjllOTM2SxmZVQ==: 00:16:40.752 17:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.752 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.752 17:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:40.752 17:05:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.752 17:05:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.752 17:05:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.752 17:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:40.752 17:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:40.752 17:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:41.009 17:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:16:41.009 17:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:41.009 17:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:41.009 17:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:41.009 17:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:41.009 17:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.009 17:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.009 17:05:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.009 17:05:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.009 17:05:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.009 17:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.009 17:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.573 00:16:41.573 17:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:41.573 17:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:41.573 17:05:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.573 17:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.574 17:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.574 17:05:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.574 17:05:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.831 17:05:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.831 17:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:41.831 { 00:16:41.831 "cntlid": 133, 00:16:41.831 "qid": 0, 00:16:41.831 "state": "enabled", 00:16:41.831 "thread": "nvmf_tgt_poll_group_000", 00:16:41.831 "listen_address": { 00:16:41.831 "trtype": "TCP", 00:16:41.831 "adrfam": "IPv4", 00:16:41.831 "traddr": "10.0.0.2", 00:16:41.831 "trsvcid": "4420" 00:16:41.831 }, 00:16:41.831 "peer_address": { 00:16:41.831 "trtype": "TCP", 00:16:41.831 "adrfam": "IPv4", 00:16:41.831 "traddr": "10.0.0.1", 00:16:41.831 "trsvcid": "51374" 00:16:41.831 }, 00:16:41.831 "auth": { 00:16:41.831 "state": "completed", 00:16:41.831 "digest": "sha512", 00:16:41.831 "dhgroup": "ffdhe6144" 00:16:41.831 } 00:16:41.831 } 00:16:41.831 ]' 00:16:41.831 17:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:41.831 17:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:41.831 17:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:41.831 17:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:41.831 17:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:41.831 17:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.831 17:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.831 17:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.089 17:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YzBhZjI2Y2IyOWJjMDcxMTY1YjEwNzA5NzJlYzIwNTM1YWFhYmNhY2I3MmRmYzYwPYraQA==: --dhchap-ctrl-secret DHHC-1:01:MTM4N2VmMGJjNGRiZDM4MjdkNjY3ODhmMWI2ZWI0MTfgnYOW: 00:16:43.021 17:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.021 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.021 17:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:43.021 17:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.021 17:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.021 17:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.021 17:05:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:43.021 17:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:43.021 17:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:43.278 17:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:16:43.278 17:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:43.278 17:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:43.278 17:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:43.278 17:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:43.278 17:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.278 17:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:43.278 17:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.278 17:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.278 17:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.278 17:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:43.278 17:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:43.842 00:16:43.842 17:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:43.842 17:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:43.842 17:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.101 17:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.101 17:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.101 17:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.101 17:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.101 17:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.101 17:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:44.101 { 00:16:44.101 "cntlid": 135, 00:16:44.101 "qid": 0, 00:16:44.101 "state": "enabled", 00:16:44.101 "thread": "nvmf_tgt_poll_group_000", 00:16:44.101 "listen_address": { 00:16:44.101 "trtype": "TCP", 00:16:44.101 "adrfam": "IPv4", 00:16:44.101 "traddr": "10.0.0.2", 00:16:44.101 "trsvcid": "4420" 00:16:44.101 }, 
00:16:44.101 "peer_address": { 00:16:44.101 "trtype": "TCP", 00:16:44.101 "adrfam": "IPv4", 00:16:44.101 "traddr": "10.0.0.1", 00:16:44.101 "trsvcid": "51388" 00:16:44.101 }, 00:16:44.101 "auth": { 00:16:44.101 "state": "completed", 00:16:44.101 "digest": "sha512", 00:16:44.101 "dhgroup": "ffdhe6144" 00:16:44.101 } 00:16:44.101 } 00:16:44.101 ]' 00:16:44.101 17:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:44.101 17:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:44.101 17:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:44.101 17:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:44.101 17:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:44.101 17:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.101 17:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.101 17:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.403 17:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTllN2IzYWI2MTQzYmI2ZmEyYTIwOTRjNjczZWUzZDJkOGI2MGU1MGNkMjc3MWJlZjQ4ZGU2NGRmY2VmYWFiZBnUjIE=: 00:16:45.359 17:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.359 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.359 17:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:45.359 17:05:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.359 17:05:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.359 17:05:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.359 17:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:45.359 17:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:45.359 17:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:45.359 17:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:45.617 17:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:16:45.617 17:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:45.617 17:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:45.617 17:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:45.617 17:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:45.617 17:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:16:45.617 17:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.617 17:05:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.617 17:05:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.617 17:05:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.617 17:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.617 17:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.550 00:16:46.550 17:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:46.550 17:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:46.550 17:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.550 17:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.550 17:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.550 17:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.550 17:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.550 17:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.550 17:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:46.550 { 00:16:46.550 "cntlid": 137, 00:16:46.550 "qid": 0, 00:16:46.550 "state": "enabled", 00:16:46.550 "thread": "nvmf_tgt_poll_group_000", 00:16:46.550 "listen_address": { 00:16:46.550 "trtype": "TCP", 00:16:46.550 "adrfam": "IPv4", 00:16:46.550 "traddr": "10.0.0.2", 00:16:46.550 "trsvcid": "4420" 00:16:46.550 }, 00:16:46.550 "peer_address": { 00:16:46.550 "trtype": "TCP", 00:16:46.550 "adrfam": "IPv4", 00:16:46.550 "traddr": "10.0.0.1", 00:16:46.550 "trsvcid": "51412" 00:16:46.550 }, 00:16:46.550 "auth": { 00:16:46.550 "state": "completed", 00:16:46.550 "digest": "sha512", 00:16:46.550 "dhgroup": "ffdhe8192" 00:16:46.550 } 00:16:46.550 } 00:16:46.550 ]' 00:16:46.550 17:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:46.807 17:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:46.807 17:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:46.807 17:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:46.807 17:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:46.807 17:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.807 17:05:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.807 17:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.065 17:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:NjE1NDYwZjE0OTE1NTM0ZTRhMmE4ZWMxYjljODcyNWIyYTYzOThlM2U5YzUzMmRik7hf0A==: --dhchap-ctrl-secret DHHC-1:03:MzNiNDAwNGU0MzJiYzI4MTNiZWNhNThmMjkzZjc4MDFlMzkxNzc1Y2NkNGY4MzY3Y2RhMTQxM2Q3MWMwMTNiZmmQmLY=: 00:16:47.998 17:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.998 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.998 17:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:47.998 17:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.998 17:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.998 17:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.998 17:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:47.998 17:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:47.998 17:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:48.256 17:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:16:48.256 17:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:48.256 17:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:48.256 17:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:48.256 17:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:48.256 17:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.256 17:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.256 17:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.256 17:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.256 17:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.256 17:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.256 17:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.188 00:16:49.188 17:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:49.188 17:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:49.188 17:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.445 17:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.445 17:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.445 17:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.445 17:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.445 17:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.445 17:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:49.445 { 00:16:49.445 "cntlid": 139, 00:16:49.445 "qid": 0, 00:16:49.445 "state": "enabled", 00:16:49.445 "thread": "nvmf_tgt_poll_group_000", 00:16:49.445 "listen_address": { 00:16:49.445 "trtype": "TCP", 00:16:49.445 "adrfam": "IPv4", 00:16:49.445 "traddr": "10.0.0.2", 00:16:49.445 "trsvcid": "4420" 00:16:49.445 }, 00:16:49.445 "peer_address": { 00:16:49.445 "trtype": "TCP", 00:16:49.445 "adrfam": "IPv4", 00:16:49.445 "traddr": "10.0.0.1", 00:16:49.445 "trsvcid": "51446" 00:16:49.445 }, 00:16:49.445 "auth": { 00:16:49.445 "state": "completed", 00:16:49.445 "digest": "sha512", 00:16:49.445 "dhgroup": "ffdhe8192" 00:16:49.445 } 00:16:49.445 } 00:16:49.445 ]' 00:16:49.445 17:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:49.445 17:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:49.445 17:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:49.445 17:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:49.445 17:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:49.445 17:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.445 17:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.445 17:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.703 17:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:ZWE0NTQzOTk3MmU4MTNkNzU0Yjk1NWRlZjcwY2EyNzbhkDpr: --dhchap-ctrl-secret DHHC-1:02:MTA3MGQzYmVjYmMyN2I3YzY1ZWRkMzY4MWYzYjVjZmVjN2EzOTczYjAxZjllOTM2SxmZVQ==: 00:16:50.635 17:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.636 17:05:50 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:50.636 17:05:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.636 17:05:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.636 17:05:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.636 17:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:50.636 17:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:50.636 17:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:50.893 17:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:16:50.893 17:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:50.893 17:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:50.893 17:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:50.893 17:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:50.894 17:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.894 17:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.894 17:05:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.894 17:05:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.894 17:05:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.894 17:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.894 17:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.848 00:16:51.848 17:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:51.848 17:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:51.848 17:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.848 17:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.848 17:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.848 17:05:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.848 17:05:51 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:51.848 17:05:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.848 17:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:51.848 { 00:16:51.848 "cntlid": 141, 00:16:51.848 "qid": 0, 00:16:51.848 "state": "enabled", 00:16:51.848 "thread": "nvmf_tgt_poll_group_000", 00:16:51.848 "listen_address": { 00:16:51.848 "trtype": "TCP", 00:16:51.848 "adrfam": "IPv4", 00:16:51.848 "traddr": "10.0.0.2", 00:16:51.848 "trsvcid": "4420" 00:16:51.848 }, 00:16:51.848 "peer_address": { 00:16:51.848 "trtype": "TCP", 00:16:51.848 "adrfam": "IPv4", 00:16:51.848 "traddr": "10.0.0.1", 00:16:51.848 "trsvcid": "46294" 00:16:51.848 }, 00:16:51.848 "auth": { 00:16:51.848 "state": "completed", 00:16:51.848 "digest": "sha512", 00:16:51.848 "dhgroup": "ffdhe8192" 00:16:51.848 } 00:16:51.848 } 00:16:51.848 ]' 00:16:52.106 17:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:52.106 17:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:52.106 17:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:52.106 17:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:52.106 17:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:52.106 17:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.106 17:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.106 17:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.363 17:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YzBhZjI2Y2IyOWJjMDcxMTY1YjEwNzA5NzJlYzIwNTM1YWFhYmNhY2I3MmRmYzYwPYraQA==: --dhchap-ctrl-secret DHHC-1:01:MTM4N2VmMGJjNGRiZDM4MjdkNjY3ODhmMWI2ZWI0MTfgnYOW: 00:16:53.296 17:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.296 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.296 17:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:53.296 17:05:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.296 17:05:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.296 17:05:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.296 17:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:53.296 17:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:53.296 17:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:53.553 17:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:16:53.553 17:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:53.553 17:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:53.553 17:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:53.553 17:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:53.553 17:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.553 17:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:53.553 17:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.553 17:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.553 17:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.553 17:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:53.553 17:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:54.484 00:16:54.484 17:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:54.484 17:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:54.484 17:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.740 17:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.740 17:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.740 17:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.740 17:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.740 17:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.740 17:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:54.740 { 00:16:54.740 "cntlid": 143, 00:16:54.740 "qid": 0, 00:16:54.740 "state": "enabled", 00:16:54.740 "thread": "nvmf_tgt_poll_group_000", 00:16:54.740 "listen_address": { 00:16:54.740 "trtype": "TCP", 00:16:54.740 "adrfam": "IPv4", 00:16:54.740 "traddr": "10.0.0.2", 00:16:54.740 "trsvcid": "4420" 00:16:54.740 }, 00:16:54.740 "peer_address": { 00:16:54.740 "trtype": "TCP", 00:16:54.740 "adrfam": "IPv4", 00:16:54.740 "traddr": "10.0.0.1", 00:16:54.740 "trsvcid": "46328" 00:16:54.740 }, 00:16:54.740 "auth": { 00:16:54.740 "state": "completed", 00:16:54.740 "digest": "sha512", 00:16:54.740 "dhgroup": "ffdhe8192" 00:16:54.740 } 00:16:54.740 } 00:16:54.740 ]' 00:16:54.740 17:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:54.740 17:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:54.740 
17:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:54.740 17:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:54.740 17:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:54.740 17:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.740 17:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.740 17:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.997 17:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTllN2IzYWI2MTQzYmI2ZmEyYTIwOTRjNjczZWUzZDJkOGI2MGU1MGNkMjc3MWJlZjQ4ZGU2NGRmY2VmYWFiZBnUjIE=: 00:16:55.929 17:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.929 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.929 17:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:55.929 17:05:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.929 17:05:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.929 17:05:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.929 17:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:16:55.929 17:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:16:55.929 17:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:16:55.929 17:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:55.929 17:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:55.930 17:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:56.187 17:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:16:56.187 17:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:56.187 17:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:56.187 17:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:56.187 17:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:56.187 17:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.187 17:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:16:56.187 17:05:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.187 17:05:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.187 17:05:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.187 17:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.187 17:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.118 00:16:57.118 17:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:57.118 17:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:57.118 17:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.376 17:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.376 17:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.376 17:05:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.376 17:05:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.376 17:05:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.376 17:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:57.376 { 00:16:57.376 "cntlid": 145, 00:16:57.376 "qid": 0, 00:16:57.376 "state": "enabled", 00:16:57.376 "thread": "nvmf_tgt_poll_group_000", 00:16:57.376 "listen_address": { 00:16:57.376 "trtype": "TCP", 00:16:57.376 "adrfam": "IPv4", 00:16:57.376 "traddr": "10.0.0.2", 00:16:57.376 "trsvcid": "4420" 00:16:57.376 }, 00:16:57.376 "peer_address": { 00:16:57.376 "trtype": "TCP", 00:16:57.376 "adrfam": "IPv4", 00:16:57.376 "traddr": "10.0.0.1", 00:16:57.376 "trsvcid": "46350" 00:16:57.376 }, 00:16:57.376 "auth": { 00:16:57.376 "state": "completed", 00:16:57.376 "digest": "sha512", 00:16:57.376 "dhgroup": "ffdhe8192" 00:16:57.376 } 00:16:57.376 } 00:16:57.376 ]' 00:16:57.376 17:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:57.376 17:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:57.376 17:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:57.376 17:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:57.376 17:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:57.633 17:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.633 17:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.633 17:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.891 17:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:NjE1NDYwZjE0OTE1NTM0ZTRhMmE4ZWMxYjljODcyNWIyYTYzOThlM2U5YzUzMmRik7hf0A==: --dhchap-ctrl-secret DHHC-1:03:MzNiNDAwNGU0MzJiYzI4MTNiZWNhNThmMjkzZjc4MDFlMzkxNzc1Y2NkNGY4MzY3Y2RhMTQxM2Q3MWMwMTNiZmmQmLY=: 00:16:58.822 17:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.822 17:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:58.822 17:05:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.822 17:05:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.822 17:05:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.822 17:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 00:16:58.822 17:05:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.822 17:05:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.822 17:05:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.822 17:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:58.822 17:05:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:58.822 17:05:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:58.822 17:05:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:58.822 17:05:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:58.822 17:05:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:58.822 17:05:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:58.822 17:05:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:58.822 17:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:16:59.385 request: 00:16:59.385 { 00:16:59.385 "name": "nvme0", 00:16:59.385 "trtype": "tcp", 00:16:59.385 "traddr": "10.0.0.2", 00:16:59.385 "adrfam": "ipv4", 00:16:59.385 "trsvcid": "4420", 00:16:59.385 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:59.385 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:16:59.385 "prchk_reftag": false, 00:16:59.385 "prchk_guard": false, 00:16:59.385 "hdgst": false, 00:16:59.385 "ddgst": false, 00:16:59.385 "dhchap_key": "key2", 00:16:59.385 "method": "bdev_nvme_attach_controller", 00:16:59.385 "req_id": 1 00:16:59.385 } 00:16:59.385 Got JSON-RPC error response 00:16:59.385 response: 00:16:59.385 { 00:16:59.385 "code": -5, 00:16:59.385 "message": "Input/output error" 00:16:59.385 } 00:16:59.385 17:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:59.385 17:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:59.385 17:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:59.385 17:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:59.385 17:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:59.385 17:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.385 17:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.641 17:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.641 17:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.641 17:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.641 17:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.641 17:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.641 17:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:59.641 17:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:59.641 17:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:59.641 17:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:59.641 17:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:59.641 17:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:59.641 17:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:59.641 17:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:59.641 17:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:00.204 request: 00:17:00.204 { 00:17:00.204 "name": "nvme0", 00:17:00.204 "trtype": "tcp", 00:17:00.204 "traddr": "10.0.0.2", 00:17:00.204 "adrfam": "ipv4", 00:17:00.204 "trsvcid": "4420", 00:17:00.204 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:00.204 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:00.204 "prchk_reftag": false, 00:17:00.204 "prchk_guard": false, 00:17:00.204 "hdgst": false, 00:17:00.204 "ddgst": false, 00:17:00.204 "dhchap_key": "key1", 00:17:00.204 "dhchap_ctrlr_key": "ckey2", 00:17:00.204 "method": "bdev_nvme_attach_controller", 00:17:00.204 "req_id": 1 00:17:00.204 } 00:17:00.204 Got JSON-RPC error response 00:17:00.204 response: 00:17:00.204 { 00:17:00.204 "code": -5, 00:17:00.204 "message": "Input/output error" 00:17:00.204 } 00:17:00.204 17:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:00.205 17:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:00.205 17:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:00.205 17:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:00.205 17:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:00.205 17:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.205 17:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.461 17:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.461 17:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 00:17:00.461 17:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.461 17:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.461 17:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.461 17:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.461 17:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:00.461 17:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.461 17:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:17:00.461 17:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:00.461 17:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:00.461 17:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:00.461 17:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.461 17:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.026 request: 00:17:01.026 { 00:17:01.026 "name": "nvme0", 00:17:01.026 "trtype": "tcp", 00:17:01.026 "traddr": "10.0.0.2", 00:17:01.026 "adrfam": "ipv4", 00:17:01.026 "trsvcid": "4420", 00:17:01.026 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:01.026 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:01.026 "prchk_reftag": false, 00:17:01.026 "prchk_guard": false, 00:17:01.026 "hdgst": false, 00:17:01.026 "ddgst": false, 00:17:01.026 "dhchap_key": "key1", 00:17:01.026 "dhchap_ctrlr_key": "ckey1", 00:17:01.026 "method": "bdev_nvme_attach_controller", 00:17:01.026 "req_id": 1 00:17:01.026 } 00:17:01.026 Got JSON-RPC error response 00:17:01.026 response: 00:17:01.026 { 00:17:01.026 "code": -5, 00:17:01.026 "message": "Input/output error" 00:17:01.026 } 00:17:01.026 17:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:01.026 17:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:01.026 17:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:01.026 17:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:01.026 17:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:01.026 17:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.026 17:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.026 17:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.026 17:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1114597 00:17:01.026 17:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1114597 ']' 00:17:01.026 17:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1114597 00:17:01.026 17:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:01.026 17:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:01.026 17:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1114597 00:17:01.026 17:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:01.026 17:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:17:01.026 17:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1114597' 00:17:01.026 killing process with pid 1114597 00:17:01.026 17:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1114597 00:17:01.026 17:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1114597 00:17:01.283 17:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:01.283 17:06:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:01.283 17:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:01.283 17:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.283 17:06:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1136057 00:17:01.283 17:06:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1136057 00:17:01.283 17:06:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:01.541 17:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1136057 ']' 00:17:01.541 17:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.541 17:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:01.541 17:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:01.541 17:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:01.541 17:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.541 17:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:01.541 17:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:01.541 17:06:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:01.541 17:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:01.541 17:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.797 17:06:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:01.797 17:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:01.797 17:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1136057 00:17:01.797 17:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1136057 ']' 00:17:01.797 17:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.797 17:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:01.797 17:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:01.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
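At this point the first target process (pid 1114597) is killed and a fresh nvmf_tgt is started with --wait-for-rpc and the nvmf_auth log flag before the error-path checks that follow. A minimal sketch of that bring-up, assuming the default /var/tmp/spdk.sock RPC socket and the subsystem/host NQNs used throughout this run; framework_start_init is the standard SPDK RPC for completing init after --wait-for-rpc and is an assumption here, not a command copied from this log:

    # Sketch only: complete framework init on the restarted target, then
    # re-register the host on the subsystem with a DH-HMAC-CHAP key.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path as seen in this log
    SUBSYS=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02

    $RPC framework_start_init                                  # target was started with --wait-for-rpc
    $RPC nvmf_subsystem_add_host "$SUBSYS" "$HOSTNQN" --dhchap-key key3   # as in the log above

The log itself continues below with the harness waiting on /var/tmp/spdk.sock.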
00:17:01.798 17:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:01.798 17:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.054 17:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:02.054 17:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:02.054 17:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:17:02.054 17:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.054 17:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.054 17:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.054 17:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:17:02.054 17:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:02.054 17:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:02.054 17:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:02.054 17:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:02.054 17:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.054 17:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:02.054 17:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.054 17:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.054 17:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.055 17:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:02.055 17:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:02.983 00:17:02.983 17:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:02.983 17:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:02.983 17:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.240 17:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.240 17:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.240 17:06:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.240 17:06:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.240 17:06:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.240 17:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:03.240 { 00:17:03.240 
"cntlid": 1, 00:17:03.240 "qid": 0, 00:17:03.240 "state": "enabled", 00:17:03.240 "thread": "nvmf_tgt_poll_group_000", 00:17:03.240 "listen_address": { 00:17:03.240 "trtype": "TCP", 00:17:03.240 "adrfam": "IPv4", 00:17:03.240 "traddr": "10.0.0.2", 00:17:03.240 "trsvcid": "4420" 00:17:03.240 }, 00:17:03.240 "peer_address": { 00:17:03.240 "trtype": "TCP", 00:17:03.240 "adrfam": "IPv4", 00:17:03.240 "traddr": "10.0.0.1", 00:17:03.240 "trsvcid": "36282" 00:17:03.240 }, 00:17:03.240 "auth": { 00:17:03.240 "state": "completed", 00:17:03.240 "digest": "sha512", 00:17:03.240 "dhgroup": "ffdhe8192" 00:17:03.240 } 00:17:03.240 } 00:17:03.240 ]' 00:17:03.240 17:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:03.240 17:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:03.240 17:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:03.240 17:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:03.240 17:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:03.240 17:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.240 17:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.240 17:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.496 17:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTllN2IzYWI2MTQzYmI2ZmEyYTIwOTRjNjczZWUzZDJkOGI2MGU1MGNkMjc3MWJlZjQ4ZGU2NGRmY2VmYWFiZBnUjIE=: 00:17:04.425 17:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.425 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.425 17:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:04.425 17:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.425 17:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.425 17:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.425 17:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:04.425 17:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.425 17:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.425 17:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.425 17:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:04.425 17:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:04.682 17:06:04 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:04.683 17:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:04.683 17:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:04.683 17:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:04.683 17:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:04.683 17:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:04.683 17:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:04.683 17:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:04.683 17:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:04.939 request: 00:17:04.939 { 00:17:04.939 "name": "nvme0", 00:17:04.939 "trtype": "tcp", 00:17:04.939 "traddr": "10.0.0.2", 00:17:04.939 "adrfam": "ipv4", 00:17:04.939 "trsvcid": "4420", 00:17:04.939 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:04.939 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:04.939 "prchk_reftag": false, 00:17:04.939 "prchk_guard": false, 00:17:04.939 "hdgst": false, 00:17:04.939 "ddgst": false, 00:17:04.939 "dhchap_key": "key3", 00:17:04.939 "method": "bdev_nvme_attach_controller", 00:17:04.939 "req_id": 1 00:17:04.939 } 00:17:04.939 Got JSON-RPC error response 00:17:04.939 response: 00:17:04.939 { 00:17:04.939 "code": -5, 00:17:04.939 "message": "Input/output error" 00:17:04.939 } 00:17:04.939 17:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:04.939 17:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:04.939 17:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:04.939 17:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:04.939 17:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:17:04.939 17:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:17:04.939 17:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:04.939 17:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:05.196 17:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:05.196 17:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:05.196 17:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:05.196 17:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:05.196 17:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:05.196 17:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:05.196 17:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:05.196 17:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:05.196 17:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:05.454 request: 00:17:05.454 { 00:17:05.454 "name": "nvme0", 00:17:05.454 "trtype": "tcp", 00:17:05.454 "traddr": "10.0.0.2", 00:17:05.454 "adrfam": "ipv4", 00:17:05.454 "trsvcid": "4420", 00:17:05.454 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:05.454 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:05.454 "prchk_reftag": false, 00:17:05.454 "prchk_guard": false, 00:17:05.454 "hdgst": false, 00:17:05.454 "ddgst": false, 00:17:05.454 "dhchap_key": "key3", 00:17:05.454 "method": "bdev_nvme_attach_controller", 00:17:05.454 "req_id": 1 00:17:05.454 } 00:17:05.454 Got JSON-RPC error response 00:17:05.454 response: 00:17:05.454 { 00:17:05.454 "code": -5, 00:17:05.454 "message": "Input/output error" 00:17:05.454 } 00:17:05.454 17:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:05.454 17:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:05.454 17:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:05.454 17:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:05.454 17:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:17:05.454 17:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:17:05.454 17:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:17:05.454 17:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:05.454 17:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:05.454 17:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:05.711 17:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:05.711 17:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.711 17:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.711 17:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.711 17:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:05.711 17:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.711 17:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.969 17:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.969 17:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:05.969 17:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:05.969 17:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:05.969 17:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:05.969 17:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:05.969 17:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:05.969 17:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:05.969 17:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:05.969 17:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:06.228 request: 00:17:06.228 { 00:17:06.228 "name": "nvme0", 00:17:06.228 "trtype": "tcp", 00:17:06.228 "traddr": "10.0.0.2", 00:17:06.228 "adrfam": "ipv4", 00:17:06.228 "trsvcid": "4420", 00:17:06.228 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:06.228 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:06.228 "prchk_reftag": false, 00:17:06.228 "prchk_guard": false, 00:17:06.228 "hdgst": false, 00:17:06.228 "ddgst": false, 00:17:06.228 
"dhchap_key": "key0", 00:17:06.228 "dhchap_ctrlr_key": "key1", 00:17:06.228 "method": "bdev_nvme_attach_controller", 00:17:06.228 "req_id": 1 00:17:06.228 } 00:17:06.228 Got JSON-RPC error response 00:17:06.228 response: 00:17:06.228 { 00:17:06.228 "code": -5, 00:17:06.228 "message": "Input/output error" 00:17:06.228 } 00:17:06.228 17:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:06.228 17:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:06.228 17:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:06.228 17:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:06.228 17:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:06.228 17:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:06.519 00:17:06.519 17:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:17:06.519 17:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:17:06.519 17:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.800 17:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.800 17:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.800 17:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.057 17:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:17:07.057 17:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:17:07.057 17:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1114745 00:17:07.057 17:06:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1114745 ']' 00:17:07.057 17:06:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1114745 00:17:07.057 17:06:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:07.057 17:06:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:07.057 17:06:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1114745 00:17:07.057 17:06:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:07.057 17:06:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:07.057 17:06:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1114745' 00:17:07.057 killing process with pid 1114745 00:17:07.057 17:06:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1114745 00:17:07.057 17:06:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1114745 
00:17:07.314 17:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:07.314 17:06:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:07.314 17:06:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:17:07.314 17:06:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:07.314 17:06:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:17:07.314 17:06:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:07.314 17:06:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:07.314 rmmod nvme_tcp 00:17:07.572 rmmod nvme_fabrics 00:17:07.572 rmmod nvme_keyring 00:17:07.572 17:06:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:07.572 17:06:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:17:07.572 17:06:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:17:07.572 17:06:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1136057 ']' 00:17:07.572 17:06:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1136057 00:17:07.572 17:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1136057 ']' 00:17:07.572 17:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1136057 00:17:07.572 17:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:07.572 17:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:07.572 17:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1136057 00:17:07.572 17:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:07.572 17:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:07.572 17:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1136057' 00:17:07.572 killing process with pid 1136057 00:17:07.572 17:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1136057 00:17:07.572 17:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1136057 00:17:07.829 17:06:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:07.829 17:06:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:07.829 17:06:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:07.829 17:06:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:07.829 17:06:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:07.829 17:06:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:07.829 17:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:07.829 17:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:09.733 17:06:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:09.733 17:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.7yc /tmp/spdk.key-sha256.Oll /tmp/spdk.key-sha384.tkn /tmp/spdk.key-sha512.LQ1 /tmp/spdk.key-sha512.hcS /tmp/spdk.key-sha384.Nxp /tmp/spdk.key-sha256.Qjq '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:09.733 00:17:09.733 real 3m2.013s 00:17:09.733 user 7m5.833s 00:17:09.733 sys 0m25.455s 00:17:09.733 17:06:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:09.733 17:06:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.733 ************************************ 00:17:09.733 END TEST nvmf_auth_target 00:17:09.733 ************************************ 00:17:09.733 17:06:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:09.733 17:06:09 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:17:09.733 17:06:09 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:09.733 17:06:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:17:09.733 17:06:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:09.733 17:06:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:09.990 ************************************ 00:17:09.990 START TEST nvmf_bdevio_no_huge 00:17:09.990 ************************************ 00:17:09.990 17:06:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:09.990 * Looking for test storage... 00:17:09.990 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:09.990 17:06:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:09.990 17:06:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:09.990 17:06:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:09.990 17:06:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:09.990 17:06:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:09.990 17:06:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:09.990 17:06:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:09.990 17:06:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:09.990 17:06:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:09.990 17:06:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:09.990 17:06:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:09.990 17:06:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:09.990 17:06:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:09.990 17:06:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:17:09.990 17:06:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:09.990 17:06:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:09.990 17:06:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:09.990 17:06:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
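The host identity reused in all of the attach calls comes from the NVME_HOSTNQN/NVME_HOSTID pair traced here. One way to derive the same values by hand (a sketch assuming nvme-cli is installed; common.sh's exact parsing may differ):

    NVME_HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}       # keep only the uuid part
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")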
00:17:09.990 17:06:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:09.990 17:06:09 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:09.990 17:06:09 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:09.990 17:06:09 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:09.990 17:06:09 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.990 17:06:09 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.990 17:06:09 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.991 17:06:09 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:09.991 17:06:09 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.991 17:06:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:17:09.991 17:06:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:09.991 17:06:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:09.991 17:06:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:09.991 17:06:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:09.991 17:06:09 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:09.991 17:06:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:09.991 17:06:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:09.991 17:06:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:09.991 17:06:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:09.991 17:06:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:09.991 17:06:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:09.991 17:06:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:09.991 17:06:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:09.991 17:06:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:09.991 17:06:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:09.991 17:06:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:09.991 17:06:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:09.991 17:06:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:09.991 17:06:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:09.991 17:06:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:09.991 17:06:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:09.991 17:06:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:17:09.991 17:06:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:17:11.903 Found 0000:84:00.0 (0x8086 - 0x159b) 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:17:11.903 Found 0000:84:00.1 (0x8086 - 0x159b) 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:11.903 
17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:17:11.903 Found net devices under 0000:84:00.0: cvl_0_0 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:17:11.903 Found net devices under 0000:84:00.1: cvl_0_1 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:11.903 17:06:11 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:11.903 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:12.161 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:12.161 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:12.161 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:12.161 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:12.161 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:12.161 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:12.161 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:12.161 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:12.161 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:17:12.161 00:17:12.161 --- 10.0.0.2 ping statistics --- 00:17:12.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:12.161 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:17:12.161 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:12.161 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:12.161 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:17:12.161 00:17:12.161 --- 10.0.0.1 ping statistics --- 00:17:12.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:12.161 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:17:12.161 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:12.161 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:17:12.161 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:12.161 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:12.161 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:12.161 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:12.161 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:12.161 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:12.161 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:12.161 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:12.161 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:12.161 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:12.161 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:12.161 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1139279 00:17:12.161 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:12.161 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1139279 00:17:12.161 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 1139279 ']' 00:17:12.161 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.161 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:12.162 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:12.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:12.162 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:12.162 17:06:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:12.162 [2024-07-12 17:06:11.764260] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:17:12.162 [2024-07-12 17:06:11.764361] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:12.162 [2024-07-12 17:06:11.840608] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:12.419 [2024-07-12 17:06:11.949822] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:12.419 [2024-07-12 17:06:11.949885] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:12.419 [2024-07-12 17:06:11.949914] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:12.419 [2024-07-12 17:06:11.949926] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:12.419 [2024-07-12 17:06:11.949937] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
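Stripped of the xtrace prefixes, the namespace plumbing nvmf_tcp_init performed above is roughly the following (a sketch; the cvl_0_0/cvl_0_1 interface names, the 10.0.0.0/24 addresses and the target flags are taken from the trace):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
    # The target is then started inside the namespace, without hugepages:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78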
00:17:12.419 [2024-07-12 17:06:11.950028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:12.419 [2024-07-12 17:06:11.950132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:12.419 [2024-07-12 17:06:11.950095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:12.419 [2024-07-12 17:06:11.953765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:12.419 17:06:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:12.419 17:06:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:17:12.419 17:06:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:12.419 17:06:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:12.419 17:06:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:12.419 17:06:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:12.419 17:06:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:12.419 17:06:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.419 17:06:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:12.419 [2024-07-12 17:06:12.077265] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:12.419 17:06:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.419 17:06:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:12.419 17:06:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.419 17:06:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:12.419 Malloc0 00:17:12.419 17:06:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.419 17:06:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:12.420 17:06:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.420 17:06:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:12.420 17:06:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.420 17:06:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:12.420 17:06:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.420 17:06:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:12.420 17:06:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.420 17:06:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:12.420 17:06:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.420 17:06:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:12.677 [2024-07-12 17:06:12.114606] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:12.677 17:06:12 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.677 17:06:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:12.677 17:06:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:12.677 17:06:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:17:12.677 17:06:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:17:12.677 17:06:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:12.677 17:06:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:12.677 { 00:17:12.677 "params": { 00:17:12.677 "name": "Nvme$subsystem", 00:17:12.677 "trtype": "$TEST_TRANSPORT", 00:17:12.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:12.677 "adrfam": "ipv4", 00:17:12.677 "trsvcid": "$NVMF_PORT", 00:17:12.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:12.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:12.677 "hdgst": ${hdgst:-false}, 00:17:12.677 "ddgst": ${ddgst:-false} 00:17:12.677 }, 00:17:12.677 "method": "bdev_nvme_attach_controller" 00:17:12.677 } 00:17:12.677 EOF 00:17:12.677 )") 00:17:12.677 17:06:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:17:12.677 17:06:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:17:12.677 17:06:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:17:12.677 17:06:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:12.677 "params": { 00:17:12.677 "name": "Nvme1", 00:17:12.677 "trtype": "tcp", 00:17:12.677 "traddr": "10.0.0.2", 00:17:12.677 "adrfam": "ipv4", 00:17:12.677 "trsvcid": "4420", 00:17:12.677 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:12.677 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:12.677 "hdgst": false, 00:17:12.677 "ddgst": false 00:17:12.677 }, 00:17:12.677 "method": "bdev_nvme_attach_controller" 00:17:12.677 }' 00:17:12.677 [2024-07-12 17:06:12.158189] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
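The subsystem setup performed above through rpc_cmd corresponds to the following rpc.py calls against the freshly started target (a sketch assuming the default /var/tmp/spdk.sock RPC socket; sizes, NQNs and the listener address are the ones from the trace):

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # bdevio is then handed the attach parameters as generated JSON on /dev/fd/62:
    # ./test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024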
00:17:12.677 [2024-07-12 17:06:12.158278] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1139422 ] 00:17:12.677 [2024-07-12 17:06:12.221337] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:12.677 [2024-07-12 17:06:12.332866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:12.677 [2024-07-12 17:06:12.332915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:12.677 [2024-07-12 17:06:12.332918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.935 I/O targets: 00:17:12.935 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:12.935 00:17:12.935 00:17:12.935 CUnit - A unit testing framework for C - Version 2.1-3 00:17:12.935 http://cunit.sourceforge.net/ 00:17:12.935 00:17:12.935 00:17:12.935 Suite: bdevio tests on: Nvme1n1 00:17:12.935 Test: blockdev write read block ...passed 00:17:12.935 Test: blockdev write zeroes read block ...passed 00:17:12.935 Test: blockdev write zeroes read no split ...passed 00:17:13.192 Test: blockdev write zeroes read split ...passed 00:17:13.192 Test: blockdev write zeroes read split partial ...passed 00:17:13.192 Test: blockdev reset ...[2024-07-12 17:06:12.652906] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:13.192 [2024-07-12 17:06:12.653031] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ad5670 (9): Bad file descriptor 00:17:13.192 [2024-07-12 17:06:12.669553] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:13.192 passed 00:17:13.192 Test: blockdev write read 8 blocks ...passed 00:17:13.192 Test: blockdev write read size > 128k ...passed 00:17:13.192 Test: blockdev write read invalid size ...passed 00:17:13.192 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:13.192 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:13.192 Test: blockdev write read max offset ...passed 00:17:13.192 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:13.192 Test: blockdev writev readv 8 blocks ...passed 00:17:13.192 Test: blockdev writev readv 30 x 1block ...passed 00:17:13.450 Test: blockdev writev readv block ...passed 00:17:13.450 Test: blockdev writev readv size > 128k ...passed 00:17:13.450 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:13.450 Test: blockdev comparev and writev ...[2024-07-12 17:06:12.924991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:13.450 [2024-07-12 17:06:12.925027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:13.450 [2024-07-12 17:06:12.925051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:13.450 [2024-07-12 17:06:12.925068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.450 [2024-07-12 17:06:12.925527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:13.450 [2024-07-12 17:06:12.925552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:13.450 [2024-07-12 17:06:12.925574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:13.450 [2024-07-12 17:06:12.925590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:13.450 [2024-07-12 17:06:12.926030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:13.450 [2024-07-12 17:06:12.926055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:13.450 [2024-07-12 17:06:12.926079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:13.450 [2024-07-12 17:06:12.926103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:13.450 [2024-07-12 17:06:12.926550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:13.450 [2024-07-12 17:06:12.926574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:13.450 [2024-07-12 17:06:12.926595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:13.450 [2024-07-12 17:06:12.926610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:13.450 passed 00:17:13.450 Test: blockdev nvme passthru rw ...passed 00:17:13.450 Test: blockdev nvme passthru vendor specific ...[2024-07-12 17:06:13.010224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:13.450 [2024-07-12 17:06:13.010255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:13.450 [2024-07-12 17:06:13.010544] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:13.450 [2024-07-12 17:06:13.010566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:13.450 [2024-07-12 17:06:13.010786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:13.450 [2024-07-12 17:06:13.010810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:13.450 [2024-07-12 17:06:13.010962] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:13.450 [2024-07-12 17:06:13.010984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:13.450 passed 00:17:13.450 Test: blockdev nvme admin passthru ...passed 00:17:13.450 Test: blockdev copy ...passed 00:17:13.450 00:17:13.450 Run Summary: Type Total Ran Passed Failed Inactive 00:17:13.450 suites 1 1 n/a 0 0 00:17:13.450 tests 23 23 23 0 0 00:17:13.450 asserts 152 152 152 0 n/a 00:17:13.450 00:17:13.450 Elapsed time = 1.145 seconds 00:17:14.015 17:06:13 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:14.015 17:06:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.015 17:06:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:14.015 17:06:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.015 17:06:13 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:14.015 17:06:13 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:14.015 17:06:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:14.015 17:06:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:17:14.015 17:06:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:14.015 17:06:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:17:14.015 17:06:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:14.015 17:06:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:14.015 rmmod nvme_tcp 00:17:14.015 rmmod nvme_fabrics 00:17:14.015 rmmod nvme_keyring 00:17:14.016 17:06:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:14.016 17:06:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:17:14.016 17:06:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:17:14.016 17:06:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1139279 ']' 00:17:14.016 17:06:13 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1139279 00:17:14.016 17:06:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 1139279 ']' 00:17:14.016 17:06:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 1139279 00:17:14.016 17:06:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:17:14.016 17:06:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:14.016 17:06:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1139279 00:17:14.016 17:06:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:17:14.016 17:06:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:17:14.016 17:06:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1139279' 00:17:14.016 killing process with pid 1139279 00:17:14.016 17:06:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 1139279 00:17:14.016 17:06:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 1139279 00:17:14.274 17:06:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:14.274 17:06:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:14.274 17:06:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:14.274 17:06:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:14.274 17:06:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:14.274 17:06:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:14.274 17:06:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:14.274 17:06:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:16.809 17:06:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:16.809 00:17:16.809 real 0m6.526s 00:17:16.809 user 0m10.330s 00:17:16.809 sys 0m2.487s 00:17:16.809 17:06:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:16.809 17:06:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:16.809 ************************************ 00:17:16.809 END TEST nvmf_bdevio_no_huge 00:17:16.809 ************************************ 00:17:16.809 17:06:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:16.809 17:06:15 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:16.809 17:06:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:16.809 17:06:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:16.809 17:06:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:16.809 ************************************ 00:17:16.809 START TEST nvmf_tls 00:17:16.809 ************************************ 00:17:16.809 17:06:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:16.809 * Looking for test storage... 
00:17:16.809 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:16.809 17:06:16 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:16.809 17:06:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:16.809 17:06:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:16.809 17:06:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:16.809 17:06:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:16.809 17:06:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:16.809 17:06:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:16.809 17:06:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:16.809 17:06:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:16.809 17:06:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:16.809 17:06:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:16.809 17:06:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:16.809 17:06:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:16.809 17:06:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:17:16.809 17:06:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:16.809 17:06:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:16.809 17:06:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:16.809 17:06:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:16.809 17:06:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:16.809 17:06:16 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:16.809 17:06:16 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:16.809 17:06:16 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:16.809 17:06:16 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.809 17:06:16 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.809 17:06:16 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.809 17:06:16 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:16.810 17:06:16 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.810 17:06:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:17:16.810 17:06:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:16.810 17:06:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:16.810 17:06:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:16.810 17:06:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:16.810 17:06:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:16.810 17:06:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:16.810 17:06:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:16.810 17:06:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:16.810 17:06:16 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:16.810 17:06:16 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:17:16.810 17:06:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:16.810 17:06:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:16.810 17:06:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:16.810 17:06:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:16.810 17:06:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:16.810 17:06:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:16.810 17:06:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:16.810 17:06:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:16.810 17:06:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:16.810 17:06:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:16.810 17:06:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:17:16.810 17:06:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:17:18.725 
17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:17:18.725 Found 0000:84:00.0 (0x8086 - 0x159b) 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:17:18.725 Found 0000:84:00.1 (0x8086 - 0x159b) 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:17:18.725 Found net devices under 0000:84:00.0: cvl_0_0 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:17:18.725 Found net devices under 0000:84:00.1: cvl_0_1 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:18.725 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:18.725 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:17:18.725 00:17:18.725 --- 10.0.0.2 ping statistics --- 00:17:18.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.725 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:18.725 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:18.725 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:17:18.725 00:17:18.725 --- 10.0.0.1 ping statistics --- 00:17:18.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.725 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1141506 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1141506 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1141506 ']' 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:18.725 17:06:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:18.725 [2024-07-12 17:06:18.370986] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:17:18.726 [2024-07-12 17:06:18.371082] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:18.726 EAL: No free 2048 kB hugepages reported on node 1 00:17:18.984 [2024-07-12 17:06:18.438819] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.984 [2024-07-12 17:06:18.550684] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:18.984 [2024-07-12 17:06:18.550762] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:18.984 [2024-07-12 17:06:18.550787] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:18.984 [2024-07-12 17:06:18.550798] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:18.984 [2024-07-12 17:06:18.550807] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:18.984 [2024-07-12 17:06:18.550846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:18.984 17:06:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:18.984 17:06:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:18.984 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:18.984 17:06:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:18.984 17:06:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:18.984 17:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:18.984 17:06:18 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:17:18.984 17:06:18 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:19.242 true 00:17:19.242 17:06:18 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:19.242 17:06:18 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:17:19.500 17:06:19 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:17:19.500 17:06:19 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:17:19.500 17:06:19 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:19.757 17:06:19 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:19.757 17:06:19 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:17:20.014 17:06:19 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:17:20.014 17:06:19 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:17:20.014 17:06:19 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:20.272 17:06:19 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:20.272 17:06:19 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:17:20.528 17:06:20 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:17:20.528 17:06:20 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:17:20.528 17:06:20 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:20.528 17:06:20 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:17:20.785 17:06:20 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:17:20.785 17:06:20 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:17:20.785 17:06:20 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:21.043 17:06:20 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:21.043 17:06:20 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:17:21.301 17:06:20 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:17:21.301 17:06:20 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:17:21.301 17:06:20 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:21.559 17:06:21 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:21.559 17:06:21 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:17:21.817 17:06:21 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:17:21.817 17:06:21 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:17:21.817 17:06:21 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:21.817 17:06:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:21.817 17:06:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:21.817 17:06:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:21.817 17:06:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:17:21.817 17:06:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:21.817 17:06:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:21.817 17:06:21 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:21.817 17:06:21 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:21.817 17:06:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:21.817 17:06:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:21.817 17:06:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:21.817 17:06:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:17:21.817 17:06:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:21.817 17:06:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:22.074 17:06:21 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:22.074 17:06:21 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:17:22.074 17:06:21 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.MNIkLBCuiZ 00:17:22.074 17:06:21 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:22.074 17:06:21 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.sgZw8IIqA0 00:17:22.074 17:06:21 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:22.074 17:06:21 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:22.074 17:06:21 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.MNIkLBCuiZ 00:17:22.074 17:06:21 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.sgZw8IIqA0 00:17:22.074 17:06:21 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:17:22.331 17:06:21 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:22.589 17:06:22 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.MNIkLBCuiZ 00:17:22.590 17:06:22 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.MNIkLBCuiZ 00:17:22.590 17:06:22 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:22.847 [2024-07-12 17:06:22.407956] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:22.847 17:06:22 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:23.104 17:06:22 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:23.361 [2024-07-12 17:06:22.945415] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:23.361 [2024-07-12 17:06:22.945692] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:23.361 17:06:22 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:23.618 malloc0 00:17:23.618 17:06:23 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:23.875 17:06:23 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.MNIkLBCuiZ 00:17:24.133 [2024-07-12 17:06:23.678812] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:24.133 17:06:23 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.MNIkLBCuiZ 00:17:24.133 EAL: No free 2048 kB hugepages reported on node 1 00:17:34.093 Initializing NVMe Controllers 00:17:34.093 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:34.093 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:34.093 Initialization complete. Launching workers. 
00:17:34.093 ======================================================== 00:17:34.093 Latency(us) 00:17:34.093 Device Information : IOPS MiB/s Average min max 00:17:34.093 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8727.73 34.09 7334.90 1020.13 8733.66 00:17:34.093 ======================================================== 00:17:34.093 Total : 8727.73 34.09 7334.90 1020.13 8733.66 00:17:34.093 00:17:34.352 17:06:33 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.MNIkLBCuiZ 00:17:34.352 17:06:33 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:34.352 17:06:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:34.352 17:06:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:34.352 17:06:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.MNIkLBCuiZ' 00:17:34.352 17:06:33 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:34.352 17:06:33 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1143351 00:17:34.352 17:06:33 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:34.352 17:06:33 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:34.352 17:06:33 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1143351 /var/tmp/bdevperf.sock 00:17:34.352 17:06:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1143351 ']' 00:17:34.352 17:06:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:34.352 17:06:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:34.352 17:06:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:34.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:34.352 17:06:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:34.352 17:06:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:34.352 [2024-07-12 17:06:33.831567] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
00:17:34.352 [2024-07-12 17:06:33.831673] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1143351 ] 00:17:34.352 EAL: No free 2048 kB hugepages reported on node 1 00:17:34.352 [2024-07-12 17:06:33.889949] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.352 [2024-07-12 17:06:33.997342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:34.609 17:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:34.609 17:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:34.610 17:06:34 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.MNIkLBCuiZ 00:17:34.866 [2024-07-12 17:06:34.330150] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:34.866 [2024-07-12 17:06:34.330267] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:34.866 TLSTESTn1 00:17:34.866 17:06:34 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:34.866 Running I/O for 10 seconds... 00:17:47.074 00:17:47.074 Latency(us) 00:17:47.074 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:47.074 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:47.074 Verification LBA range: start 0x0 length 0x2000 00:17:47.074 TLSTESTn1 : 10.02 3545.53 13.85 0.00 0.00 36040.58 7815.77 59419.31 00:17:47.074 =================================================================================================================== 00:17:47.074 Total : 3545.53 13.85 0.00 0.00 36040.58 7815.77 59419.31 00:17:47.074 0 00:17:47.074 17:06:44 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:47.074 17:06:44 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1143351 00:17:47.074 17:06:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1143351 ']' 00:17:47.074 17:06:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1143351 00:17:47.074 17:06:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:47.074 17:06:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:47.074 17:06:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1143351 00:17:47.074 17:06:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:47.074 17:06:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:47.074 17:06:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1143351' 00:17:47.074 killing process with pid 1143351 00:17:47.074 17:06:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1143351 00:17:47.074 Received shutdown signal, test time was about 10.000000 seconds 00:17:47.074 00:17:47.074 Latency(us) 00:17:47.074 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:17:47.074 =================================================================================================================== 00:17:47.074 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:47.074 [2024-07-12 17:06:44.607589] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:47.074 17:06:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1143351 00:17:47.074 17:06:44 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sgZw8IIqA0 00:17:47.074 17:06:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:47.074 17:06:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sgZw8IIqA0 00:17:47.074 17:06:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:47.074 17:06:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:47.074 17:06:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:47.074 17:06:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:47.074 17:06:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sgZw8IIqA0 00:17:47.074 17:06:44 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:47.074 17:06:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:47.074 17:06:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:47.074 17:06:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.sgZw8IIqA0' 00:17:47.074 17:06:44 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:47.074 17:06:44 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1144597 00:17:47.074 17:06:44 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:47.074 17:06:44 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:47.074 17:06:44 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1144597 /var/tmp/bdevperf.sock 00:17:47.074 17:06:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1144597 ']' 00:17:47.074 17:06:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:47.074 17:06:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:47.074 17:06:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:47.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:47.074 17:06:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:47.074 17:06:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:47.074 [2024-07-12 17:06:44.925471] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
00:17:47.074 [2024-07-12 17:06:44.925557] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1144597 ] 00:17:47.074 EAL: No free 2048 kB hugepages reported on node 1 00:17:47.074 [2024-07-12 17:06:44.989912] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.074 [2024-07-12 17:06:45.106236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:47.074 17:06:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:47.074 17:06:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:47.074 17:06:45 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sgZw8IIqA0 00:17:47.074 [2024-07-12 17:06:45.489333] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:47.074 [2024-07-12 17:06:45.489483] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:47.074 [2024-07-12 17:06:45.498293] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:47.074 [2024-07-12 17:06:45.498320] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f6d0 (107): Transport endpoint is not connected 00:17:47.074 [2024-07-12 17:06:45.499287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f6d0 (9): Bad file descriptor 00:17:47.074 [2024-07-12 17:06:45.500287] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:47.074 [2024-07-12 17:06:45.500311] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:47.074 [2024-07-12 17:06:45.500338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:47.074 request: 00:17:47.074 { 00:17:47.074 "name": "TLSTEST", 00:17:47.074 "trtype": "tcp", 00:17:47.074 "traddr": "10.0.0.2", 00:17:47.074 "adrfam": "ipv4", 00:17:47.074 "trsvcid": "4420", 00:17:47.074 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:47.074 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:47.074 "prchk_reftag": false, 00:17:47.074 "prchk_guard": false, 00:17:47.074 "hdgst": false, 00:17:47.074 "ddgst": false, 00:17:47.074 "psk": "/tmp/tmp.sgZw8IIqA0", 00:17:47.074 "method": "bdev_nvme_attach_controller", 00:17:47.074 "req_id": 1 00:17:47.074 } 00:17:47.074 Got JSON-RPC error response 00:17:47.074 response: 00:17:47.074 { 00:17:47.074 "code": -5, 00:17:47.074 "message": "Input/output error" 00:17:47.074 } 00:17:47.074 17:06:45 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1144597 00:17:47.075 17:06:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1144597 ']' 00:17:47.075 17:06:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1144597 00:17:47.075 17:06:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:47.075 17:06:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:47.075 17:06:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1144597 00:17:47.075 17:06:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:47.075 17:06:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:47.075 17:06:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1144597' 00:17:47.075 killing process with pid 1144597 00:17:47.075 17:06:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1144597 00:17:47.075 Received shutdown signal, test time was about 10.000000 seconds 00:17:47.075 00:17:47.075 Latency(us) 00:17:47.075 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:47.075 =================================================================================================================== 00:17:47.075 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:47.075 [2024-07-12 17:06:45.548514] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:47.075 17:06:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1144597 00:17:47.075 17:06:45 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:47.075 17:06:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:47.075 17:06:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:47.075 17:06:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:47.075 17:06:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:47.075 17:06:45 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.MNIkLBCuiZ 00:17:47.075 17:06:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:47.075 17:06:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.MNIkLBCuiZ 00:17:47.075 17:06:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:47.075 17:06:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:47.075 17:06:45 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:47.075 17:06:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:47.075 17:06:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.MNIkLBCuiZ 00:17:47.075 17:06:45 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:47.075 17:06:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:47.075 17:06:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:47.075 17:06:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.MNIkLBCuiZ' 00:17:47.075 17:06:45 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:47.075 17:06:45 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1144739 00:17:47.075 17:06:45 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:47.075 17:06:45 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:47.075 17:06:45 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1144739 /var/tmp/bdevperf.sock 00:17:47.075 17:06:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1144739 ']' 00:17:47.075 17:06:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:47.075 17:06:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:47.075 17:06:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:47.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:47.075 17:06:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:47.075 17:06:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:47.075 [2024-07-12 17:06:45.859860] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
00:17:47.075 [2024-07-12 17:06:45.859938] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1144739 ] 00:17:47.075 EAL: No free 2048 kB hugepages reported on node 1 00:17:47.075 [2024-07-12 17:06:45.917471] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.075 [2024-07-12 17:06:46.020063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:47.075 17:06:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:47.075 17:06:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:47.075 17:06:46 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.MNIkLBCuiZ 00:17:47.075 [2024-07-12 17:06:46.397967] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:47.075 [2024-07-12 17:06:46.398105] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:47.075 [2024-07-12 17:06:46.403111] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:47.075 [2024-07-12 17:06:46.403141] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:47.075 [2024-07-12 17:06:46.403186] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:47.075 [2024-07-12 17:06:46.403788] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f96d0 (107): Transport endpoint is not connected 00:17:47.075 [2024-07-12 17:06:46.404776] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f96d0 (9): Bad file descriptor 00:17:47.075 [2024-07-12 17:06:46.405776] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:47.075 [2024-07-12 17:06:46.405813] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:47.075 [2024-07-12 17:06:46.405841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:47.075 request: 00:17:47.075 { 00:17:47.075 "name": "TLSTEST", 00:17:47.075 "trtype": "tcp", 00:17:47.075 "traddr": "10.0.0.2", 00:17:47.075 "adrfam": "ipv4", 00:17:47.075 "trsvcid": "4420", 00:17:47.075 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:47.075 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:47.075 "prchk_reftag": false, 00:17:47.075 "prchk_guard": false, 00:17:47.075 "hdgst": false, 00:17:47.075 "ddgst": false, 00:17:47.075 "psk": "/tmp/tmp.MNIkLBCuiZ", 00:17:47.075 "method": "bdev_nvme_attach_controller", 00:17:47.075 "req_id": 1 00:17:47.075 } 00:17:47.075 Got JSON-RPC error response 00:17:47.075 response: 00:17:47.075 { 00:17:47.075 "code": -5, 00:17:47.075 "message": "Input/output error" 00:17:47.075 } 00:17:47.075 17:06:46 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1144739 00:17:47.075 17:06:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1144739 ']' 00:17:47.075 17:06:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1144739 00:17:47.075 17:06:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:47.075 17:06:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:47.075 17:06:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1144739 00:17:47.075 17:06:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:47.075 17:06:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:47.075 17:06:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1144739' 00:17:47.075 killing process with pid 1144739 00:17:47.075 17:06:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1144739 00:17:47.075 Received shutdown signal, test time was about 10.000000 seconds 00:17:47.075 00:17:47.075 Latency(us) 00:17:47.075 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:47.075 =================================================================================================================== 00:17:47.075 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:47.075 [2024-07-12 17:06:46.454901] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:47.075 17:06:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1144739 00:17:47.075 17:06:46 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:47.075 17:06:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:47.075 17:06:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:47.075 17:06:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:47.075 17:06:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:47.075 17:06:46 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.MNIkLBCuiZ 00:17:47.075 17:06:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:47.075 17:06:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.MNIkLBCuiZ 00:17:47.075 17:06:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:47.075 17:06:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:47.075 17:06:46 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:47.075 17:06:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:47.076 17:06:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.MNIkLBCuiZ 00:17:47.076 17:06:46 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:47.076 17:06:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:47.076 17:06:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:47.076 17:06:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.MNIkLBCuiZ' 00:17:47.076 17:06:46 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:47.076 17:06:46 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1144874 00:17:47.076 17:06:46 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:47.076 17:06:46 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:47.076 17:06:46 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1144874 /var/tmp/bdevperf.sock 00:17:47.076 17:06:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1144874 ']' 00:17:47.076 17:06:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:47.076 17:06:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:47.076 17:06:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:47.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:47.076 17:06:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:47.076 17:06:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:47.076 [2024-07-12 17:06:46.764223] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
00:17:47.076 [2024-07-12 17:06:46.764302] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1144874 ] 00:17:47.333 EAL: No free 2048 kB hugepages reported on node 1 00:17:47.333 [2024-07-12 17:06:46.823588] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.333 [2024-07-12 17:06:46.927613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:47.333 17:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:47.333 17:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:47.590 17:06:47 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.MNIkLBCuiZ 00:17:47.591 [2024-07-12 17:06:47.256058] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:47.591 [2024-07-12 17:06:47.256180] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:47.591 [2024-07-12 17:06:47.262729] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:47.591 [2024-07-12 17:06:47.262781] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:47.591 [2024-07-12 17:06:47.262822] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:47.591 [2024-07-12 17:06:47.263091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11156d0 (107): Transport endpoint is not connected 00:17:47.591 [2024-07-12 17:06:47.264076] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11156d0 (9): Bad file descriptor 00:17:47.591 [2024-07-12 17:06:47.265075] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:47.591 [2024-07-12 17:06:47.265099] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:47.591 [2024-07-12 17:06:47.265128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:17:47.591 request: 00:17:47.591 { 00:17:47.591 "name": "TLSTEST", 00:17:47.591 "trtype": "tcp", 00:17:47.591 "traddr": "10.0.0.2", 00:17:47.591 "adrfam": "ipv4", 00:17:47.591 "trsvcid": "4420", 00:17:47.591 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:47.591 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:47.591 "prchk_reftag": false, 00:17:47.591 "prchk_guard": false, 00:17:47.591 "hdgst": false, 00:17:47.591 "ddgst": false, 00:17:47.591 "psk": "/tmp/tmp.MNIkLBCuiZ", 00:17:47.591 "method": "bdev_nvme_attach_controller", 00:17:47.591 "req_id": 1 00:17:47.591 } 00:17:47.591 Got JSON-RPC error response 00:17:47.591 response: 00:17:47.591 { 00:17:47.591 "code": -5, 00:17:47.591 "message": "Input/output error" 00:17:47.591 } 00:17:47.591 17:06:47 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1144874 00:17:47.591 17:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1144874 ']' 00:17:47.591 17:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1144874 00:17:47.848 17:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:47.848 17:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:47.848 17:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1144874 00:17:47.848 17:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:47.848 17:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:47.848 17:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1144874' 00:17:47.848 killing process with pid 1144874 00:17:47.848 17:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1144874 00:17:47.848 Received shutdown signal, test time was about 10.000000 seconds 00:17:47.848 00:17:47.849 Latency(us) 00:17:47.849 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:47.849 =================================================================================================================== 00:17:47.849 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:47.849 [2024-07-12 17:06:47.310211] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:47.849 17:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1144874 00:17:48.107 17:06:47 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:48.107 17:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:48.107 17:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:48.107 17:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:48.107 17:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:48.107 17:06:47 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:48.107 17:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:48.107 17:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:48.107 17:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:48.107 17:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:48.107 17:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:17:48.107 17:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:48.107 17:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:48.107 17:06:47 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:48.107 17:06:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:48.107 17:06:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:48.107 17:06:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:17:48.107 17:06:47 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:48.107 17:06:47 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1145014 00:17:48.107 17:06:47 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:48.107 17:06:47 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:48.107 17:06:47 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1145014 /var/tmp/bdevperf.sock 00:17:48.107 17:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1145014 ']' 00:17:48.107 17:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:48.107 17:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:48.107 17:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:48.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:48.107 17:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:48.107 17:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:48.107 [2024-07-12 17:06:47.602986] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
00:17:48.107 [2024-07-12 17:06:47.603072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1145014 ] 00:17:48.107 EAL: No free 2048 kB hugepages reported on node 1 00:17:48.107 [2024-07-12 17:06:47.660676] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.108 [2024-07-12 17:06:47.765616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:48.365 17:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:48.365 17:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:48.365 17:06:47 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:48.622 [2024-07-12 17:06:48.105006] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:48.622 [2024-07-12 17:06:48.106555] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a9e10 (9): Bad file descriptor 00:17:48.622 [2024-07-12 17:06:48.107554] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:48.622 [2024-07-12 17:06:48.107577] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:48.622 [2024-07-12 17:06:48.107603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:48.622 request: 00:17:48.622 { 00:17:48.622 "name": "TLSTEST", 00:17:48.622 "trtype": "tcp", 00:17:48.622 "traddr": "10.0.0.2", 00:17:48.622 "adrfam": "ipv4", 00:17:48.622 "trsvcid": "4420", 00:17:48.622 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:48.622 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:48.622 "prchk_reftag": false, 00:17:48.622 "prchk_guard": false, 00:17:48.622 "hdgst": false, 00:17:48.622 "ddgst": false, 00:17:48.622 "method": "bdev_nvme_attach_controller", 00:17:48.622 "req_id": 1 00:17:48.622 } 00:17:48.622 Got JSON-RPC error response 00:17:48.622 response: 00:17:48.622 { 00:17:48.622 "code": -5, 00:17:48.622 "message": "Input/output error" 00:17:48.622 } 00:17:48.622 17:06:48 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1145014 00:17:48.622 17:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1145014 ']' 00:17:48.622 17:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1145014 00:17:48.622 17:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:48.622 17:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:48.623 17:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1145014 00:17:48.623 17:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:48.623 17:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:48.623 17:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1145014' 00:17:48.623 killing process with pid 1145014 00:17:48.623 17:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1145014 00:17:48.623 Received shutdown signal, test time was about 10.000000 seconds 00:17:48.623 00:17:48.623 Latency(us) 00:17:48.623 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:48.623 =================================================================================================================== 00:17:48.623 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:48.623 17:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1145014 00:17:48.880 17:06:48 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:48.880 17:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:48.880 17:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:48.880 17:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:48.880 17:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:48.880 17:06:48 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 1141506 00:17:48.880 17:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1141506 ']' 00:17:48.880 17:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1141506 00:17:48.880 17:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:48.880 17:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:48.880 17:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1141506 00:17:48.880 17:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:48.880 17:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:48.880 17:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1141506' 00:17:48.880 
killing process with pid 1141506 00:17:48.880 17:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1141506 00:17:48.880 [2024-07-12 17:06:48.442957] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:48.880 17:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1141506 00:17:49.138 17:06:48 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:17:49.138 17:06:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:17:49.138 17:06:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:49.138 17:06:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:49.138 17:06:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:49.138 17:06:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:17:49.138 17:06:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:49.138 17:06:48 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:49.138 17:06:48 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:17:49.138 17:06:48 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.D3xpQwu1bD 00:17:49.138 17:06:48 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:49.138 17:06:48 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.D3xpQwu1bD 00:17:49.138 17:06:48 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:17:49.138 17:06:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:49.138 17:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:49.138 17:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:49.138 17:06:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1145165 00:17:49.138 17:06:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:49.138 17:06:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1145165 00:17:49.138 17:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1145165 ']' 00:17:49.138 17:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.138 17:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:49.138 17:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:49.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:49.138 17:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:49.138 17:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:49.138 [2024-07-12 17:06:48.819770] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
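The format_interchange_psk step above wraps the raw hex key in the TLS PSK interchange form (NVMeTLSkey-1:02:<base64>:) before writing it to /tmp/tmp.D3xpQwu1bD and locking it down to 0600. A sketch of how such a string can be assembled, assuming the helper appends a little-endian CRC-32 of the key bytes before base64-encoding and that digest "02" selects SHA-384; verify against nvmf/common.sh before relying on the exact layout:

import base64
import zlib

def format_interchange_psk(key: str, digest: int) -> str:
    # Assumed layout: base64(key bytes || CRC-32 of key, little-endian),
    # prefixed with "NVMeTLSkey-1:<digest>:". The CRC trailer is an assumption
    # based on the PSK interchange format, not lifted from nvmf/common.sh.
    data = key.encode()
    crc = zlib.crc32(data).to_bytes(4, "little")
    return "NVMeTLSkey-1:{:02d}:{}:".format(
        digest, base64.b64encode(data + crc).decode())

print(format_interchange_psk(
    "00112233445566778899aabbccddeeff0011223344556677", 2))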
00:17:49.138 [2024-07-12 17:06:48.819850] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:49.396 EAL: No free 2048 kB hugepages reported on node 1 00:17:49.396 [2024-07-12 17:06:48.882826] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.396 [2024-07-12 17:06:48.988250] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:49.396 [2024-07-12 17:06:48.988303] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:49.397 [2024-07-12 17:06:48.988333] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:49.397 [2024-07-12 17:06:48.988346] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:49.397 [2024-07-12 17:06:48.988357] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:49.397 [2024-07-12 17:06:48.988382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:49.654 17:06:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:49.654 17:06:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:49.654 17:06:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:49.654 17:06:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:49.654 17:06:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:49.654 17:06:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:49.654 17:06:49 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.D3xpQwu1bD 00:17:49.654 17:06:49 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.D3xpQwu1bD 00:17:49.654 17:06:49 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:49.911 [2024-07-12 17:06:49.405640] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:49.911 17:06:49 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:50.168 17:06:49 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:50.424 [2024-07-12 17:06:49.919005] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:50.424 [2024-07-12 17:06:49.919239] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:50.424 17:06:49 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:50.680 malloc0 00:17:50.681 17:06:50 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:50.938 17:06:50 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.D3xpQwu1bD 00:17:51.194 [2024-07-12 17:06:50.771923] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:51.194 17:06:50 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.D3xpQwu1bD 00:17:51.194 17:06:50 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:51.194 17:06:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:51.194 17:06:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:51.194 17:06:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.D3xpQwu1bD' 00:17:51.194 17:06:50 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:51.194 17:06:50 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1145448 00:17:51.194 17:06:50 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:51.194 17:06:50 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:51.194 17:06:50 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1145448 /var/tmp/bdevperf.sock 00:17:51.194 17:06:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1145448 ']' 00:17:51.194 17:06:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:51.194 17:06:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:51.194 17:06:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:51.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:51.194 17:06:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:51.194 17:06:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:51.194 [2024-07-12 17:06:50.836625] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
00:17:51.194 [2024-07-12 17:06:50.836697] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1145448 ] 00:17:51.194 EAL: No free 2048 kB hugepages reported on node 1 00:17:51.451 [2024-07-12 17:06:50.895100] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.451 [2024-07-12 17:06:50.999536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:51.451 17:06:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:51.451 17:06:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:51.451 17:06:51 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.D3xpQwu1bD 00:17:51.706 [2024-07-12 17:06:51.385326] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:51.706 [2024-07-12 17:06:51.385450] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:51.962 TLSTESTn1 00:17:51.962 17:06:51 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:51.962 Running I/O for 10 seconds... 00:18:04.153 00:18:04.153 Latency(us) 00:18:04.153 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:04.153 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:04.153 Verification LBA range: start 0x0 length 0x2000 00:18:04.153 TLSTESTn1 : 10.02 3518.23 13.74 0.00 0.00 36318.00 9175.04 57865.86 00:18:04.153 =================================================================================================================== 00:18:04.153 Total : 3518.23 13.74 0.00 0.00 36318.00 9175.04 57865.86 00:18:04.153 0 00:18:04.153 17:07:01 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:04.153 17:07:01 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1145448 00:18:04.153 17:07:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1145448 ']' 00:18:04.153 17:07:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1145448 00:18:04.153 17:07:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:04.153 17:07:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:04.153 17:07:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1145448 00:18:04.153 17:07:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:04.153 17:07:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:04.153 17:07:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1145448' 00:18:04.153 killing process with pid 1145448 00:18:04.153 17:07:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1145448 00:18:04.153 Received shutdown signal, test time was about 10.000000 seconds 00:18:04.153 00:18:04.153 Latency(us) 00:18:04.153 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:18:04.153 =================================================================================================================== 00:18:04.153 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:04.153 [2024-07-12 17:07:01.680959] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:04.153 17:07:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1145448 00:18:04.153 17:07:01 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.D3xpQwu1bD 00:18:04.153 17:07:01 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.D3xpQwu1bD 00:18:04.153 17:07:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:04.153 17:07:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.D3xpQwu1bD 00:18:04.153 17:07:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:04.153 17:07:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:04.153 17:07:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:04.153 17:07:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:04.153 17:07:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.D3xpQwu1bD 00:18:04.153 17:07:01 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:04.153 17:07:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:04.153 17:07:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:04.153 17:07:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.D3xpQwu1bD' 00:18:04.153 17:07:01 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:04.153 17:07:01 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1146647 00:18:04.153 17:07:01 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:04.153 17:07:01 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:04.153 17:07:01 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1146647 /var/tmp/bdevperf.sock 00:18:04.153 17:07:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1146647 ']' 00:18:04.153 17:07:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:04.153 17:07:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:04.153 17:07:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:04.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:04.153 17:07:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:04.153 17:07:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:04.153 [2024-07-12 17:07:01.964141] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
00:18:04.153 [2024-07-12 17:07:01.964218] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1146647 ] 00:18:04.153 EAL: No free 2048 kB hugepages reported on node 1 00:18:04.153 [2024-07-12 17:07:02.023153] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.153 [2024-07-12 17:07:02.134665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:04.153 17:07:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:04.153 17:07:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:04.153 17:07:02 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.D3xpQwu1bD 00:18:04.153 [2024-07-12 17:07:02.459803] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:04.153 [2024-07-12 17:07:02.459874] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:04.153 [2024-07-12 17:07:02.459895] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.D3xpQwu1bD 00:18:04.153 request: 00:18:04.153 { 00:18:04.153 "name": "TLSTEST", 00:18:04.153 "trtype": "tcp", 00:18:04.154 "traddr": "10.0.0.2", 00:18:04.154 "adrfam": "ipv4", 00:18:04.154 "trsvcid": "4420", 00:18:04.154 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:04.154 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:04.154 "prchk_reftag": false, 00:18:04.154 "prchk_guard": false, 00:18:04.154 "hdgst": false, 00:18:04.154 "ddgst": false, 00:18:04.154 "psk": "/tmp/tmp.D3xpQwu1bD", 00:18:04.154 "method": "bdev_nvme_attach_controller", 00:18:04.154 "req_id": 1 00:18:04.154 } 00:18:04.154 Got JSON-RPC error response 00:18:04.154 response: 00:18:04.154 { 00:18:04.154 "code": -1, 00:18:04.154 "message": "Operation not permitted" 00:18:04.154 } 00:18:04.154 17:07:02 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1146647 00:18:04.154 17:07:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1146647 ']' 00:18:04.154 17:07:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1146647 00:18:04.154 17:07:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:04.154 17:07:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:04.154 17:07:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1146647 00:18:04.154 17:07:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:04.154 17:07:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:04.154 17:07:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1146647' 00:18:04.154 killing process with pid 1146647 00:18:04.154 17:07:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1146647 00:18:04.154 Received shutdown signal, test time was about 10.000000 seconds 00:18:04.154 00:18:04.154 Latency(us) 00:18:04.154 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:04.154 
=================================================================================================================== 00:18:04.154 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:04.154 17:07:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1146647 00:18:04.154 17:07:02 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:04.154 17:07:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:04.154 17:07:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:04.154 17:07:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:04.154 17:07:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:04.154 17:07:02 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 1145165 00:18:04.154 17:07:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1145165 ']' 00:18:04.154 17:07:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1145165 00:18:04.154 17:07:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:04.154 17:07:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:04.154 17:07:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1145165 00:18:04.154 17:07:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:04.154 17:07:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:04.154 17:07:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1145165' 00:18:04.154 killing process with pid 1145165 00:18:04.154 17:07:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1145165 00:18:04.154 [2024-07-12 17:07:02.797970] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:04.154 17:07:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1145165 00:18:04.154 17:07:03 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:18:04.154 17:07:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:04.154 17:07:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:04.154 17:07:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:04.154 17:07:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1146788 00:18:04.154 17:07:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:04.154 17:07:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1146788 00:18:04.154 17:07:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1146788 ']' 00:18:04.154 17:07:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:04.154 17:07:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:04.154 17:07:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:04.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
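The chmod 0666 earlier in this run is what drives the permission failures around it: bdev_nvme_attach_controller has just come back with -1 (Operation not permitted) after "Incorrect permissions for PSK file", and the target-side nvmf_subsystem_add_host hits the same check further down. A sketch of the kind of mode check that produces this behaviour, as a generic illustration rather than SPDK's actual code:

import os
import stat

def psk_file_is_private(path: str) -> bool:
    # Reject a PSK file whose group/other permission bits are set; after the
    # chmod 0666 step this returns False and the key is refused, while the
    # later chmod 0600 makes it pass again.
    mode = os.stat(path).st_mode
    return stat.S_ISREG(mode) and (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0

if not psk_file_is_private("/tmp/tmp.D3xpQwu1bD"):  # path taken from the log
    raise PermissionError("Incorrect permissions for PSK file")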
00:18:04.154 17:07:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:04.154 17:07:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:04.154 [2024-07-12 17:07:03.127772] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:18:04.154 [2024-07-12 17:07:03.127855] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:04.154 EAL: No free 2048 kB hugepages reported on node 1 00:18:04.154 [2024-07-12 17:07:03.192748] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.154 [2024-07-12 17:07:03.298839] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:04.154 [2024-07-12 17:07:03.298893] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:04.154 [2024-07-12 17:07:03.298922] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:04.154 [2024-07-12 17:07:03.298934] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:04.154 [2024-07-12 17:07:03.298945] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:04.154 [2024-07-12 17:07:03.298976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:04.154 17:07:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:04.154 17:07:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:04.154 17:07:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:04.154 17:07:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:04.154 17:07:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:04.154 17:07:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:04.154 17:07:03 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.D3xpQwu1bD 00:18:04.154 17:07:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:04.154 17:07:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.D3xpQwu1bD 00:18:04.154 17:07:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:18:04.154 17:07:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:04.154 17:07:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:18:04.154 17:07:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:04.154 17:07:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.D3xpQwu1bD 00:18:04.154 17:07:03 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.D3xpQwu1bD 00:18:04.154 17:07:03 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:04.154 [2024-07-12 17:07:03.659205] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:04.154 17:07:03 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:04.411 
17:07:03 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:04.669 [2024-07-12 17:07:04.136432] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:04.669 [2024-07-12 17:07:04.136657] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:04.669 17:07:04 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:04.927 malloc0 00:18:04.927 17:07:04 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:05.184 17:07:04 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.D3xpQwu1bD 00:18:05.441 [2024-07-12 17:07:05.013460] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:05.442 [2024-07-12 17:07:05.013499] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:18:05.442 [2024-07-12 17:07:05.013546] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:05.442 request: 00:18:05.442 { 00:18:05.442 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:05.442 "host": "nqn.2016-06.io.spdk:host1", 00:18:05.442 "psk": "/tmp/tmp.D3xpQwu1bD", 00:18:05.442 "method": "nvmf_subsystem_add_host", 00:18:05.442 "req_id": 1 00:18:05.442 } 00:18:05.442 Got JSON-RPC error response 00:18:05.442 response: 00:18:05.442 { 00:18:05.442 "code": -32603, 00:18:05.442 "message": "Internal error" 00:18:05.442 } 00:18:05.442 17:07:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:05.442 17:07:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:05.442 17:07:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:05.442 17:07:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:05.442 17:07:05 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 1146788 00:18:05.442 17:07:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1146788 ']' 00:18:05.442 17:07:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1146788 00:18:05.442 17:07:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:05.442 17:07:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:05.442 17:07:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1146788 00:18:05.442 17:07:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:05.442 17:07:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:05.442 17:07:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1146788' 00:18:05.442 killing process with pid 1146788 00:18:05.442 17:07:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1146788 00:18:05.442 17:07:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1146788 00:18:05.700 17:07:05 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.D3xpQwu1bD 00:18:05.700 17:07:05 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:18:05.700 
17:07:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:05.700 17:07:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:05.700 17:07:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:05.700 17:07:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1147084 00:18:05.700 17:07:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:05.700 17:07:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1147084 00:18:05.700 17:07:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1147084 ']' 00:18:05.700 17:07:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.700 17:07:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:05.700 17:07:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:05.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:05.700 17:07:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:05.700 17:07:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:05.700 [2024-07-12 17:07:05.353177] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:18:05.700 [2024-07-12 17:07:05.353255] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:05.700 EAL: No free 2048 kB hugepages reported on node 1 00:18:05.958 [2024-07-12 17:07:05.419969] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.958 [2024-07-12 17:07:05.527042] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:05.958 [2024-07-12 17:07:05.527101] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:05.958 [2024-07-12 17:07:05.527131] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:05.958 [2024-07-12 17:07:05.527143] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:05.958 [2024-07-12 17:07:05.527153] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:05.958 [2024-07-12 17:07:05.527180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:05.958 17:07:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:05.958 17:07:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:05.958 17:07:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:05.958 17:07:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:05.958 17:07:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:06.215 17:07:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:06.215 17:07:05 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.D3xpQwu1bD 00:18:06.215 17:07:05 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.D3xpQwu1bD 00:18:06.215 17:07:05 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:06.215 [2024-07-12 17:07:05.896666] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:06.473 17:07:05 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:06.473 17:07:06 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:07.038 [2024-07-12 17:07:06.434088] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:07.038 [2024-07-12 17:07:06.434304] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:07.038 17:07:06 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:07.038 malloc0 00:18:07.038 17:07:06 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:07.296 17:07:06 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.D3xpQwu1bD 00:18:07.554 [2024-07-12 17:07:07.190096] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:07.554 17:07:07 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1147369 00:18:07.554 17:07:07 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:07.554 17:07:07 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:07.554 17:07:07 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1147369 /var/tmp/bdevperf.sock 00:18:07.554 17:07:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1147369 ']' 00:18:07.554 17:07:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:07.554 17:07:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:07.554 17:07:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:07.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:07.554 17:07:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:07.554 17:07:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:07.554 [2024-07-12 17:07:07.243938] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:18:07.554 [2024-07-12 17:07:07.244012] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1147369 ] 00:18:07.812 EAL: No free 2048 kB hugepages reported on node 1 00:18:07.812 [2024-07-12 17:07:07.300967] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.812 [2024-07-12 17:07:07.408173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:08.069 17:07:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:08.070 17:07:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:08.070 17:07:07 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.D3xpQwu1bD 00:18:08.070 [2024-07-12 17:07:07.759445] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:08.070 [2024-07-12 17:07:07.759568] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:08.326 TLSTESTn1 00:18:08.326 17:07:07 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:08.583 17:07:08 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:18:08.583 "subsystems": [ 00:18:08.583 { 00:18:08.583 "subsystem": "keyring", 00:18:08.583 "config": [] 00:18:08.583 }, 00:18:08.583 { 00:18:08.583 "subsystem": "iobuf", 00:18:08.583 "config": [ 00:18:08.583 { 00:18:08.583 "method": "iobuf_set_options", 00:18:08.583 "params": { 00:18:08.583 "small_pool_count": 8192, 00:18:08.583 "large_pool_count": 1024, 00:18:08.583 "small_bufsize": 8192, 00:18:08.583 "large_bufsize": 135168 00:18:08.583 } 00:18:08.583 } 00:18:08.583 ] 00:18:08.583 }, 00:18:08.583 { 00:18:08.583 "subsystem": "sock", 00:18:08.583 "config": [ 00:18:08.583 { 00:18:08.583 "method": "sock_set_default_impl", 00:18:08.583 "params": { 00:18:08.583 "impl_name": "posix" 00:18:08.583 } 00:18:08.583 }, 00:18:08.583 { 00:18:08.584 "method": "sock_impl_set_options", 00:18:08.584 "params": { 00:18:08.584 "impl_name": "ssl", 00:18:08.584 "recv_buf_size": 4096, 00:18:08.584 "send_buf_size": 4096, 00:18:08.584 "enable_recv_pipe": true, 00:18:08.584 "enable_quickack": false, 00:18:08.584 "enable_placement_id": 0, 00:18:08.584 "enable_zerocopy_send_server": true, 00:18:08.584 "enable_zerocopy_send_client": false, 00:18:08.584 "zerocopy_threshold": 0, 00:18:08.584 "tls_version": 0, 00:18:08.584 "enable_ktls": false 00:18:08.584 } 00:18:08.584 }, 00:18:08.584 { 00:18:08.584 "method": "sock_impl_set_options", 00:18:08.584 "params": { 00:18:08.584 "impl_name": "posix", 00:18:08.584 "recv_buf_size": 2097152, 00:18:08.584 
"send_buf_size": 2097152, 00:18:08.584 "enable_recv_pipe": true, 00:18:08.584 "enable_quickack": false, 00:18:08.584 "enable_placement_id": 0, 00:18:08.584 "enable_zerocopy_send_server": true, 00:18:08.584 "enable_zerocopy_send_client": false, 00:18:08.584 "zerocopy_threshold": 0, 00:18:08.584 "tls_version": 0, 00:18:08.584 "enable_ktls": false 00:18:08.584 } 00:18:08.584 } 00:18:08.584 ] 00:18:08.584 }, 00:18:08.584 { 00:18:08.584 "subsystem": "vmd", 00:18:08.584 "config": [] 00:18:08.584 }, 00:18:08.584 { 00:18:08.584 "subsystem": "accel", 00:18:08.584 "config": [ 00:18:08.584 { 00:18:08.584 "method": "accel_set_options", 00:18:08.584 "params": { 00:18:08.584 "small_cache_size": 128, 00:18:08.584 "large_cache_size": 16, 00:18:08.584 "task_count": 2048, 00:18:08.584 "sequence_count": 2048, 00:18:08.584 "buf_count": 2048 00:18:08.584 } 00:18:08.584 } 00:18:08.584 ] 00:18:08.584 }, 00:18:08.584 { 00:18:08.584 "subsystem": "bdev", 00:18:08.584 "config": [ 00:18:08.584 { 00:18:08.584 "method": "bdev_set_options", 00:18:08.584 "params": { 00:18:08.584 "bdev_io_pool_size": 65535, 00:18:08.584 "bdev_io_cache_size": 256, 00:18:08.584 "bdev_auto_examine": true, 00:18:08.584 "iobuf_small_cache_size": 128, 00:18:08.584 "iobuf_large_cache_size": 16 00:18:08.584 } 00:18:08.584 }, 00:18:08.584 { 00:18:08.584 "method": "bdev_raid_set_options", 00:18:08.584 "params": { 00:18:08.584 "process_window_size_kb": 1024 00:18:08.584 } 00:18:08.584 }, 00:18:08.584 { 00:18:08.584 "method": "bdev_iscsi_set_options", 00:18:08.584 "params": { 00:18:08.584 "timeout_sec": 30 00:18:08.584 } 00:18:08.584 }, 00:18:08.584 { 00:18:08.584 "method": "bdev_nvme_set_options", 00:18:08.584 "params": { 00:18:08.584 "action_on_timeout": "none", 00:18:08.584 "timeout_us": 0, 00:18:08.584 "timeout_admin_us": 0, 00:18:08.584 "keep_alive_timeout_ms": 10000, 00:18:08.584 "arbitration_burst": 0, 00:18:08.584 "low_priority_weight": 0, 00:18:08.584 "medium_priority_weight": 0, 00:18:08.584 "high_priority_weight": 0, 00:18:08.584 "nvme_adminq_poll_period_us": 10000, 00:18:08.584 "nvme_ioq_poll_period_us": 0, 00:18:08.584 "io_queue_requests": 0, 00:18:08.584 "delay_cmd_submit": true, 00:18:08.584 "transport_retry_count": 4, 00:18:08.584 "bdev_retry_count": 3, 00:18:08.584 "transport_ack_timeout": 0, 00:18:08.584 "ctrlr_loss_timeout_sec": 0, 00:18:08.584 "reconnect_delay_sec": 0, 00:18:08.584 "fast_io_fail_timeout_sec": 0, 00:18:08.584 "disable_auto_failback": false, 00:18:08.584 "generate_uuids": false, 00:18:08.584 "transport_tos": 0, 00:18:08.584 "nvme_error_stat": false, 00:18:08.584 "rdma_srq_size": 0, 00:18:08.584 "io_path_stat": false, 00:18:08.584 "allow_accel_sequence": false, 00:18:08.584 "rdma_max_cq_size": 0, 00:18:08.584 "rdma_cm_event_timeout_ms": 0, 00:18:08.584 "dhchap_digests": [ 00:18:08.584 "sha256", 00:18:08.584 "sha384", 00:18:08.584 "sha512" 00:18:08.584 ], 00:18:08.584 "dhchap_dhgroups": [ 00:18:08.584 "null", 00:18:08.584 "ffdhe2048", 00:18:08.584 "ffdhe3072", 00:18:08.584 "ffdhe4096", 00:18:08.584 "ffdhe6144", 00:18:08.584 "ffdhe8192" 00:18:08.584 ] 00:18:08.584 } 00:18:08.584 }, 00:18:08.584 { 00:18:08.584 "method": "bdev_nvme_set_hotplug", 00:18:08.584 "params": { 00:18:08.584 "period_us": 100000, 00:18:08.584 "enable": false 00:18:08.584 } 00:18:08.584 }, 00:18:08.584 { 00:18:08.584 "method": "bdev_malloc_create", 00:18:08.584 "params": { 00:18:08.584 "name": "malloc0", 00:18:08.584 "num_blocks": 8192, 00:18:08.584 "block_size": 4096, 00:18:08.584 "physical_block_size": 4096, 00:18:08.584 "uuid": 
"0abd7645-6ac9-4a59-a5ad-cbab43a8f10b", 00:18:08.584 "optimal_io_boundary": 0 00:18:08.584 } 00:18:08.584 }, 00:18:08.584 { 00:18:08.584 "method": "bdev_wait_for_examine" 00:18:08.584 } 00:18:08.584 ] 00:18:08.584 }, 00:18:08.584 { 00:18:08.584 "subsystem": "nbd", 00:18:08.584 "config": [] 00:18:08.584 }, 00:18:08.584 { 00:18:08.584 "subsystem": "scheduler", 00:18:08.584 "config": [ 00:18:08.584 { 00:18:08.584 "method": "framework_set_scheduler", 00:18:08.584 "params": { 00:18:08.584 "name": "static" 00:18:08.584 } 00:18:08.584 } 00:18:08.584 ] 00:18:08.584 }, 00:18:08.584 { 00:18:08.584 "subsystem": "nvmf", 00:18:08.584 "config": [ 00:18:08.584 { 00:18:08.584 "method": "nvmf_set_config", 00:18:08.584 "params": { 00:18:08.584 "discovery_filter": "match_any", 00:18:08.584 "admin_cmd_passthru": { 00:18:08.584 "identify_ctrlr": false 00:18:08.584 } 00:18:08.584 } 00:18:08.584 }, 00:18:08.584 { 00:18:08.584 "method": "nvmf_set_max_subsystems", 00:18:08.584 "params": { 00:18:08.584 "max_subsystems": 1024 00:18:08.584 } 00:18:08.584 }, 00:18:08.584 { 00:18:08.584 "method": "nvmf_set_crdt", 00:18:08.584 "params": { 00:18:08.584 "crdt1": 0, 00:18:08.584 "crdt2": 0, 00:18:08.584 "crdt3": 0 00:18:08.584 } 00:18:08.584 }, 00:18:08.584 { 00:18:08.584 "method": "nvmf_create_transport", 00:18:08.584 "params": { 00:18:08.584 "trtype": "TCP", 00:18:08.584 "max_queue_depth": 128, 00:18:08.584 "max_io_qpairs_per_ctrlr": 127, 00:18:08.584 "in_capsule_data_size": 4096, 00:18:08.584 "max_io_size": 131072, 00:18:08.584 "io_unit_size": 131072, 00:18:08.584 "max_aq_depth": 128, 00:18:08.584 "num_shared_buffers": 511, 00:18:08.584 "buf_cache_size": 4294967295, 00:18:08.584 "dif_insert_or_strip": false, 00:18:08.584 "zcopy": false, 00:18:08.584 "c2h_success": false, 00:18:08.584 "sock_priority": 0, 00:18:08.584 "abort_timeout_sec": 1, 00:18:08.584 "ack_timeout": 0, 00:18:08.584 "data_wr_pool_size": 0 00:18:08.584 } 00:18:08.584 }, 00:18:08.584 { 00:18:08.584 "method": "nvmf_create_subsystem", 00:18:08.584 "params": { 00:18:08.584 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:08.584 "allow_any_host": false, 00:18:08.584 "serial_number": "SPDK00000000000001", 00:18:08.584 "model_number": "SPDK bdev Controller", 00:18:08.584 "max_namespaces": 10, 00:18:08.584 "min_cntlid": 1, 00:18:08.584 "max_cntlid": 65519, 00:18:08.584 "ana_reporting": false 00:18:08.584 } 00:18:08.584 }, 00:18:08.584 { 00:18:08.584 "method": "nvmf_subsystem_add_host", 00:18:08.584 "params": { 00:18:08.584 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:08.584 "host": "nqn.2016-06.io.spdk:host1", 00:18:08.584 "psk": "/tmp/tmp.D3xpQwu1bD" 00:18:08.584 } 00:18:08.584 }, 00:18:08.584 { 00:18:08.584 "method": "nvmf_subsystem_add_ns", 00:18:08.584 "params": { 00:18:08.584 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:08.584 "namespace": { 00:18:08.584 "nsid": 1, 00:18:08.584 "bdev_name": "malloc0", 00:18:08.584 "nguid": "0ABD76456AC94A59A5ADCBAB43A8F10B", 00:18:08.585 "uuid": "0abd7645-6ac9-4a59-a5ad-cbab43a8f10b", 00:18:08.585 "no_auto_visible": false 00:18:08.585 } 00:18:08.585 } 00:18:08.585 }, 00:18:08.585 { 00:18:08.585 "method": "nvmf_subsystem_add_listener", 00:18:08.585 "params": { 00:18:08.585 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:08.585 "listen_address": { 00:18:08.585 "trtype": "TCP", 00:18:08.585 "adrfam": "IPv4", 00:18:08.585 "traddr": "10.0.0.2", 00:18:08.585 "trsvcid": "4420" 00:18:08.585 }, 00:18:08.585 "secure_channel": true 00:18:08.585 } 00:18:08.585 } 00:18:08.585 ] 00:18:08.585 } 00:18:08.585 ] 00:18:08.585 }' 00:18:08.585 17:07:08 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:09.149 17:07:08 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:18:09.149 "subsystems": [ 00:18:09.149 { 00:18:09.149 "subsystem": "keyring", 00:18:09.149 "config": [] 00:18:09.149 }, 00:18:09.149 { 00:18:09.149 "subsystem": "iobuf", 00:18:09.149 "config": [ 00:18:09.149 { 00:18:09.149 "method": "iobuf_set_options", 00:18:09.149 "params": { 00:18:09.149 "small_pool_count": 8192, 00:18:09.149 "large_pool_count": 1024, 00:18:09.149 "small_bufsize": 8192, 00:18:09.149 "large_bufsize": 135168 00:18:09.149 } 00:18:09.149 } 00:18:09.149 ] 00:18:09.149 }, 00:18:09.149 { 00:18:09.149 "subsystem": "sock", 00:18:09.149 "config": [ 00:18:09.149 { 00:18:09.149 "method": "sock_set_default_impl", 00:18:09.149 "params": { 00:18:09.149 "impl_name": "posix" 00:18:09.149 } 00:18:09.149 }, 00:18:09.149 { 00:18:09.149 "method": "sock_impl_set_options", 00:18:09.149 "params": { 00:18:09.149 "impl_name": "ssl", 00:18:09.149 "recv_buf_size": 4096, 00:18:09.149 "send_buf_size": 4096, 00:18:09.149 "enable_recv_pipe": true, 00:18:09.149 "enable_quickack": false, 00:18:09.149 "enable_placement_id": 0, 00:18:09.149 "enable_zerocopy_send_server": true, 00:18:09.149 "enable_zerocopy_send_client": false, 00:18:09.149 "zerocopy_threshold": 0, 00:18:09.149 "tls_version": 0, 00:18:09.149 "enable_ktls": false 00:18:09.149 } 00:18:09.149 }, 00:18:09.149 { 00:18:09.149 "method": "sock_impl_set_options", 00:18:09.149 "params": { 00:18:09.149 "impl_name": "posix", 00:18:09.149 "recv_buf_size": 2097152, 00:18:09.149 "send_buf_size": 2097152, 00:18:09.149 "enable_recv_pipe": true, 00:18:09.149 "enable_quickack": false, 00:18:09.149 "enable_placement_id": 0, 00:18:09.149 "enable_zerocopy_send_server": true, 00:18:09.149 "enable_zerocopy_send_client": false, 00:18:09.149 "zerocopy_threshold": 0, 00:18:09.149 "tls_version": 0, 00:18:09.149 "enable_ktls": false 00:18:09.149 } 00:18:09.149 } 00:18:09.149 ] 00:18:09.149 }, 00:18:09.149 { 00:18:09.149 "subsystem": "vmd", 00:18:09.149 "config": [] 00:18:09.149 }, 00:18:09.149 { 00:18:09.149 "subsystem": "accel", 00:18:09.149 "config": [ 00:18:09.149 { 00:18:09.149 "method": "accel_set_options", 00:18:09.149 "params": { 00:18:09.149 "small_cache_size": 128, 00:18:09.149 "large_cache_size": 16, 00:18:09.149 "task_count": 2048, 00:18:09.149 "sequence_count": 2048, 00:18:09.149 "buf_count": 2048 00:18:09.149 } 00:18:09.149 } 00:18:09.149 ] 00:18:09.149 }, 00:18:09.149 { 00:18:09.149 "subsystem": "bdev", 00:18:09.149 "config": [ 00:18:09.149 { 00:18:09.149 "method": "bdev_set_options", 00:18:09.149 "params": { 00:18:09.149 "bdev_io_pool_size": 65535, 00:18:09.149 "bdev_io_cache_size": 256, 00:18:09.149 "bdev_auto_examine": true, 00:18:09.149 "iobuf_small_cache_size": 128, 00:18:09.149 "iobuf_large_cache_size": 16 00:18:09.149 } 00:18:09.149 }, 00:18:09.149 { 00:18:09.149 "method": "bdev_raid_set_options", 00:18:09.149 "params": { 00:18:09.149 "process_window_size_kb": 1024 00:18:09.149 } 00:18:09.149 }, 00:18:09.149 { 00:18:09.149 "method": "bdev_iscsi_set_options", 00:18:09.149 "params": { 00:18:09.149 "timeout_sec": 30 00:18:09.149 } 00:18:09.150 }, 00:18:09.150 { 00:18:09.150 "method": "bdev_nvme_set_options", 00:18:09.150 "params": { 00:18:09.150 "action_on_timeout": "none", 00:18:09.150 "timeout_us": 0, 00:18:09.150 "timeout_admin_us": 0, 00:18:09.150 "keep_alive_timeout_ms": 10000, 00:18:09.150 "arbitration_burst": 0, 
00:18:09.150 "low_priority_weight": 0, 00:18:09.150 "medium_priority_weight": 0, 00:18:09.150 "high_priority_weight": 0, 00:18:09.150 "nvme_adminq_poll_period_us": 10000, 00:18:09.150 "nvme_ioq_poll_period_us": 0, 00:18:09.150 "io_queue_requests": 512, 00:18:09.150 "delay_cmd_submit": true, 00:18:09.150 "transport_retry_count": 4, 00:18:09.150 "bdev_retry_count": 3, 00:18:09.150 "transport_ack_timeout": 0, 00:18:09.150 "ctrlr_loss_timeout_sec": 0, 00:18:09.150 "reconnect_delay_sec": 0, 00:18:09.150 "fast_io_fail_timeout_sec": 0, 00:18:09.150 "disable_auto_failback": false, 00:18:09.150 "generate_uuids": false, 00:18:09.150 "transport_tos": 0, 00:18:09.150 "nvme_error_stat": false, 00:18:09.150 "rdma_srq_size": 0, 00:18:09.150 "io_path_stat": false, 00:18:09.150 "allow_accel_sequence": false, 00:18:09.150 "rdma_max_cq_size": 0, 00:18:09.150 "rdma_cm_event_timeout_ms": 0, 00:18:09.150 "dhchap_digests": [ 00:18:09.150 "sha256", 00:18:09.150 "sha384", 00:18:09.150 "sha512" 00:18:09.150 ], 00:18:09.150 "dhchap_dhgroups": [ 00:18:09.150 "null", 00:18:09.150 "ffdhe2048", 00:18:09.150 "ffdhe3072", 00:18:09.150 "ffdhe4096", 00:18:09.150 "ffdhe6144", 00:18:09.150 "ffdhe8192" 00:18:09.150 ] 00:18:09.150 } 00:18:09.150 }, 00:18:09.150 { 00:18:09.150 "method": "bdev_nvme_attach_controller", 00:18:09.150 "params": { 00:18:09.150 "name": "TLSTEST", 00:18:09.150 "trtype": "TCP", 00:18:09.150 "adrfam": "IPv4", 00:18:09.150 "traddr": "10.0.0.2", 00:18:09.150 "trsvcid": "4420", 00:18:09.150 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:09.150 "prchk_reftag": false, 00:18:09.150 "prchk_guard": false, 00:18:09.150 "ctrlr_loss_timeout_sec": 0, 00:18:09.150 "reconnect_delay_sec": 0, 00:18:09.150 "fast_io_fail_timeout_sec": 0, 00:18:09.150 "psk": "/tmp/tmp.D3xpQwu1bD", 00:18:09.150 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:09.150 "hdgst": false, 00:18:09.150 "ddgst": false 00:18:09.150 } 00:18:09.150 }, 00:18:09.150 { 00:18:09.150 "method": "bdev_nvme_set_hotplug", 00:18:09.150 "params": { 00:18:09.150 "period_us": 100000, 00:18:09.150 "enable": false 00:18:09.150 } 00:18:09.150 }, 00:18:09.150 { 00:18:09.150 "method": "bdev_wait_for_examine" 00:18:09.150 } 00:18:09.150 ] 00:18:09.150 }, 00:18:09.150 { 00:18:09.150 "subsystem": "nbd", 00:18:09.150 "config": [] 00:18:09.150 } 00:18:09.150 ] 00:18:09.150 }' 00:18:09.150 17:07:08 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 1147369 00:18:09.150 17:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1147369 ']' 00:18:09.150 17:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1147369 00:18:09.150 17:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:09.150 17:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:09.150 17:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1147369 00:18:09.150 17:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:09.150 17:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:09.150 17:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1147369' 00:18:09.150 killing process with pid 1147369 00:18:09.150 17:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1147369 00:18:09.150 Received shutdown signal, test time was about 10.000000 seconds 00:18:09.150 00:18:09.150 Latency(us) 00:18:09.150 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:18:09.150 =================================================================================================================== 00:18:09.150 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:09.150 [2024-07-12 17:07:08.592644] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:09.150 17:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1147369 00:18:09.407 17:07:08 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 1147084 00:18:09.407 17:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1147084 ']' 00:18:09.407 17:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1147084 00:18:09.407 17:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:09.407 17:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:09.407 17:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1147084 00:18:09.407 17:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:09.407 17:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:09.407 17:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1147084' 00:18:09.407 killing process with pid 1147084 00:18:09.407 17:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1147084 00:18:09.407 [2024-07-12 17:07:08.882629] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:09.407 17:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1147084 00:18:09.666 17:07:09 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:09.666 17:07:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:09.666 17:07:09 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:18:09.666 "subsystems": [ 00:18:09.666 { 00:18:09.666 "subsystem": "keyring", 00:18:09.666 "config": [] 00:18:09.666 }, 00:18:09.666 { 00:18:09.666 "subsystem": "iobuf", 00:18:09.666 "config": [ 00:18:09.666 { 00:18:09.666 "method": "iobuf_set_options", 00:18:09.666 "params": { 00:18:09.666 "small_pool_count": 8192, 00:18:09.666 "large_pool_count": 1024, 00:18:09.666 "small_bufsize": 8192, 00:18:09.666 "large_bufsize": 135168 00:18:09.666 } 00:18:09.666 } 00:18:09.666 ] 00:18:09.666 }, 00:18:09.666 { 00:18:09.666 "subsystem": "sock", 00:18:09.666 "config": [ 00:18:09.666 { 00:18:09.666 "method": "sock_set_default_impl", 00:18:09.666 "params": { 00:18:09.666 "impl_name": "posix" 00:18:09.666 } 00:18:09.666 }, 00:18:09.666 { 00:18:09.666 "method": "sock_impl_set_options", 00:18:09.666 "params": { 00:18:09.666 "impl_name": "ssl", 00:18:09.666 "recv_buf_size": 4096, 00:18:09.666 "send_buf_size": 4096, 00:18:09.666 "enable_recv_pipe": true, 00:18:09.666 "enable_quickack": false, 00:18:09.666 "enable_placement_id": 0, 00:18:09.666 "enable_zerocopy_send_server": true, 00:18:09.666 "enable_zerocopy_send_client": false, 00:18:09.666 "zerocopy_threshold": 0, 00:18:09.666 "tls_version": 0, 00:18:09.666 "enable_ktls": false 00:18:09.666 } 00:18:09.666 }, 00:18:09.666 { 00:18:09.666 "method": "sock_impl_set_options", 00:18:09.666 "params": { 00:18:09.666 "impl_name": "posix", 00:18:09.666 "recv_buf_size": 2097152, 00:18:09.666 "send_buf_size": 2097152, 00:18:09.666 "enable_recv_pipe": true, 
00:18:09.666 "enable_quickack": false, 00:18:09.666 "enable_placement_id": 0, 00:18:09.666 "enable_zerocopy_send_server": true, 00:18:09.666 "enable_zerocopy_send_client": false, 00:18:09.666 "zerocopy_threshold": 0, 00:18:09.666 "tls_version": 0, 00:18:09.666 "enable_ktls": false 00:18:09.666 } 00:18:09.666 } 00:18:09.666 ] 00:18:09.666 }, 00:18:09.666 { 00:18:09.666 "subsystem": "vmd", 00:18:09.666 "config": [] 00:18:09.666 }, 00:18:09.666 { 00:18:09.666 "subsystem": "accel", 00:18:09.666 "config": [ 00:18:09.666 { 00:18:09.666 "method": "accel_set_options", 00:18:09.666 "params": { 00:18:09.666 "small_cache_size": 128, 00:18:09.666 "large_cache_size": 16, 00:18:09.666 "task_count": 2048, 00:18:09.666 "sequence_count": 2048, 00:18:09.666 "buf_count": 2048 00:18:09.666 } 00:18:09.666 } 00:18:09.666 ] 00:18:09.666 }, 00:18:09.666 { 00:18:09.666 "subsystem": "bdev", 00:18:09.666 "config": [ 00:18:09.666 { 00:18:09.666 "method": "bdev_set_options", 00:18:09.666 "params": { 00:18:09.666 "bdev_io_pool_size": 65535, 00:18:09.666 "bdev_io_cache_size": 256, 00:18:09.666 "bdev_auto_examine": true, 00:18:09.666 "iobuf_small_cache_size": 128, 00:18:09.666 "iobuf_large_cache_size": 16 00:18:09.666 } 00:18:09.666 }, 00:18:09.666 { 00:18:09.666 "method": "bdev_raid_set_options", 00:18:09.666 "params": { 00:18:09.666 "process_window_size_kb": 1024 00:18:09.666 } 00:18:09.666 }, 00:18:09.666 { 00:18:09.666 "method": "bdev_iscsi_set_options", 00:18:09.666 "params": { 00:18:09.666 "timeout_sec": 30 00:18:09.666 } 00:18:09.666 }, 00:18:09.666 { 00:18:09.666 "method": "bdev_nvme_set_options", 00:18:09.666 "params": { 00:18:09.666 "action_on_timeout": "none", 00:18:09.666 "timeout_us": 0, 00:18:09.666 "timeout_admin_us": 0, 00:18:09.666 "keep_alive_timeout_ms": 10000, 00:18:09.666 "arbitration_burst": 0, 00:18:09.666 "low_priority_weight": 0, 00:18:09.666 "medium_priority_weight": 0, 00:18:09.666 "high_priority_weight": 0, 00:18:09.666 "nvme_adminq_poll_period_us": 10000, 00:18:09.666 "nvme_ioq_poll_period_us": 0, 00:18:09.666 "io_queue_requests": 0, 00:18:09.666 "delay_cmd_submit": true, 00:18:09.666 "transport_retry_count": 4, 00:18:09.666 "bdev_retry_count": 3, 00:18:09.666 "transport_ack_timeout": 0, 00:18:09.666 "ctrlr_loss_timeout_sec": 0, 00:18:09.666 "reconnect_delay_sec": 0, 00:18:09.666 "fast_io_fail_timeout_sec": 0, 00:18:09.666 "disable_auto_failback": false, 00:18:09.666 "generate_uuids": false, 00:18:09.666 "transport_tos": 0, 00:18:09.666 "nvme_error_stat": false, 00:18:09.666 "rdma_srq_size": 0, 00:18:09.666 "io_path_stat": false, 00:18:09.666 "allow_accel_sequence": false, 00:18:09.666 "rdma_max_cq_size": 0, 00:18:09.666 "rdma_cm_event_timeout_ms": 0, 00:18:09.666 "dhchap_digests": [ 00:18:09.666 "sha256", 00:18:09.666 "sha384", 00:18:09.666 "sha512" 00:18:09.666 ], 00:18:09.666 "dhchap_dhgroups": [ 00:18:09.666 "null", 00:18:09.666 "ffdhe2048", 00:18:09.666 "ffdhe3072", 00:18:09.666 "ffdhe4096", 00:18:09.666 "ffdhe6144", 00:18:09.666 "ffdhe8192" 00:18:09.666 ] 00:18:09.666 } 00:18:09.666 }, 00:18:09.666 { 00:18:09.666 "method": "bdev_nvme_set_hotplug", 00:18:09.666 "params": { 00:18:09.666 "period_us": 100000, 00:18:09.666 "enable": false 00:18:09.666 } 00:18:09.666 }, 00:18:09.666 { 00:18:09.666 "method": "bdev_malloc_create", 00:18:09.666 "params": { 00:18:09.666 "name": "malloc0", 00:18:09.666 "num_blocks": 8192, 00:18:09.666 "block_size": 4096, 00:18:09.666 "physical_block_size": 4096, 00:18:09.666 "uuid": "0abd7645-6ac9-4a59-a5ad-cbab43a8f10b", 00:18:09.666 "optimal_io_boundary": 0 
00:18:09.666 } 00:18:09.666 }, 00:18:09.666 { 00:18:09.666 "method": "bdev_wait_for_examine" 00:18:09.666 } 00:18:09.666 ] 00:18:09.666 }, 00:18:09.666 { 00:18:09.666 "subsystem": "nbd", 00:18:09.666 "config": [] 00:18:09.666 }, 00:18:09.666 { 00:18:09.666 "subsystem": "scheduler", 00:18:09.666 "config": [ 00:18:09.666 { 00:18:09.666 "method": "framework_set_scheduler", 00:18:09.666 "params": { 00:18:09.666 "name": "static" 00:18:09.666 } 00:18:09.666 } 00:18:09.666 ] 00:18:09.666 }, 00:18:09.666 { 00:18:09.666 "subsystem": "nvmf", 00:18:09.666 "config": [ 00:18:09.666 { 00:18:09.666 "method": "nvmf_set_config", 00:18:09.666 "params": { 00:18:09.666 "discovery_filter": "match_any", 00:18:09.666 "admin_cmd_passthru": { 00:18:09.667 "identify_ctrlr": false 00:18:09.667 } 00:18:09.667 } 00:18:09.667 }, 00:18:09.667 { 00:18:09.667 "method": "nvmf_set_max_subsystems", 00:18:09.667 "params": { 00:18:09.667 "max_subsystems": 1024 00:18:09.667 } 00:18:09.667 }, 00:18:09.667 { 00:18:09.667 "method": "nvmf_set_crdt", 00:18:09.667 "params": { 00:18:09.667 "crdt1": 0, 00:18:09.667 "crdt2": 0, 00:18:09.667 "crdt3": 0 00:18:09.667 } 00:18:09.667 }, 00:18:09.667 { 00:18:09.667 "method": "nvmf_create_transport", 00:18:09.667 "params": { 00:18:09.667 "trtype": "TCP", 00:18:09.667 "max_queue_depth": 128, 00:18:09.667 "max_io_qpairs_per_ctrlr": 127, 00:18:09.667 "in_capsule_data_size": 4096, 00:18:09.667 "max_io_size": 131072, 00:18:09.667 "io_unit_size": 131072, 00:18:09.667 "max_aq_depth": 128, 00:18:09.667 "num_shared_buffers": 511, 00:18:09.667 "buf_cache_size": 4294967295, 00:18:09.667 "dif_insert_or_strip": false, 00:18:09.667 "zcopy": false, 00:18:09.667 "c2h_success": false, 00:18:09.667 "sock_priority": 0, 00:18:09.667 "abort_timeout_sec": 1, 00:18:09.667 "ack_timeout": 0, 00:18:09.667 "data_wr_pool_size": 0 00:18:09.667 } 00:18:09.667 }, 00:18:09.667 { 00:18:09.667 "method": "nvmf_create_subsystem", 00:18:09.667 "params": { 00:18:09.667 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:09.667 "allow_any_host": false, 00:18:09.667 "serial_number": "SPDK00000000000001", 00:18:09.667 "model_number": "SPDK bdev Controller", 00:18:09.667 "max_namespaces": 10, 00:18:09.667 "min_cntlid": 1, 00:18:09.667 "max_cntlid": 65519, 00:18:09.667 "ana_reporting": false 00:18:09.667 } 00:18:09.667 }, 00:18:09.667 { 00:18:09.667 "method": "nvmf_subsystem_add_host", 00:18:09.667 "params": { 00:18:09.667 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:09.667 "host": "nqn.2016-06.io.spdk:host1", 00:18:09.667 "psk": "/tmp/tmp.D3xpQwu1bD" 00:18:09.667 } 00:18:09.667 }, 00:18:09.667 { 00:18:09.667 "method": "nvmf_subsystem_add_ns", 00:18:09.667 "params": { 00:18:09.667 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:09.667 "namespace": { 00:18:09.667 "nsid": 1, 00:18:09.667 "bdev_name": "malloc0", 00:18:09.667 "nguid": "0ABD76456AC94A59A5ADCBAB43A8F10B", 00:18:09.667 "uuid": "0abd7645-6ac9-4a59-a5ad-cbab43a8f10b", 00:18:09.667 "no_auto_visible": false 00:18:09.667 } 00:18:09.667 } 00:18:09.667 }, 00:18:09.667 { 00:18:09.667 "method": "nvmf_subsystem_add_listener", 00:18:09.667 "params": { 00:18:09.667 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:09.667 "listen_address": { 00:18:09.667 "trtype": "TCP", 00:18:09.667 "adrfam": "IPv4", 00:18:09.667 "traddr": "10.0.0.2", 00:18:09.667 "trsvcid": "4420" 00:18:09.667 }, 00:18:09.667 "secure_channel": true 00:18:09.667 } 00:18:09.667 } 00:18:09.667 ] 00:18:09.667 } 00:18:09.667 ] 00:18:09.667 }' 00:18:09.667 17:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:09.667 
17:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:09.667 17:07:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1147642 00:18:09.667 17:07:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:09.667 17:07:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1147642 00:18:09.667 17:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1147642 ']' 00:18:09.667 17:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.667 17:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:09.667 17:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:09.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:09.667 17:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:09.667 17:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:09.667 [2024-07-12 17:07:09.209375] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:18:09.667 [2024-07-12 17:07:09.209453] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:09.667 EAL: No free 2048 kB hugepages reported on node 1 00:18:09.667 [2024-07-12 17:07:09.272008] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.924 [2024-07-12 17:07:09.380623] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:09.924 [2024-07-12 17:07:09.380680] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:09.924 [2024-07-12 17:07:09.380710] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:09.924 [2024-07-12 17:07:09.380722] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:09.924 [2024-07-12 17:07:09.380732] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
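Note on the '-c /dev/fd/6x' flags in this phase (nvmf_tgt at target/tls.sh@203 above, bdevperf at @204 below): each application reads its JSON configuration from a file descriptor rather than a file on disk, and the JSON itself is the string echoed in the trace. A condensed sketch of the pattern follows; it assumes bash process substitution is what places the echoed string on /dev/fd/62 and /dev/fd/63, and the variable names are placeholders rather than the script's own.
# Target side (traced as 'ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62'):
tgtconf='{ "subsystems": [ ... ] }'        # the JSON echoed at target/tls.sh@203
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 \
    -c <(echo "$tgtconf") &
# bdevperf side (traced as 'bdevperf ... -t 10 -c /dev/fd/63'); its config already carries the
# bdev_nvme_attach_controller entry with "psk": "/tmp/tmp.D3xpQwu1bD", so the TLS-protected
# controller is created while the config is loaded and no extra RPC is needed before the run:
bdevperfconf='{ "subsystems": [ ... ] }'   # the JSON echoed at target/tls.sh@204
./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 \
    -w verify -t 10 -c <(echo "$bdevperfconf") &
./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests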
00:18:09.924 [2024-07-12 17:07:09.380827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:09.924 [2024-07-12 17:07:09.610195] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:10.182 [2024-07-12 17:07:09.626166] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:10.182 [2024-07-12 17:07:09.642226] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:10.182 [2024-07-12 17:07:09.650931] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:10.748 17:07:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:10.748 17:07:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:10.748 17:07:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:10.748 17:07:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:10.748 17:07:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:10.748 17:07:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:10.748 17:07:10 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1147795 00:18:10.748 17:07:10 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1147795 /var/tmp/bdevperf.sock 00:18:10.748 17:07:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1147795 ']' 00:18:10.748 17:07:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:10.748 17:07:10 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:10.748 17:07:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:10.748 17:07:10 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:18:10.748 "subsystems": [ 00:18:10.748 { 00:18:10.748 "subsystem": "keyring", 00:18:10.748 "config": [] 00:18:10.748 }, 00:18:10.748 { 00:18:10.748 "subsystem": "iobuf", 00:18:10.748 "config": [ 00:18:10.748 { 00:18:10.748 "method": "iobuf_set_options", 00:18:10.748 "params": { 00:18:10.748 "small_pool_count": 8192, 00:18:10.748 "large_pool_count": 1024, 00:18:10.748 "small_bufsize": 8192, 00:18:10.748 "large_bufsize": 135168 00:18:10.748 } 00:18:10.748 } 00:18:10.748 ] 00:18:10.748 }, 00:18:10.748 { 00:18:10.748 "subsystem": "sock", 00:18:10.748 "config": [ 00:18:10.748 { 00:18:10.748 "method": "sock_set_default_impl", 00:18:10.748 "params": { 00:18:10.748 "impl_name": "posix" 00:18:10.748 } 00:18:10.748 }, 00:18:10.748 { 00:18:10.748 "method": "sock_impl_set_options", 00:18:10.748 "params": { 00:18:10.748 "impl_name": "ssl", 00:18:10.748 "recv_buf_size": 4096, 00:18:10.748 "send_buf_size": 4096, 00:18:10.748 "enable_recv_pipe": true, 00:18:10.748 "enable_quickack": false, 00:18:10.748 "enable_placement_id": 0, 00:18:10.748 "enable_zerocopy_send_server": true, 00:18:10.748 "enable_zerocopy_send_client": false, 00:18:10.748 "zerocopy_threshold": 0, 00:18:10.748 "tls_version": 0, 00:18:10.748 "enable_ktls": false 00:18:10.748 } 00:18:10.748 }, 00:18:10.748 { 00:18:10.748 "method": "sock_impl_set_options", 00:18:10.748 "params": { 00:18:10.748 "impl_name": "posix", 00:18:10.748 "recv_buf_size": 2097152, 00:18:10.748 "send_buf_size": 2097152, 00:18:10.748 "enable_recv_pipe": true, 00:18:10.748 
"enable_quickack": false, 00:18:10.748 "enable_placement_id": 0, 00:18:10.748 "enable_zerocopy_send_server": true, 00:18:10.748 "enable_zerocopy_send_client": false, 00:18:10.748 "zerocopy_threshold": 0, 00:18:10.748 "tls_version": 0, 00:18:10.748 "enable_ktls": false 00:18:10.748 } 00:18:10.748 } 00:18:10.748 ] 00:18:10.748 }, 00:18:10.748 { 00:18:10.748 "subsystem": "vmd", 00:18:10.748 "config": [] 00:18:10.748 }, 00:18:10.748 { 00:18:10.748 "subsystem": "accel", 00:18:10.748 "config": [ 00:18:10.748 { 00:18:10.748 "method": "accel_set_options", 00:18:10.748 "params": { 00:18:10.748 "small_cache_size": 128, 00:18:10.748 "large_cache_size": 16, 00:18:10.748 "task_count": 2048, 00:18:10.748 "sequence_count": 2048, 00:18:10.748 "buf_count": 2048 00:18:10.748 } 00:18:10.748 } 00:18:10.748 ] 00:18:10.748 }, 00:18:10.748 { 00:18:10.748 "subsystem": "bdev", 00:18:10.748 "config": [ 00:18:10.748 { 00:18:10.748 "method": "bdev_set_options", 00:18:10.748 "params": { 00:18:10.748 "bdev_io_pool_size": 65535, 00:18:10.748 "bdev_io_cache_size": 256, 00:18:10.748 "bdev_auto_examine": true, 00:18:10.748 "iobuf_small_cache_size": 128, 00:18:10.748 "iobuf_large_cache_size": 16 00:18:10.748 } 00:18:10.748 }, 00:18:10.748 { 00:18:10.748 "method": "bdev_raid_set_options", 00:18:10.748 "params": { 00:18:10.748 "process_window_size_kb": 1024 00:18:10.748 } 00:18:10.748 }, 00:18:10.748 { 00:18:10.748 "method": "bdev_iscsi_set_options", 00:18:10.748 "params": { 00:18:10.748 "timeout_sec": 30 00:18:10.748 } 00:18:10.748 }, 00:18:10.748 { 00:18:10.748 "method": "bdev_nvme_set_options", 00:18:10.748 "params": { 00:18:10.748 "action_on_timeout": "none", 00:18:10.748 "timeout_us": 0, 00:18:10.748 "timeout_admin_us": 0, 00:18:10.748 "keep_alive_timeout_ms": 10000, 00:18:10.748 "arbitration_burst": 0, 00:18:10.748 "low_priority_weight": 0, 00:18:10.749 "medium_priority_weight": 0, 00:18:10.749 "high_priority_weight": 0, 00:18:10.749 "nvme_adminq_poll_period_us": 10000, 00:18:10.749 "nvme_ioq_poll_period_us": 0, 00:18:10.749 "io_queue_requests": 512, 00:18:10.749 "delay_cmd_submit": true, 00:18:10.749 "transport_retry_count": 4, 00:18:10.749 "bdev_retry_count": 3, 00:18:10.749 "transport_ack_timeout": 0, 00:18:10.749 "ctrlr_loss_timeout_sec": 0, 00:18:10.749 "reconnect_delay_sec": 0, 00:18:10.749 "fast_io_fail_timeout_sec": 0, 00:18:10.749 "disable_auto_failback": false, 00:18:10.749 "generate_uuids": false, 00:18:10.749 "transport_tos": 0, 00:18:10.749 "nvme_error_stat": false, 00:18:10.749 "rdma_srq_size": 0, 00:18:10.749 "io_path_stat": false, 00:18:10.749 "allow_accel_sequence": false, 00:18:10.749 "rdma_max_cq_size": 0, 00:18:10.749 "rdma_cm_event_timeout_ms": 0, 00:18:10.749 "dhchap_digests": [ 00:18:10.749 "sha256", 00:18:10.749 "sha384", 00:18:10.749 "sha512" 00:18:10.749 ], 00:18:10.749 "dhchap_dhgroups": [ 00:18:10.749 "null", 00:18:10.749 "ffdhe2048", 00:18:10.749 "ffdhe3072", 00:18:10.749 "ffdhe4096", 00:18:10.749 "ffdhe6144", 00:18:10.749 "ffdhe8192" 00:18:10.749 ] 00:18:10.749 } 00:18:10.749 }, 00:18:10.749 { 00:18:10.749 "method": "bdev_nvme_attach_controller", 00:18:10.749 "params": { 00:18:10.749 "name": "TLSTEST", 00:18:10.749 "trtype": "TCP", 00:18:10.749 "adrfam": "IPv4", 00:18:10.749 "traddr": "10.0.0.2", 00:18:10.749 "trsvcid": "4420", 00:18:10.749 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:10.749 "prchk_reftag": false, 00:18:10.749 "prchk_guard": false, 00:18:10.749 "ctrlr_loss_timeout_sec": 0, 00:18:10.749 "reconnect_delay_sec": 0, 00:18:10.749 "fast_io_fail_timeout_sec": 0, 00:18:10.749 
"psk": "/tmp/tmp.D3xpQwu1bD", 00:18:10.749 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:10.749 "hdgst": false, 00:18:10.749 "ddgst": false 00:18:10.749 } 00:18:10.749 }, 00:18:10.749 { 00:18:10.749 "method": "bdev_nvme_set_hotplug", 00:18:10.749 "params": { 00:18:10.749 "period_us": 100000, 00:18:10.749 "enable": false 00:18:10.749 } 00:18:10.749 }, 00:18:10.749 { 00:18:10.749 "method": "bdev_wait_for_examine" 00:18:10.749 } 00:18:10.749 ] 00:18:10.749 }, 00:18:10.749 { 00:18:10.749 "subsystem": "nbd", 00:18:10.749 "config": [] 00:18:10.749 } 00:18:10.749 ] 00:18:10.749 }' 00:18:10.749 17:07:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:10.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:10.749 17:07:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:10.749 17:07:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:10.749 [2024-07-12 17:07:10.278213] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:18:10.749 [2024-07-12 17:07:10.278295] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1147795 ] 00:18:10.749 EAL: No free 2048 kB hugepages reported on node 1 00:18:10.749 [2024-07-12 17:07:10.338096] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.007 [2024-07-12 17:07:10.450500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:11.007 [2024-07-12 17:07:10.620946] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:11.007 [2024-07-12 17:07:10.621138] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:11.939 17:07:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:11.939 17:07:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:11.939 17:07:11 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:11.939 Running I/O for 10 seconds... 
00:18:21.919 00:18:21.919 Latency(us) 00:18:21.919 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:21.919 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:21.919 Verification LBA range: start 0x0 length 0x2000 00:18:21.919 TLSTESTn1 : 10.02 3565.57 13.93 0.00 0.00 35831.95 6602.15 37671.06 00:18:21.919 =================================================================================================================== 00:18:21.919 Total : 3565.57 13.93 0.00 0.00 35831.95 6602.15 37671.06 00:18:21.919 0 00:18:21.919 17:07:21 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:21.919 17:07:21 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 1147795 00:18:21.919 17:07:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1147795 ']' 00:18:21.919 17:07:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1147795 00:18:21.919 17:07:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:21.919 17:07:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:21.919 17:07:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1147795 00:18:21.919 17:07:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:21.919 17:07:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:21.919 17:07:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1147795' 00:18:21.919 killing process with pid 1147795 00:18:21.919 17:07:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1147795 00:18:21.919 Received shutdown signal, test time was about 10.000000 seconds 00:18:21.919 00:18:21.919 Latency(us) 00:18:21.919 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:21.919 =================================================================================================================== 00:18:21.919 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:21.919 [2024-07-12 17:07:21.479114] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:21.919 17:07:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1147795 00:18:22.208 17:07:21 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 1147642 00:18:22.208 17:07:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1147642 ']' 00:18:22.208 17:07:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1147642 00:18:22.208 17:07:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:22.208 17:07:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:22.208 17:07:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1147642 00:18:22.208 17:07:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:22.208 17:07:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:22.208 17:07:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1147642' 00:18:22.208 killing process with pid 1147642 00:18:22.208 17:07:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1147642 00:18:22.208 [2024-07-12 17:07:21.773395] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal 
in v24.09 hit 1 times 00:18:22.208 17:07:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1147642 00:18:22.495 17:07:22 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:18:22.495 17:07:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:22.495 17:07:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:22.495 17:07:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:22.495 17:07:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1149136 00:18:22.495 17:07:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:22.495 17:07:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1149136 00:18:22.495 17:07:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1149136 ']' 00:18:22.495 17:07:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.495 17:07:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:22.495 17:07:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:22.495 17:07:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:22.495 17:07:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:22.495 [2024-07-12 17:07:22.102959] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:18:22.495 [2024-07-12 17:07:22.103040] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:22.495 EAL: No free 2048 kB hugepages reported on node 1 00:18:22.495 [2024-07-12 17:07:22.165965] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.753 [2024-07-12 17:07:22.272426] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:22.753 [2024-07-12 17:07:22.272479] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:22.753 [2024-07-12 17:07:22.272508] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:22.753 [2024-07-12 17:07:22.272519] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:22.753 [2024-07-12 17:07:22.272529] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:22.753 [2024-07-12 17:07:22.272555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.753 17:07:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:22.753 17:07:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:22.753 17:07:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:22.753 17:07:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:22.753 17:07:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:22.753 17:07:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:22.753 17:07:22 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.D3xpQwu1bD 00:18:22.753 17:07:22 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.D3xpQwu1bD 00:18:22.753 17:07:22 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:23.010 [2024-07-12 17:07:22.627646] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:23.010 17:07:22 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:23.268 17:07:22 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:23.524 [2024-07-12 17:07:23.116932] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:23.524 [2024-07-12 17:07:23.117179] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:23.524 17:07:23 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:23.780 malloc0 00:18:23.780 17:07:23 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:24.036 17:07:23 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.D3xpQwu1bD 00:18:24.293 [2024-07-12 17:07:23.856777] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:24.293 17:07:23 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1149413 00:18:24.293 17:07:23 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:24.293 17:07:23 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:24.293 17:07:23 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1149413 /var/tmp/bdevperf.sock 00:18:24.293 17:07:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1149413 ']' 00:18:24.293 17:07:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:24.293 17:07:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:24.293 17:07:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:24.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:24.293 17:07:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:24.293 17:07:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:24.293 [2024-07-12 17:07:23.913105] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:18:24.293 [2024-07-12 17:07:23.913174] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1149413 ] 00:18:24.293 EAL: No free 2048 kB hugepages reported on node 1 00:18:24.293 [2024-07-12 17:07:23.971971] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.551 [2024-07-12 17:07:24.081122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:24.551 17:07:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:24.551 17:07:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:24.551 17:07:24 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.D3xpQwu1bD 00:18:24.808 17:07:24 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:25.065 [2024-07-12 17:07:24.664599] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:25.065 nvme0n1 00:18:25.065 17:07:24 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:25.322 Running I/O for 1 seconds... 
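Condensed, the TLS setup traced in this phase (target side at target/tls.sh@51-58 above, initiator side at @227-228) comes down to the RPC sequence below. Addresses, NQNs and the PSK path are copied from the trace; rpc.py is scripts/rpc.py in the SPDK tree, and this is a readability sketch rather than the script's literal code.
rpc=./scripts/rpc.py
# Target: TCP transport, subsystem, TLS-enabled listener (-k), malloc namespace, allowed host + PSK.
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.D3xpQwu1bD
# Initiator (bdevperf): register the same PSK file as keyring key0, then attach with it.
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.D3xpQwu1bD
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1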
00:18:26.254 00:18:26.254 Latency(us) 00:18:26.254 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:26.254 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:26.254 Verification LBA range: start 0x0 length 0x2000 00:18:26.254 nvme0n1 : 1.02 3416.56 13.35 0.00 0.00 37086.60 6140.97 56312.41 00:18:26.254 =================================================================================================================== 00:18:26.254 Total : 3416.56 13.35 0.00 0.00 37086.60 6140.97 56312.41 00:18:26.254 0 00:18:26.254 17:07:25 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 1149413 00:18:26.254 17:07:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1149413 ']' 00:18:26.254 17:07:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1149413 00:18:26.254 17:07:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:26.254 17:07:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:26.254 17:07:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1149413 00:18:26.254 17:07:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:26.254 17:07:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:26.254 17:07:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1149413' 00:18:26.254 killing process with pid 1149413 00:18:26.254 17:07:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1149413 00:18:26.254 Received shutdown signal, test time was about 1.000000 seconds 00:18:26.254 00:18:26.254 Latency(us) 00:18:26.254 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:26.254 =================================================================================================================== 00:18:26.254 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:26.254 17:07:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1149413 00:18:26.512 17:07:26 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 1149136 00:18:26.512 17:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1149136 ']' 00:18:26.512 17:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1149136 00:18:26.512 17:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:26.512 17:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:26.512 17:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1149136 00:18:26.770 17:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:26.770 17:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:26.770 17:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1149136' 00:18:26.770 killing process with pid 1149136 00:18:26.770 17:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1149136 00:18:26.770 [2024-07-12 17:07:26.214147] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:26.770 17:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1149136 00:18:27.028 17:07:26 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:18:27.028 17:07:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:27.028 
17:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:27.028 17:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:27.028 17:07:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1149691 00:18:27.028 17:07:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:27.028 17:07:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1149691 00:18:27.028 17:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1149691 ']' 00:18:27.028 17:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.028 17:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:27.028 17:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:27.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:27.028 17:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:27.028 17:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:27.028 [2024-07-12 17:07:26.540201] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:18:27.029 [2024-07-12 17:07:26.540295] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:27.029 EAL: No free 2048 kB hugepages reported on node 1 00:18:27.029 [2024-07-12 17:07:26.605000] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.029 [2024-07-12 17:07:26.714905] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:27.029 [2024-07-12 17:07:26.714967] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:27.029 [2024-07-12 17:07:26.714981] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:27.029 [2024-07-12 17:07:26.714993] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:27.029 [2024-07-12 17:07:26.715003] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
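The final phase (target/tls.sh@238 onward, traced below) exercises a configuration round-trip: the target and bdevperf are configured over RPC, their live JSON configurations are captured with save_config into the tgtcfg and bperfcfg strings shown later in the trace, and tgtcfg is then fed back into a fresh target at @269 via '-c /dev/fd/62'. A minimal sketch of that capture/replay; rpc_cmd and nvmfappstart in the trace are autotest helpers, so the sketch uses the underlying rpc.py and nvmf_tgt calls directly and again assumes process substitution carries the echoed string.
# Capture the running apps' configuration as JSON (traced at target/tls.sh@263 and @264):
tgtcfg=$(./scripts/rpc.py save_config)                              # target, default RPC socket
bperfcfg=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)  # bdevperf instance
# Replay: restart the target from the captured JSON (traced at @269 as '-c /dev/fd/62'):
./build/bin/nvmf_tgt -c <(echo "$tgtcfg") &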
00:18:27.029 [2024-07-12 17:07:26.715033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.286 17:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:27.286 17:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:27.286 17:07:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:27.286 17:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:27.286 17:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:27.286 17:07:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:27.286 17:07:26 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:18:27.286 17:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.286 17:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:27.286 [2024-07-12 17:07:26.851956] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:27.286 malloc0 00:18:27.286 [2024-07-12 17:07:26.882875] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:27.286 [2024-07-12 17:07:26.883142] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:27.286 17:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.286 17:07:26 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=1149839 00:18:27.286 17:07:26 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:27.286 17:07:26 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 1149839 /var/tmp/bdevperf.sock 00:18:27.286 17:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1149839 ']' 00:18:27.286 17:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:27.287 17:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:27.287 17:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:27.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:27.287 17:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:27.287 17:07:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:27.287 [2024-07-12 17:07:26.950000] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
00:18:27.287 [2024-07-12 17:07:26.950076] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1149839 ] 00:18:27.287 EAL: No free 2048 kB hugepages reported on node 1 00:18:27.544 [2024-07-12 17:07:27.007785] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.544 [2024-07-12 17:07:27.112431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:27.544 17:07:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:27.544 17:07:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:27.544 17:07:27 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.D3xpQwu1bD 00:18:27.802 17:07:27 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:28.060 [2024-07-12 17:07:27.677752] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:28.060 nvme0n1 00:18:28.317 17:07:27 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:28.317 Running I/O for 1 seconds... 00:18:29.249 00:18:29.249 Latency(us) 00:18:29.249 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:29.249 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:29.249 Verification LBA range: start 0x0 length 0x2000 00:18:29.249 nvme0n1 : 1.03 3430.56 13.40 0.00 0.00 36806.35 9757.58 43884.85 00:18:29.249 =================================================================================================================== 00:18:29.249 Total : 3430.56 13.40 0.00 0.00 36806.35 9757.58 43884.85 00:18:29.249 0 00:18:29.249 17:07:28 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:18:29.249 17:07:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.249 17:07:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:29.506 17:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.506 17:07:29 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:18:29.506 "subsystems": [ 00:18:29.506 { 00:18:29.506 "subsystem": "keyring", 00:18:29.506 "config": [ 00:18:29.506 { 00:18:29.506 "method": "keyring_file_add_key", 00:18:29.506 "params": { 00:18:29.506 "name": "key0", 00:18:29.506 "path": "/tmp/tmp.D3xpQwu1bD" 00:18:29.506 } 00:18:29.506 } 00:18:29.506 ] 00:18:29.506 }, 00:18:29.506 { 00:18:29.506 "subsystem": "iobuf", 00:18:29.506 "config": [ 00:18:29.506 { 00:18:29.506 "method": "iobuf_set_options", 00:18:29.506 "params": { 00:18:29.506 "small_pool_count": 8192, 00:18:29.506 "large_pool_count": 1024, 00:18:29.506 "small_bufsize": 8192, 00:18:29.506 "large_bufsize": 135168 00:18:29.506 } 00:18:29.506 } 00:18:29.506 ] 00:18:29.506 }, 00:18:29.506 { 00:18:29.506 "subsystem": "sock", 00:18:29.506 "config": [ 00:18:29.506 { 00:18:29.506 "method": "sock_set_default_impl", 00:18:29.506 "params": { 00:18:29.506 "impl_name": "posix" 00:18:29.506 } 
00:18:29.506 }, 00:18:29.506 { 00:18:29.506 "method": "sock_impl_set_options", 00:18:29.506 "params": { 00:18:29.506 "impl_name": "ssl", 00:18:29.506 "recv_buf_size": 4096, 00:18:29.506 "send_buf_size": 4096, 00:18:29.506 "enable_recv_pipe": true, 00:18:29.506 "enable_quickack": false, 00:18:29.506 "enable_placement_id": 0, 00:18:29.506 "enable_zerocopy_send_server": true, 00:18:29.506 "enable_zerocopy_send_client": false, 00:18:29.506 "zerocopy_threshold": 0, 00:18:29.506 "tls_version": 0, 00:18:29.506 "enable_ktls": false 00:18:29.506 } 00:18:29.506 }, 00:18:29.506 { 00:18:29.506 "method": "sock_impl_set_options", 00:18:29.506 "params": { 00:18:29.506 "impl_name": "posix", 00:18:29.506 "recv_buf_size": 2097152, 00:18:29.506 "send_buf_size": 2097152, 00:18:29.506 "enable_recv_pipe": true, 00:18:29.506 "enable_quickack": false, 00:18:29.506 "enable_placement_id": 0, 00:18:29.506 "enable_zerocopy_send_server": true, 00:18:29.506 "enable_zerocopy_send_client": false, 00:18:29.506 "zerocopy_threshold": 0, 00:18:29.506 "tls_version": 0, 00:18:29.506 "enable_ktls": false 00:18:29.506 } 00:18:29.506 } 00:18:29.506 ] 00:18:29.506 }, 00:18:29.506 { 00:18:29.506 "subsystem": "vmd", 00:18:29.506 "config": [] 00:18:29.506 }, 00:18:29.506 { 00:18:29.506 "subsystem": "accel", 00:18:29.506 "config": [ 00:18:29.506 { 00:18:29.506 "method": "accel_set_options", 00:18:29.506 "params": { 00:18:29.506 "small_cache_size": 128, 00:18:29.506 "large_cache_size": 16, 00:18:29.506 "task_count": 2048, 00:18:29.506 "sequence_count": 2048, 00:18:29.506 "buf_count": 2048 00:18:29.506 } 00:18:29.506 } 00:18:29.506 ] 00:18:29.506 }, 00:18:29.506 { 00:18:29.506 "subsystem": "bdev", 00:18:29.506 "config": [ 00:18:29.506 { 00:18:29.506 "method": "bdev_set_options", 00:18:29.506 "params": { 00:18:29.506 "bdev_io_pool_size": 65535, 00:18:29.506 "bdev_io_cache_size": 256, 00:18:29.506 "bdev_auto_examine": true, 00:18:29.506 "iobuf_small_cache_size": 128, 00:18:29.506 "iobuf_large_cache_size": 16 00:18:29.506 } 00:18:29.506 }, 00:18:29.506 { 00:18:29.506 "method": "bdev_raid_set_options", 00:18:29.506 "params": { 00:18:29.506 "process_window_size_kb": 1024 00:18:29.506 } 00:18:29.506 }, 00:18:29.506 { 00:18:29.507 "method": "bdev_iscsi_set_options", 00:18:29.507 "params": { 00:18:29.507 "timeout_sec": 30 00:18:29.507 } 00:18:29.507 }, 00:18:29.507 { 00:18:29.507 "method": "bdev_nvme_set_options", 00:18:29.507 "params": { 00:18:29.507 "action_on_timeout": "none", 00:18:29.507 "timeout_us": 0, 00:18:29.507 "timeout_admin_us": 0, 00:18:29.507 "keep_alive_timeout_ms": 10000, 00:18:29.507 "arbitration_burst": 0, 00:18:29.507 "low_priority_weight": 0, 00:18:29.507 "medium_priority_weight": 0, 00:18:29.507 "high_priority_weight": 0, 00:18:29.507 "nvme_adminq_poll_period_us": 10000, 00:18:29.507 "nvme_ioq_poll_period_us": 0, 00:18:29.507 "io_queue_requests": 0, 00:18:29.507 "delay_cmd_submit": true, 00:18:29.507 "transport_retry_count": 4, 00:18:29.507 "bdev_retry_count": 3, 00:18:29.507 "transport_ack_timeout": 0, 00:18:29.507 "ctrlr_loss_timeout_sec": 0, 00:18:29.507 "reconnect_delay_sec": 0, 00:18:29.507 "fast_io_fail_timeout_sec": 0, 00:18:29.507 "disable_auto_failback": false, 00:18:29.507 "generate_uuids": false, 00:18:29.507 "transport_tos": 0, 00:18:29.507 "nvme_error_stat": false, 00:18:29.507 "rdma_srq_size": 0, 00:18:29.507 "io_path_stat": false, 00:18:29.507 "allow_accel_sequence": false, 00:18:29.507 "rdma_max_cq_size": 0, 00:18:29.507 "rdma_cm_event_timeout_ms": 0, 00:18:29.507 "dhchap_digests": [ 00:18:29.507 "sha256", 
00:18:29.507 "sha384", 00:18:29.507 "sha512" 00:18:29.507 ], 00:18:29.507 "dhchap_dhgroups": [ 00:18:29.507 "null", 00:18:29.507 "ffdhe2048", 00:18:29.507 "ffdhe3072", 00:18:29.507 "ffdhe4096", 00:18:29.507 "ffdhe6144", 00:18:29.507 "ffdhe8192" 00:18:29.507 ] 00:18:29.507 } 00:18:29.507 }, 00:18:29.507 { 00:18:29.507 "method": "bdev_nvme_set_hotplug", 00:18:29.507 "params": { 00:18:29.507 "period_us": 100000, 00:18:29.507 "enable": false 00:18:29.507 } 00:18:29.507 }, 00:18:29.507 { 00:18:29.507 "method": "bdev_malloc_create", 00:18:29.507 "params": { 00:18:29.507 "name": "malloc0", 00:18:29.507 "num_blocks": 8192, 00:18:29.507 "block_size": 4096, 00:18:29.507 "physical_block_size": 4096, 00:18:29.507 "uuid": "9e6cf933-2461-4ed4-80b2-f4e25a848808", 00:18:29.507 "optimal_io_boundary": 0 00:18:29.507 } 00:18:29.507 }, 00:18:29.507 { 00:18:29.507 "method": "bdev_wait_for_examine" 00:18:29.507 } 00:18:29.507 ] 00:18:29.507 }, 00:18:29.507 { 00:18:29.507 "subsystem": "nbd", 00:18:29.507 "config": [] 00:18:29.507 }, 00:18:29.507 { 00:18:29.507 "subsystem": "scheduler", 00:18:29.507 "config": [ 00:18:29.507 { 00:18:29.507 "method": "framework_set_scheduler", 00:18:29.507 "params": { 00:18:29.507 "name": "static" 00:18:29.507 } 00:18:29.507 } 00:18:29.507 ] 00:18:29.507 }, 00:18:29.507 { 00:18:29.507 "subsystem": "nvmf", 00:18:29.507 "config": [ 00:18:29.507 { 00:18:29.507 "method": "nvmf_set_config", 00:18:29.507 "params": { 00:18:29.507 "discovery_filter": "match_any", 00:18:29.507 "admin_cmd_passthru": { 00:18:29.507 "identify_ctrlr": false 00:18:29.507 } 00:18:29.507 } 00:18:29.507 }, 00:18:29.507 { 00:18:29.507 "method": "nvmf_set_max_subsystems", 00:18:29.507 "params": { 00:18:29.507 "max_subsystems": 1024 00:18:29.507 } 00:18:29.507 }, 00:18:29.507 { 00:18:29.507 "method": "nvmf_set_crdt", 00:18:29.507 "params": { 00:18:29.507 "crdt1": 0, 00:18:29.507 "crdt2": 0, 00:18:29.507 "crdt3": 0 00:18:29.507 } 00:18:29.507 }, 00:18:29.507 { 00:18:29.507 "method": "nvmf_create_transport", 00:18:29.507 "params": { 00:18:29.507 "trtype": "TCP", 00:18:29.507 "max_queue_depth": 128, 00:18:29.507 "max_io_qpairs_per_ctrlr": 127, 00:18:29.507 "in_capsule_data_size": 4096, 00:18:29.507 "max_io_size": 131072, 00:18:29.507 "io_unit_size": 131072, 00:18:29.507 "max_aq_depth": 128, 00:18:29.507 "num_shared_buffers": 511, 00:18:29.507 "buf_cache_size": 4294967295, 00:18:29.507 "dif_insert_or_strip": false, 00:18:29.507 "zcopy": false, 00:18:29.507 "c2h_success": false, 00:18:29.507 "sock_priority": 0, 00:18:29.507 "abort_timeout_sec": 1, 00:18:29.507 "ack_timeout": 0, 00:18:29.507 "data_wr_pool_size": 0 00:18:29.507 } 00:18:29.507 }, 00:18:29.507 { 00:18:29.507 "method": "nvmf_create_subsystem", 00:18:29.507 "params": { 00:18:29.507 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:29.507 "allow_any_host": false, 00:18:29.507 "serial_number": "00000000000000000000", 00:18:29.507 "model_number": "SPDK bdev Controller", 00:18:29.507 "max_namespaces": 32, 00:18:29.507 "min_cntlid": 1, 00:18:29.507 "max_cntlid": 65519, 00:18:29.507 "ana_reporting": false 00:18:29.507 } 00:18:29.507 }, 00:18:29.507 { 00:18:29.507 "method": "nvmf_subsystem_add_host", 00:18:29.507 "params": { 00:18:29.507 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:29.507 "host": "nqn.2016-06.io.spdk:host1", 00:18:29.507 "psk": "key0" 00:18:29.507 } 00:18:29.507 }, 00:18:29.507 { 00:18:29.507 "method": "nvmf_subsystem_add_ns", 00:18:29.507 "params": { 00:18:29.507 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:29.507 "namespace": { 00:18:29.507 "nsid": 1, 
00:18:29.507 "bdev_name": "malloc0", 00:18:29.507 "nguid": "9E6CF93324614ED480B2F4E25A848808", 00:18:29.507 "uuid": "9e6cf933-2461-4ed4-80b2-f4e25a848808", 00:18:29.507 "no_auto_visible": false 00:18:29.507 } 00:18:29.507 } 00:18:29.507 }, 00:18:29.507 { 00:18:29.507 "method": "nvmf_subsystem_add_listener", 00:18:29.507 "params": { 00:18:29.507 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:29.507 "listen_address": { 00:18:29.507 "trtype": "TCP", 00:18:29.507 "adrfam": "IPv4", 00:18:29.507 "traddr": "10.0.0.2", 00:18:29.507 "trsvcid": "4420" 00:18:29.507 }, 00:18:29.507 "secure_channel": true 00:18:29.507 } 00:18:29.507 } 00:18:29.507 ] 00:18:29.507 } 00:18:29.507 ] 00:18:29.507 }' 00:18:29.507 17:07:29 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:29.765 17:07:29 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:18:29.765 "subsystems": [ 00:18:29.765 { 00:18:29.765 "subsystem": "keyring", 00:18:29.765 "config": [ 00:18:29.765 { 00:18:29.765 "method": "keyring_file_add_key", 00:18:29.765 "params": { 00:18:29.765 "name": "key0", 00:18:29.765 "path": "/tmp/tmp.D3xpQwu1bD" 00:18:29.765 } 00:18:29.765 } 00:18:29.765 ] 00:18:29.765 }, 00:18:29.765 { 00:18:29.765 "subsystem": "iobuf", 00:18:29.765 "config": [ 00:18:29.765 { 00:18:29.765 "method": "iobuf_set_options", 00:18:29.765 "params": { 00:18:29.765 "small_pool_count": 8192, 00:18:29.765 "large_pool_count": 1024, 00:18:29.765 "small_bufsize": 8192, 00:18:29.765 "large_bufsize": 135168 00:18:29.765 } 00:18:29.765 } 00:18:29.765 ] 00:18:29.765 }, 00:18:29.765 { 00:18:29.765 "subsystem": "sock", 00:18:29.765 "config": [ 00:18:29.765 { 00:18:29.765 "method": "sock_set_default_impl", 00:18:29.765 "params": { 00:18:29.765 "impl_name": "posix" 00:18:29.765 } 00:18:29.765 }, 00:18:29.765 { 00:18:29.765 "method": "sock_impl_set_options", 00:18:29.765 "params": { 00:18:29.765 "impl_name": "ssl", 00:18:29.765 "recv_buf_size": 4096, 00:18:29.765 "send_buf_size": 4096, 00:18:29.765 "enable_recv_pipe": true, 00:18:29.765 "enable_quickack": false, 00:18:29.765 "enable_placement_id": 0, 00:18:29.765 "enable_zerocopy_send_server": true, 00:18:29.765 "enable_zerocopy_send_client": false, 00:18:29.765 "zerocopy_threshold": 0, 00:18:29.765 "tls_version": 0, 00:18:29.765 "enable_ktls": false 00:18:29.765 } 00:18:29.765 }, 00:18:29.765 { 00:18:29.765 "method": "sock_impl_set_options", 00:18:29.765 "params": { 00:18:29.765 "impl_name": "posix", 00:18:29.765 "recv_buf_size": 2097152, 00:18:29.765 "send_buf_size": 2097152, 00:18:29.765 "enable_recv_pipe": true, 00:18:29.765 "enable_quickack": false, 00:18:29.765 "enable_placement_id": 0, 00:18:29.765 "enable_zerocopy_send_server": true, 00:18:29.765 "enable_zerocopy_send_client": false, 00:18:29.765 "zerocopy_threshold": 0, 00:18:29.765 "tls_version": 0, 00:18:29.765 "enable_ktls": false 00:18:29.765 } 00:18:29.765 } 00:18:29.765 ] 00:18:29.765 }, 00:18:29.765 { 00:18:29.765 "subsystem": "vmd", 00:18:29.765 "config": [] 00:18:29.765 }, 00:18:29.765 { 00:18:29.765 "subsystem": "accel", 00:18:29.765 "config": [ 00:18:29.765 { 00:18:29.765 "method": "accel_set_options", 00:18:29.765 "params": { 00:18:29.765 "small_cache_size": 128, 00:18:29.765 "large_cache_size": 16, 00:18:29.765 "task_count": 2048, 00:18:29.765 "sequence_count": 2048, 00:18:29.765 "buf_count": 2048 00:18:29.765 } 00:18:29.765 } 00:18:29.765 ] 00:18:29.765 }, 00:18:29.765 { 00:18:29.765 "subsystem": "bdev", 00:18:29.765 "config": [ 
00:18:29.765 { 00:18:29.765 "method": "bdev_set_options", 00:18:29.765 "params": { 00:18:29.765 "bdev_io_pool_size": 65535, 00:18:29.765 "bdev_io_cache_size": 256, 00:18:29.765 "bdev_auto_examine": true, 00:18:29.765 "iobuf_small_cache_size": 128, 00:18:29.765 "iobuf_large_cache_size": 16 00:18:29.765 } 00:18:29.765 }, 00:18:29.765 { 00:18:29.765 "method": "bdev_raid_set_options", 00:18:29.765 "params": { 00:18:29.765 "process_window_size_kb": 1024 00:18:29.765 } 00:18:29.765 }, 00:18:29.765 { 00:18:29.765 "method": "bdev_iscsi_set_options", 00:18:29.765 "params": { 00:18:29.765 "timeout_sec": 30 00:18:29.765 } 00:18:29.765 }, 00:18:29.765 { 00:18:29.765 "method": "bdev_nvme_set_options", 00:18:29.765 "params": { 00:18:29.765 "action_on_timeout": "none", 00:18:29.765 "timeout_us": 0, 00:18:29.765 "timeout_admin_us": 0, 00:18:29.765 "keep_alive_timeout_ms": 10000, 00:18:29.765 "arbitration_burst": 0, 00:18:29.765 "low_priority_weight": 0, 00:18:29.765 "medium_priority_weight": 0, 00:18:29.765 "high_priority_weight": 0, 00:18:29.765 "nvme_adminq_poll_period_us": 10000, 00:18:29.765 "nvme_ioq_poll_period_us": 0, 00:18:29.765 "io_queue_requests": 512, 00:18:29.765 "delay_cmd_submit": true, 00:18:29.765 "transport_retry_count": 4, 00:18:29.765 "bdev_retry_count": 3, 00:18:29.765 "transport_ack_timeout": 0, 00:18:29.765 "ctrlr_loss_timeout_sec": 0, 00:18:29.765 "reconnect_delay_sec": 0, 00:18:29.765 "fast_io_fail_timeout_sec": 0, 00:18:29.765 "disable_auto_failback": false, 00:18:29.765 "generate_uuids": false, 00:18:29.765 "transport_tos": 0, 00:18:29.765 "nvme_error_stat": false, 00:18:29.765 "rdma_srq_size": 0, 00:18:29.765 "io_path_stat": false, 00:18:29.765 "allow_accel_sequence": false, 00:18:29.765 "rdma_max_cq_size": 0, 00:18:29.765 "rdma_cm_event_timeout_ms": 0, 00:18:29.765 "dhchap_digests": [ 00:18:29.765 "sha256", 00:18:29.765 "sha384", 00:18:29.765 "sha512" 00:18:29.765 ], 00:18:29.765 "dhchap_dhgroups": [ 00:18:29.765 "null", 00:18:29.765 "ffdhe2048", 00:18:29.765 "ffdhe3072", 00:18:29.765 "ffdhe4096", 00:18:29.765 "ffdhe6144", 00:18:29.765 "ffdhe8192" 00:18:29.765 ] 00:18:29.765 } 00:18:29.765 }, 00:18:29.765 { 00:18:29.765 "method": "bdev_nvme_attach_controller", 00:18:29.765 "params": { 00:18:29.765 "name": "nvme0", 00:18:29.765 "trtype": "TCP", 00:18:29.765 "adrfam": "IPv4", 00:18:29.765 "traddr": "10.0.0.2", 00:18:29.765 "trsvcid": "4420", 00:18:29.765 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:29.766 "prchk_reftag": false, 00:18:29.766 "prchk_guard": false, 00:18:29.766 "ctrlr_loss_timeout_sec": 0, 00:18:29.766 "reconnect_delay_sec": 0, 00:18:29.766 "fast_io_fail_timeout_sec": 0, 00:18:29.766 "psk": "key0", 00:18:29.766 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:29.766 "hdgst": false, 00:18:29.766 "ddgst": false 00:18:29.766 } 00:18:29.766 }, 00:18:29.766 { 00:18:29.766 "method": "bdev_nvme_set_hotplug", 00:18:29.766 "params": { 00:18:29.766 "period_us": 100000, 00:18:29.766 "enable": false 00:18:29.766 } 00:18:29.766 }, 00:18:29.766 { 00:18:29.766 "method": "bdev_enable_histogram", 00:18:29.766 "params": { 00:18:29.766 "name": "nvme0n1", 00:18:29.766 "enable": true 00:18:29.766 } 00:18:29.766 }, 00:18:29.766 { 00:18:29.766 "method": "bdev_wait_for_examine" 00:18:29.766 } 00:18:29.766 ] 00:18:29.766 }, 00:18:29.766 { 00:18:29.766 "subsystem": "nbd", 00:18:29.766 "config": [] 00:18:29.766 } 00:18:29.766 ] 00:18:29.766 }' 00:18:29.766 17:07:29 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 1149839 00:18:29.766 17:07:29 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 1149839 ']' 00:18:29.766 17:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1149839 00:18:29.766 17:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:29.766 17:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:29.766 17:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1149839 00:18:29.766 17:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:29.766 17:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:29.766 17:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1149839' 00:18:29.766 killing process with pid 1149839 00:18:29.766 17:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1149839 00:18:29.766 Received shutdown signal, test time was about 1.000000 seconds 00:18:29.766 00:18:29.766 Latency(us) 00:18:29.766 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:29.766 =================================================================================================================== 00:18:29.766 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:29.766 17:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1149839 00:18:30.024 17:07:29 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 1149691 00:18:30.024 17:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1149691 ']' 00:18:30.024 17:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1149691 00:18:30.024 17:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:30.024 17:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:30.024 17:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1149691 00:18:30.024 17:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:30.024 17:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:30.024 17:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1149691' 00:18:30.024 killing process with pid 1149691 00:18:30.024 17:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1149691 00:18:30.024 17:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1149691 00:18:30.283 17:07:29 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:18:30.283 17:07:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:30.283 17:07:29 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:18:30.283 "subsystems": [ 00:18:30.283 { 00:18:30.283 "subsystem": "keyring", 00:18:30.283 "config": [ 00:18:30.283 { 00:18:30.283 "method": "keyring_file_add_key", 00:18:30.283 "params": { 00:18:30.283 "name": "key0", 00:18:30.283 "path": "/tmp/tmp.D3xpQwu1bD" 00:18:30.283 } 00:18:30.283 } 00:18:30.283 ] 00:18:30.283 }, 00:18:30.283 { 00:18:30.283 "subsystem": "iobuf", 00:18:30.283 "config": [ 00:18:30.283 { 00:18:30.283 "method": "iobuf_set_options", 00:18:30.283 "params": { 00:18:30.283 "small_pool_count": 8192, 00:18:30.283 "large_pool_count": 1024, 00:18:30.283 "small_bufsize": 8192, 00:18:30.283 "large_bufsize": 135168 00:18:30.283 } 00:18:30.283 } 00:18:30.283 ] 00:18:30.283 }, 00:18:30.283 { 00:18:30.283 "subsystem": "sock", 00:18:30.283 "config": [ 00:18:30.283 { 
00:18:30.283 "method": "sock_set_default_impl", 00:18:30.283 "params": { 00:18:30.283 "impl_name": "posix" 00:18:30.283 } 00:18:30.283 }, 00:18:30.283 { 00:18:30.283 "method": "sock_impl_set_options", 00:18:30.283 "params": { 00:18:30.283 "impl_name": "ssl", 00:18:30.283 "recv_buf_size": 4096, 00:18:30.283 "send_buf_size": 4096, 00:18:30.283 "enable_recv_pipe": true, 00:18:30.283 "enable_quickack": false, 00:18:30.283 "enable_placement_id": 0, 00:18:30.283 "enable_zerocopy_send_server": true, 00:18:30.283 "enable_zerocopy_send_client": false, 00:18:30.283 "zerocopy_threshold": 0, 00:18:30.283 "tls_version": 0, 00:18:30.283 "enable_ktls": false 00:18:30.283 } 00:18:30.283 }, 00:18:30.283 { 00:18:30.283 "method": "sock_impl_set_options", 00:18:30.283 "params": { 00:18:30.283 "impl_name": "posix", 00:18:30.283 "recv_buf_size": 2097152, 00:18:30.283 "send_buf_size": 2097152, 00:18:30.283 "enable_recv_pipe": true, 00:18:30.283 "enable_quickack": false, 00:18:30.283 "enable_placement_id": 0, 00:18:30.283 "enable_zerocopy_send_server": true, 00:18:30.283 "enable_zerocopy_send_client": false, 00:18:30.283 "zerocopy_threshold": 0, 00:18:30.283 "tls_version": 0, 00:18:30.283 "enable_ktls": false 00:18:30.283 } 00:18:30.283 } 00:18:30.283 ] 00:18:30.283 }, 00:18:30.283 { 00:18:30.283 "subsystem": "vmd", 00:18:30.283 "config": [] 00:18:30.283 }, 00:18:30.283 { 00:18:30.283 "subsystem": "accel", 00:18:30.283 "config": [ 00:18:30.283 { 00:18:30.283 "method": "accel_set_options", 00:18:30.283 "params": { 00:18:30.283 "small_cache_size": 128, 00:18:30.283 "large_cache_size": 16, 00:18:30.283 "task_count": 2048, 00:18:30.283 "sequence_count": 2048, 00:18:30.283 "buf_count": 2048 00:18:30.283 } 00:18:30.283 } 00:18:30.283 ] 00:18:30.283 }, 00:18:30.283 { 00:18:30.283 "subsystem": "bdev", 00:18:30.283 "config": [ 00:18:30.283 { 00:18:30.283 "method": "bdev_set_options", 00:18:30.283 "params": { 00:18:30.283 "bdev_io_pool_size": 65535, 00:18:30.283 "bdev_io_cache_size": 256, 00:18:30.283 "bdev_auto_examine": true, 00:18:30.283 "iobuf_small_cache_size": 128, 00:18:30.283 "iobuf_large_cache_size": 16 00:18:30.283 } 00:18:30.283 }, 00:18:30.283 { 00:18:30.283 "method": "bdev_raid_set_options", 00:18:30.283 "params": { 00:18:30.283 "process_window_size_kb": 1024 00:18:30.283 } 00:18:30.283 }, 00:18:30.283 { 00:18:30.283 "method": "bdev_iscsi_set_options", 00:18:30.283 "params": { 00:18:30.283 "timeout_sec": 30 00:18:30.283 } 00:18:30.283 }, 00:18:30.283 { 00:18:30.283 "method": "bdev_nvme_set_options", 00:18:30.283 "params": { 00:18:30.283 "action_on_timeout": "none", 00:18:30.283 "timeout_us": 0, 00:18:30.283 "timeout_admin_us": 0, 00:18:30.283 "keep_alive_timeout_ms": 10000, 00:18:30.283 "arbitration_burst": 0, 00:18:30.283 "low_priority_weight": 0, 00:18:30.283 "medium_priority_weight": 0, 00:18:30.283 "high_priority_weight": 0, 00:18:30.283 "nvme_adminq_poll_period_us": 10000, 00:18:30.283 "nvme_ioq_poll_period_us": 0, 00:18:30.283 "io_queue_requests": 0, 00:18:30.283 "delay_cmd_submit": true, 00:18:30.283 "transport_retry_count": 4, 00:18:30.283 "bdev_retry_count": 3, 00:18:30.283 "transport_ack_timeout": 0, 00:18:30.283 "ctrlr_loss_timeout_sec": 0, 00:18:30.283 "reconnect_delay_sec": 0, 00:18:30.283 "fast_io_fail_timeout_sec": 0, 00:18:30.283 "disable_auto_failback": false, 00:18:30.283 "generate_uuids": false, 00:18:30.283 "transport_tos": 0, 00:18:30.283 "nvme_error_stat": false, 00:18:30.283 "rdma_srq_size": 0, 00:18:30.283 "io_path_stat": false, 00:18:30.283 "allow_accel_sequence": false, 00:18:30.283 
"rdma_max_cq_size": 0, 00:18:30.283 "rdma_cm_event_timeout_ms": 0, 00:18:30.283 "dhchap_digests": [ 00:18:30.283 "sha256", 00:18:30.283 "sha384", 00:18:30.283 "sha512" 00:18:30.283 ], 00:18:30.283 "dhchap_dhgroups": [ 00:18:30.283 "null", 00:18:30.283 "ffdhe2048", 00:18:30.283 "ffdhe3072", 00:18:30.283 "ffdhe4096", 00:18:30.283 "ffdhe6144", 00:18:30.283 "ffdhe8192" 00:18:30.283 ] 00:18:30.283 } 00:18:30.283 }, 00:18:30.283 { 00:18:30.283 "method": "bdev_nvme_set_hotplug", 00:18:30.283 "params": { 00:18:30.283 "period_us": 100000, 00:18:30.283 "enable": false 00:18:30.283 } 00:18:30.283 }, 00:18:30.283 { 00:18:30.283 "method": "bdev_malloc_create", 00:18:30.283 "params": { 00:18:30.283 "name": "malloc0", 00:18:30.283 "num_blocks": 8192, 00:18:30.283 "block_size": 4096, 00:18:30.283 "physical_block_size": 4096, 00:18:30.283 "uuid": "9e6cf933-2461-4ed4-80b2-f4e25a848808", 00:18:30.283 "optimal_io_boundary": 0 00:18:30.283 } 00:18:30.283 }, 00:18:30.283 { 00:18:30.283 "method": "bdev_wait_for_examine" 00:18:30.283 } 00:18:30.283 ] 00:18:30.283 }, 00:18:30.283 { 00:18:30.283 "subsystem": "nbd", 00:18:30.283 "config": [] 00:18:30.283 }, 00:18:30.283 { 00:18:30.283 "subsystem": "scheduler", 00:18:30.283 "config": [ 00:18:30.283 { 00:18:30.283 "method": "framework_set_scheduler", 00:18:30.283 "params": { 00:18:30.283 "name": "static" 00:18:30.283 } 00:18:30.283 } 00:18:30.283 ] 00:18:30.283 }, 00:18:30.283 { 00:18:30.283 "subsystem": "nvmf", 00:18:30.283 "config": [ 00:18:30.283 { 00:18:30.283 "method": "nvmf_set_config", 00:18:30.283 "params": { 00:18:30.283 "discovery_filter": "match_any", 00:18:30.283 "admin_cmd_passthru": { 00:18:30.283 "identify_ctrlr": false 00:18:30.283 } 00:18:30.283 } 00:18:30.283 }, 00:18:30.283 { 00:18:30.283 "method": "nvmf_set_max_subsystems", 00:18:30.283 "params": { 00:18:30.283 "max_subsystems": 1024 00:18:30.283 } 00:18:30.283 }, 00:18:30.283 { 00:18:30.283 "method": "nvmf_set_crdt", 00:18:30.283 "params": { 00:18:30.283 "crdt1": 0, 00:18:30.283 "crdt2": 0, 00:18:30.283 "crdt3": 0 00:18:30.283 } 00:18:30.283 }, 00:18:30.283 { 00:18:30.283 "method": "nvmf_create_transport", 00:18:30.283 "params": { 00:18:30.283 "trtype": "TCP", 00:18:30.283 "max_queue_depth": 128, 00:18:30.283 "max_io_qpairs_per_ctrlr": 127, 00:18:30.283 "in_capsule_data_size": 4096, 00:18:30.283 "max_io_size": 131072, 00:18:30.283 "io_unit_size": 131072, 00:18:30.283 "max_aq_depth": 128, 00:18:30.283 "num_shared_buffers": 511, 00:18:30.283 "buf_cache_size": 4294967295, 00:18:30.283 "dif_insert_or_strip": false, 00:18:30.283 "zcopy": false, 00:18:30.283 "c2h_success": false, 00:18:30.283 "sock_priority": 0, 00:18:30.283 "abort_timeout_sec": 1, 00:18:30.283 "ack_timeout": 0, 00:18:30.283 "data_wr_pool_size": 0 00:18:30.283 } 00:18:30.283 }, 00:18:30.283 { 00:18:30.283 "method": "nvmf_create_subsystem", 00:18:30.283 "params": { 00:18:30.283 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.283 "allow_any_host": false, 00:18:30.283 "serial_number": "00000000000000000000", 00:18:30.284 "model_number": "SPDK bdev Controller", 00:18:30.284 "max_namespaces": 32, 00:18:30.284 "min_cntlid": 1, 00:18:30.284 "max_cntlid": 65519, 00:18:30.284 "ana_reporting": false 00:18:30.284 } 00:18:30.284 }, 00:18:30.284 { 00:18:30.284 "method": "nvmf_subsystem_add_host", 00:18:30.284 "params": { 00:18:30.284 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.284 "host": "nqn.2016-06.io.spdk:host1", 00:18:30.284 "psk": "key0" 00:18:30.284 } 00:18:30.284 }, 00:18:30.284 { 00:18:30.284 "method": "nvmf_subsystem_add_ns", 00:18:30.284 
"params": { 00:18:30.284 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.284 "namespace": { 00:18:30.284 "nsid": 1, 00:18:30.284 "bdev_name": "malloc0", 00:18:30.284 "nguid": "9E6CF93324614ED480B2F4E25A848808", 00:18:30.284 "uuid": "9e6cf933-2461-4ed4-80b2-f4e25a848808", 00:18:30.284 "no_auto_visible": false 00:18:30.284 } 00:18:30.284 } 00:18:30.284 }, 00:18:30.284 { 00:18:30.284 "method": "nvmf_subsystem_add_listener", 00:18:30.284 "params": { 00:18:30.284 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.284 "listen_address": { 00:18:30.284 "trtype": "TCP", 00:18:30.284 "adrfam": "IPv4", 00:18:30.284 "traddr": "10.0.0.2", 00:18:30.284 "trsvcid": "4420" 00:18:30.284 }, 00:18:30.284 "secure_channel": true 00:18:30.284 } 00:18:30.284 } 00:18:30.284 ] 00:18:30.284 } 00:18:30.284 ] 00:18:30.284 }' 00:18:30.284 17:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:30.284 17:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:30.542 17:07:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1150129 00:18:30.542 17:07:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:30.542 17:07:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1150129 00:18:30.542 17:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1150129 ']' 00:18:30.542 17:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:30.542 17:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:30.542 17:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:30.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:30.542 17:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:30.542 17:07:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:30.542 [2024-07-12 17:07:30.029776] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:18:30.542 [2024-07-12 17:07:30.029870] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:30.542 EAL: No free 2048 kB hugepages reported on node 1 00:18:30.542 [2024-07-12 17:07:30.093634] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.542 [2024-07-12 17:07:30.201718] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:30.542 [2024-07-12 17:07:30.201808] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:30.542 [2024-07-12 17:07:30.201838] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:30.542 [2024-07-12 17:07:30.201850] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:30.542 [2024-07-12 17:07:30.201861] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:30.542 [2024-07-12 17:07:30.201940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:30.800 [2024-07-12 17:07:30.428795] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:30.800 [2024-07-12 17:07:30.460821] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:30.800 [2024-07-12 17:07:30.471924] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:31.364 17:07:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:31.364 17:07:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:31.364 17:07:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:31.364 17:07:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:31.364 17:07:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.364 17:07:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:31.364 17:07:30 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=1150279 00:18:31.364 17:07:30 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 1150279 /var/tmp/bdevperf.sock 00:18:31.364 17:07:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1150279 ']' 00:18:31.364 17:07:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:31.364 17:07:30 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:31.364 17:07:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:31.364 17:07:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:18:31.364 17:07:30 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:18:31.364 "subsystems": [ 00:18:31.364 { 00:18:31.364 "subsystem": "keyring", 00:18:31.364 "config": [ 00:18:31.364 { 00:18:31.364 "method": "keyring_file_add_key", 00:18:31.364 "params": { 00:18:31.364 "name": "key0", 00:18:31.364 "path": "/tmp/tmp.D3xpQwu1bD" 00:18:31.364 } 00:18:31.364 } 00:18:31.364 ] 00:18:31.364 }, 00:18:31.364 { 00:18:31.364 "subsystem": "iobuf", 00:18:31.364 "config": [ 00:18:31.364 { 00:18:31.364 "method": "iobuf_set_options", 00:18:31.364 "params": { 00:18:31.364 "small_pool_count": 8192, 00:18:31.364 "large_pool_count": 1024, 00:18:31.364 "small_bufsize": 8192, 00:18:31.364 "large_bufsize": 135168 00:18:31.364 } 00:18:31.364 } 00:18:31.364 ] 00:18:31.364 }, 00:18:31.364 { 00:18:31.364 "subsystem": "sock", 00:18:31.364 "config": [ 00:18:31.364 { 00:18:31.364 "method": "sock_set_default_impl", 00:18:31.364 "params": { 00:18:31.364 "impl_name": "posix" 00:18:31.364 } 00:18:31.364 }, 00:18:31.364 { 00:18:31.364 "method": "sock_impl_set_options", 00:18:31.364 "params": { 00:18:31.364 "impl_name": "ssl", 00:18:31.364 "recv_buf_size": 4096, 00:18:31.364 "send_buf_size": 4096, 00:18:31.364 "enable_recv_pipe": true, 00:18:31.364 "enable_quickack": false, 00:18:31.364 "enable_placement_id": 0, 00:18:31.364 "enable_zerocopy_send_server": true, 00:18:31.364 "enable_zerocopy_send_client": false, 00:18:31.364 "zerocopy_threshold": 0, 00:18:31.364 "tls_version": 0, 00:18:31.364 "enable_ktls": false 00:18:31.364 } 00:18:31.364 }, 00:18:31.364 { 00:18:31.364 "method": "sock_impl_set_options", 00:18:31.364 "params": { 00:18:31.364 "impl_name": "posix", 00:18:31.364 "recv_buf_size": 2097152, 00:18:31.364 "send_buf_size": 2097152, 00:18:31.364 "enable_recv_pipe": true, 00:18:31.364 "enable_quickack": false, 00:18:31.364 "enable_placement_id": 0, 00:18:31.364 "enable_zerocopy_send_server": true, 00:18:31.364 "enable_zerocopy_send_client": false, 00:18:31.364 "zerocopy_threshold": 0, 00:18:31.364 "tls_version": 0, 00:18:31.364 "enable_ktls": false 00:18:31.364 } 00:18:31.364 } 00:18:31.364 ] 00:18:31.364 }, 00:18:31.364 { 00:18:31.364 "subsystem": "vmd", 00:18:31.364 "config": [] 00:18:31.364 }, 00:18:31.364 { 00:18:31.364 "subsystem": "accel", 00:18:31.364 "config": [ 00:18:31.364 { 00:18:31.364 "method": "accel_set_options", 00:18:31.364 "params": { 00:18:31.364 "small_cache_size": 128, 00:18:31.364 "large_cache_size": 16, 00:18:31.364 "task_count": 2048, 00:18:31.364 "sequence_count": 2048, 00:18:31.364 "buf_count": 2048 00:18:31.364 } 00:18:31.364 } 00:18:31.364 ] 00:18:31.364 }, 00:18:31.364 { 00:18:31.364 "subsystem": "bdev", 00:18:31.364 "config": [ 00:18:31.364 { 00:18:31.364 "method": "bdev_set_options", 00:18:31.364 "params": { 00:18:31.364 "bdev_io_pool_size": 65535, 00:18:31.364 "bdev_io_cache_size": 256, 00:18:31.364 "bdev_auto_examine": true, 00:18:31.364 "iobuf_small_cache_size": 128, 00:18:31.364 "iobuf_large_cache_size": 16 00:18:31.364 } 00:18:31.364 }, 00:18:31.364 { 00:18:31.364 "method": "bdev_raid_set_options", 00:18:31.364 "params": { 00:18:31.364 "process_window_size_kb": 1024 00:18:31.364 } 00:18:31.364 }, 00:18:31.364 { 00:18:31.364 "method": "bdev_iscsi_set_options", 00:18:31.364 "params": { 00:18:31.364 "timeout_sec": 30 00:18:31.364 } 00:18:31.364 }, 00:18:31.364 { 00:18:31.364 "method": "bdev_nvme_set_options", 00:18:31.364 "params": { 00:18:31.364 "action_on_timeout": "none", 00:18:31.364 "timeout_us": 0, 00:18:31.364 "timeout_admin_us": 0, 00:18:31.364 "keep_alive_timeout_ms": 
10000, 00:18:31.364 "arbitration_burst": 0, 00:18:31.364 "low_priority_weight": 0, 00:18:31.364 "medium_priority_weight": 0, 00:18:31.364 "high_priority_weight": 0, 00:18:31.364 "nvme_adminq_poll_period_us": 10000, 00:18:31.364 "nvme_ioq_poll_period_us": 0, 00:18:31.364 "io_queue_requests": 512, 00:18:31.364 "delay_cmd_submit": true, 00:18:31.364 "transport_retry_count": 4, 00:18:31.364 "bdev_retry_count": 3, 00:18:31.364 "transport_ack_timeout": 0, 00:18:31.364 "ctrlr_loss_timeout_sec": 0, 00:18:31.364 "reconnect_delay_sec": 0, 00:18:31.364 "fast_io_fail_timeout_sec": 0, 00:18:31.364 "disable_auto_failback": false, 00:18:31.364 "generate_uuids": false, 00:18:31.364 "transport_tos": 0, 00:18:31.364 "nvme_error_stat": false, 00:18:31.364 "rdma_srq_size": 0, 00:18:31.364 "io_path_stat": false, 00:18:31.364 "allow_accel_sequence": false, 00:18:31.364 "rdma_max_cq_size": 0, 00:18:31.364 "rdma_cm_event_timeout_ms": 0, 00:18:31.364 "dhchap_digests": [ 00:18:31.364 "sha256", 00:18:31.364 "sha384", 00:18:31.364 "shWaiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:31.364 a512" 00:18:31.364 ], 00:18:31.364 "dhchap_dhgroups": [ 00:18:31.365 "null", 00:18:31.365 "ffdhe2048", 00:18:31.365 "ffdhe3072", 00:18:31.365 "ffdhe4096", 00:18:31.365 "ffdhe6144", 00:18:31.365 "ffdhe8192" 00:18:31.365 ] 00:18:31.365 } 00:18:31.365 }, 00:18:31.365 { 00:18:31.365 "method": "bdev_nvme_attach_controller", 00:18:31.365 "params": { 00:18:31.365 "name": "nvme0", 00:18:31.365 "trtype": "TCP", 00:18:31.365 "adrfam": "IPv4", 00:18:31.365 "traddr": "10.0.0.2", 00:18:31.365 "trsvcid": "4420", 00:18:31.365 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:31.365 "prchk_reftag": false, 00:18:31.365 "prchk_guard": false, 00:18:31.365 "ctrlr_loss_timeout_sec": 0, 00:18:31.365 "reconnect_delay_sec": 0, 00:18:31.365 "fast_io_fail_timeout_sec": 0, 00:18:31.365 "psk": "key0", 00:18:31.365 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:31.365 "hdgst": false, 00:18:31.365 "ddgst": false 00:18:31.365 } 00:18:31.365 }, 00:18:31.365 { 00:18:31.365 "method": "bdev_nvme_set_hotplug", 00:18:31.365 "params": { 00:18:31.365 "period_us": 100000, 00:18:31.365 "enable": false 00:18:31.365 } 00:18:31.365 }, 00:18:31.365 { 00:18:31.365 "method": "bdev_enable_histogram", 00:18:31.365 "params": { 00:18:31.365 "name": "nvme0n1", 00:18:31.365 "enable": true 00:18:31.365 } 00:18:31.365 }, 00:18:31.365 { 00:18:31.365 "method": "bdev_wait_for_examine" 00:18:31.365 } 00:18:31.365 ] 00:18:31.365 }, 00:18:31.365 { 00:18:31.365 "subsystem": "nbd", 00:18:31.365 "config": [] 00:18:31.365 } 00:18:31.365 ] 00:18:31.365 }' 00:18:31.365 17:07:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:31.365 17:07:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.365 [2024-07-12 17:07:31.039120] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
00:18:31.365 [2024-07-12 17:07:31.039194] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1150279 ] 00:18:31.623 EAL: No free 2048 kB hugepages reported on node 1 00:18:31.623 [2024-07-12 17:07:31.099431] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.623 [2024-07-12 17:07:31.206402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:31.880 [2024-07-12 17:07:31.380095] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:32.456 17:07:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:32.456 17:07:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:32.456 17:07:32 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:32.456 17:07:32 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:18:32.713 17:07:32 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.713 17:07:32 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:32.713 Running I/O for 1 seconds... 00:18:34.084 00:18:34.084 Latency(us) 00:18:34.084 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:34.084 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:34.084 Verification LBA range: start 0x0 length 0x2000 00:18:34.084 nvme0n1 : 1.03 3542.45 13.84 0.00 0.00 35586.17 5606.97 39418.69 00:18:34.084 =================================================================================================================== 00:18:34.084 Total : 3542.45 13.84 0.00 0.00 35586.17 5606.97 39418.69 00:18:34.084 0 00:18:34.084 17:07:33 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:18:34.084 17:07:33 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:18:34.084 17:07:33 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:34.084 17:07:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:18:34.084 17:07:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:18:34.084 17:07:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:34.084 17:07:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:34.084 17:07:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:34.084 17:07:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:34.084 17:07:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:34.084 17:07:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:34.084 nvmf_trace.0 00:18:34.084 17:07:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:18:34.084 17:07:33 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 1150279 00:18:34.084 17:07:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1150279 ']' 00:18:34.084 17:07:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- 
# kill -0 1150279 00:18:34.084 17:07:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:34.084 17:07:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:34.084 17:07:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1150279 00:18:34.084 17:07:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:34.084 17:07:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:34.084 17:07:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1150279' 00:18:34.084 killing process with pid 1150279 00:18:34.084 17:07:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1150279 00:18:34.084 Received shutdown signal, test time was about 1.000000 seconds 00:18:34.084 00:18:34.084 Latency(us) 00:18:34.084 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:34.084 =================================================================================================================== 00:18:34.084 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:34.084 17:07:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1150279 00:18:34.084 17:07:33 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:34.084 17:07:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:34.084 17:07:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:18:34.084 17:07:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:34.084 17:07:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:18:34.084 17:07:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:34.084 17:07:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:34.084 rmmod nvme_tcp 00:18:34.084 rmmod nvme_fabrics 00:18:34.341 rmmod nvme_keyring 00:18:34.341 17:07:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:34.341 17:07:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:18:34.341 17:07:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:18:34.341 17:07:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1150129 ']' 00:18:34.341 17:07:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1150129 00:18:34.341 17:07:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1150129 ']' 00:18:34.341 17:07:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1150129 00:18:34.341 17:07:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:34.341 17:07:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:34.341 17:07:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1150129 00:18:34.341 17:07:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:34.341 17:07:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:34.341 17:07:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1150129' 00:18:34.341 killing process with pid 1150129 00:18:34.341 17:07:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1150129 00:18:34.341 17:07:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1150129 00:18:34.600 17:07:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:34.600 17:07:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:34.600 17:07:34 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:34.600 17:07:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:34.600 17:07:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:34.600 17:07:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:34.600 17:07:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:34.600 17:07:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:36.505 17:07:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:36.505 17:07:36 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.MNIkLBCuiZ /tmp/tmp.sgZw8IIqA0 /tmp/tmp.D3xpQwu1bD 00:18:36.505 00:18:36.505 real 1m20.106s 00:18:36.505 user 2m6.788s 00:18:36.505 sys 0m29.312s 00:18:36.505 17:07:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:36.505 17:07:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:36.505 ************************************ 00:18:36.505 END TEST nvmf_tls 00:18:36.505 ************************************ 00:18:36.505 17:07:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:36.505 17:07:36 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:36.505 17:07:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:36.505 17:07:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:36.505 17:07:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:36.505 ************************************ 00:18:36.505 START TEST nvmf_fips 00:18:36.505 ************************************ 00:18:36.505 17:07:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:36.764 * Looking for test storage... 
00:18:36.764 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.764 17:07:36 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:18:36.764 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:18:36.765 Error setting digest 00:18:36.765 00628BDCAC7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:18:36.765 00628BDCAC7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:18:36.765 17:07:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:38.665 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:38.665 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:18:38.665 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:38.665 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:38.665 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:38.665 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:38.665 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:38.665 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:18:38.665 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:38.665 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:18:38.665 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:18:38.665 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:18:38.665 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:18:38.665 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:18:38.665 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:18:38.665 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:38.665 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:38.665 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:38.924 
17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:18:38.924 Found 0000:84:00.0 (0x8086 - 0x159b) 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:18:38.924 Found 0000:84:00.1 (0x8086 - 0x159b) 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:18:38.924 Found net devices under 0000:84:00.0: cvl_0_0 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:18:38.924 Found net devices under 0000:84:00.1: cvl_0_1 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:38.924 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:38.924 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:18:38.924 00:18:38.924 --- 10.0.0.2 ping statistics --- 00:18:38.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:38.924 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:38.924 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:38.924 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:18:38.924 00:18:38.924 --- 10.0.0.1 ping statistics --- 00:18:38.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:38.924 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1152653 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1152653 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1152653 ']' 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:38.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:38.924 17:07:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:38.924 [2024-07-12 17:07:38.611226] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:18:38.924 [2024-07-12 17:07:38.611332] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:39.183 EAL: No free 2048 kB hugepages reported on node 1 00:18:39.183 [2024-07-12 17:07:38.676531] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.183 [2024-07-12 17:07:38.793832] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:39.183 [2024-07-12 17:07:38.793881] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:39.183 [2024-07-12 17:07:38.793911] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:39.183 [2024-07-12 17:07:38.793923] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:39.183 [2024-07-12 17:07:38.793933] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:39.183 [2024-07-12 17:07:38.793958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:40.115 17:07:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:40.115 17:07:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:18:40.115 17:07:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:40.115 17:07:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:40.115 17:07:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:40.115 17:07:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:40.115 17:07:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:18:40.115 17:07:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:40.115 17:07:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:40.115 17:07:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:40.115 17:07:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:40.115 17:07:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:40.115 17:07:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:40.115 17:07:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:40.373 [2024-07-12 17:07:39.888814] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:40.373 [2024-07-12 17:07:39.904819] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:40.373 [2024-07-12 17:07:39.905014] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:40.374 [2024-07-12 17:07:39.936128] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:40.374 malloc0 00:18:40.374 17:07:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:40.374 17:07:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1152817 00:18:40.374 17:07:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:40.374 17:07:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1152817 /var/tmp/bdevperf.sock 00:18:40.374 17:07:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1152817 ']' 00:18:40.374 17:07:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:40.374 17:07:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:18:40.374 17:07:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:40.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:40.374 17:07:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:40.374 17:07:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:40.374 [2024-07-12 17:07:40.029387] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:18:40.374 [2024-07-12 17:07:40.029510] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1152817 ] 00:18:40.374 EAL: No free 2048 kB hugepages reported on node 1 00:18:40.631 [2024-07-12 17:07:40.089759] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.631 [2024-07-12 17:07:40.196926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:41.563 17:07:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:41.563 17:07:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:18:41.563 17:07:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:41.563 [2024-07-12 17:07:41.148381] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:41.563 [2024-07-12 17:07:41.148525] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:41.563 TLSTESTn1 00:18:41.563 17:07:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:41.819 Running I/O for 10 seconds... 
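(Condensed, the TLS exercise traced above comes down to three initiator-side commands once the target is listening with the PSK configured: start bdevperf in wait mode on its own RPC socket, attach an NVMe/TCP controller that presents the interchange PSK file, and kick off the queued workload. The sketch below replays those steps with the same options that appear in the trace; the SPDK checkout path, the key path, and the sleep used in place of waitforlisten are placeholders, not part of the test script.

    SPDK=/path/to/spdk                    # placeholder for the workspace checkout used above
    KEY=$SPDK/test/nvmf/fips/key.txt      # NVMeTLSkey-1:01:... interchange PSK, chmod 0600
    SOCK=/var/tmp/bdevperf.sock

    # 1. bdevperf in wait mode (-z) on core mask 0x4: 128 QD, 4 KiB verify for 10 s
    "$SPDK/build/examples/bdevperf" -m 0x4 -z -r "$SOCK" -q 128 -o 4096 -w verify -t 10 &
    sleep 2                               # crude stand-in for waitforlisten on $SOCK

    # 2. attach the TLS-protected NVMe/TCP controller (same arguments as the trace)
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk "$KEY"

    # 3. run the registered workload and wait for the 10-second result table
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests
)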
00:18:51.779 00:18:51.779 Latency(us) 00:18:51.779 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:51.779 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:51.779 Verification LBA range: start 0x0 length 0x2000 00:18:51.779 TLSTESTn1 : 10.02 3562.72 13.92 0.00 0.00 35870.23 7136.14 35729.26 00:18:51.779 =================================================================================================================== 00:18:51.779 Total : 3562.72 13.92 0.00 0.00 35870.23 7136.14 35729.26 00:18:51.779 0 00:18:51.779 17:07:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:18:51.779 17:07:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:18:51.779 17:07:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:18:51.779 17:07:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:18:51.779 17:07:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:51.779 17:07:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:51.779 17:07:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:51.779 17:07:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:51.779 17:07:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:51.779 17:07:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:51.779 nvmf_trace.0 00:18:51.779 17:07:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:18:51.779 17:07:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1152817 00:18:51.779 17:07:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1152817 ']' 00:18:51.779 17:07:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1152817 00:18:51.779 17:07:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:18:51.779 17:07:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:51.779 17:07:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1152817 00:18:52.037 17:07:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:52.037 17:07:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:52.037 17:07:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1152817' 00:18:52.037 killing process with pid 1152817 00:18:52.037 17:07:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1152817 00:18:52.037 Received shutdown signal, test time was about 10.000000 seconds 00:18:52.037 00:18:52.037 Latency(us) 00:18:52.037 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:52.037 =================================================================================================================== 00:18:52.037 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:52.037 [2024-07-12 17:07:51.482069] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:52.037 17:07:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1152817 00:18:52.315 17:07:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:18:52.315 17:07:51 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:18:52.315 17:07:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:18:52.315 17:07:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:52.315 17:07:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:18:52.315 17:07:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:52.315 17:07:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:52.315 rmmod nvme_tcp 00:18:52.315 rmmod nvme_fabrics 00:18:52.315 rmmod nvme_keyring 00:18:52.315 17:07:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:52.315 17:07:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:18:52.315 17:07:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:18:52.315 17:07:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1152653 ']' 00:18:52.315 17:07:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1152653 00:18:52.315 17:07:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1152653 ']' 00:18:52.315 17:07:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1152653 00:18:52.315 17:07:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:18:52.315 17:07:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:52.315 17:07:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1152653 00:18:52.315 17:07:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:52.315 17:07:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:52.315 17:07:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1152653' 00:18:52.315 killing process with pid 1152653 00:18:52.315 17:07:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1152653 00:18:52.315 [2024-07-12 17:07:51.821210] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:52.315 17:07:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1152653 00:18:52.612 17:07:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:52.612 17:07:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:52.612 17:07:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:52.612 17:07:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:52.612 17:07:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:52.612 17:07:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:52.612 17:07:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:52.612 17:07:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:54.519 17:07:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:54.519 17:07:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:54.519 00:18:54.519 real 0m17.975s 00:18:54.519 user 0m22.729s 00:18:54.519 sys 0m6.695s 00:18:54.519 17:07:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:54.519 17:07:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:54.519 ************************************ 00:18:54.519 END TEST nvmf_fips 
00:18:54.519 ************************************ 00:18:54.519 17:07:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:54.519 17:07:54 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:18:54.519 17:07:54 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:18:54.519 17:07:54 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:18:54.519 17:07:54 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:18:54.519 17:07:54 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:18:54.519 17:07:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:57.044 17:07:56 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:57.044 17:07:56 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:18:57.044 17:07:56 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:57.044 17:07:56 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:57.044 17:07:56 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:57.044 17:07:56 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:57.044 17:07:56 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:57.044 17:07:56 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:18:57.044 17:07:56 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:57.044 17:07:56 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:18:57.044 17:07:56 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:18:57.044 17:07:56 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:18:57.044 17:07:56 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:18:57.044 17:07:56 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:18:57.044 17:07:56 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:18:57.044 17:07:56 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:57.044 17:07:56 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:57.044 17:07:56 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:57.044 17:07:56 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:57.044 17:07:56 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:57.044 17:07:56 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:57.044 17:07:56 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:57.044 17:07:56 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:57.044 17:07:56 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:57.044 17:07:56 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:57.044 17:07:56 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:57.044 17:07:56 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:57.044 17:07:56 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:57.044 17:07:56 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:57.044 17:07:56 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:57.044 17:07:56 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:57.044 17:07:56 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:57.044 17:07:56 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:57.044 17:07:56 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:18:57.044 Found 0000:84:00.0 (0x8086 - 0x159b) 00:18:57.044 17:07:56 nvmf_tcp -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:57.044 17:07:56 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:57.044 17:07:56 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:57.044 17:07:56 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:57.045 17:07:56 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:57.045 17:07:56 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:57.045 17:07:56 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:18:57.045 Found 0000:84:00.1 (0x8086 - 0x159b) 00:18:57.045 17:07:56 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:57.045 17:07:56 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:57.045 17:07:56 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:57.045 17:07:56 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:57.045 17:07:56 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:57.045 17:07:56 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:57.045 17:07:56 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:57.045 17:07:56 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:57.045 17:07:56 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:57.045 17:07:56 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:57.045 17:07:56 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:57.045 17:07:56 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:57.045 17:07:56 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:57.045 17:07:56 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:57.045 17:07:56 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:57.045 17:07:56 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:18:57.045 Found net devices under 0000:84:00.0: cvl_0_0 00:18:57.045 17:07:56 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:57.045 17:07:56 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:57.045 17:07:56 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:57.045 17:07:56 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:57.045 17:07:56 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:57.045 17:07:56 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:57.045 17:07:56 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:57.045 17:07:56 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:57.045 17:07:56 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:18:57.045 Found net devices under 0000:84:00.1: cvl_0_1 00:18:57.045 17:07:56 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:57.045 17:07:56 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:57.045 17:07:56 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:57.045 17:07:56 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:18:57.045 17:07:56 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:18:57.045 17:07:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:57.045 17:07:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:18:57.045 17:07:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:57.045 ************************************ 00:18:57.045 START TEST nvmf_perf_adq 00:18:57.045 ************************************ 00:18:57.045 17:07:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:18:57.045 * Looking for test storage... 00:18:57.045 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:57.045 17:07:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:57.045 17:07:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:18:57.045 17:07:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:57.045 17:07:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:57.045 17:07:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:57.045 17:07:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:57.045 17:07:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:57.045 17:07:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:57.045 17:07:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:57.045 17:07:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:57.045 17:07:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:57.045 17:07:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:57.045 17:07:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:57.045 17:07:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:18:57.045 17:07:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:57.045 17:07:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:57.045 17:07:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:57.045 17:07:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:57.045 17:07:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:57.045 17:07:56 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:57.045 17:07:56 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:57.045 17:07:56 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:57.045 17:07:56 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.045 17:07:56 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.045 17:07:56 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.045 17:07:56 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:18:57.045 17:07:56 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.045 17:07:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:18:57.045 17:07:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:57.045 17:07:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:57.045 17:07:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:57.045 17:07:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:57.045 17:07:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:57.045 17:07:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:57.045 17:07:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:57.045 17:07:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:57.045 17:07:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:18:57.045 17:07:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:18:57.045 17:07:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:58.940 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:58.940 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:18:58.940 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:58.940 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:58.940 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:58.940 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:58.940 17:07:58 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:18:58.940 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:18:58.940 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:58.940 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:18:58.940 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:18:58.940 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:18:58.940 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:18:58.940 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:18:58.940 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:18:58.940 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:58.940 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:58.940 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:58.940 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:58.940 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:58.940 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:58.940 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:58.940 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:58.940 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:58.940 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:58.940 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:58.940 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:58.940 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:58.940 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:58.940 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:58.940 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:58.940 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:58.940 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:58.940 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:18:58.940 Found 0000:84:00.0 (0x8086 - 0x159b) 00:18:58.940 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:58.940 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:58.940 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:58.940 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:58.940 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:58.940 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:58.940 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:18:58.940 Found 0000:84:00.1 (0x8086 - 0x159b) 
00:18:58.940 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:58.940 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:58.940 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:58.940 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:58.940 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:58.940 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:58.941 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:58.941 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:58.941 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:58.941 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:58.941 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:58.941 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:58.941 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:58.941 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:58.941 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:58.941 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:18:58.941 Found net devices under 0000:84:00.0: cvl_0_0 00:18:58.941 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:58.941 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:58.941 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:58.941 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:58.941 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:58.941 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:58.941 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:58.941 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:58.941 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:18:58.941 Found net devices under 0000:84:00.1: cvl_0_1 00:18:58.941 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:58.941 17:07:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:58.941 17:07:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:58.941 17:07:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:18:58.941 17:07:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:18:58.941 17:07:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:18:58.941 17:07:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:18:59.510 17:07:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:19:01.417 17:08:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:19:06.691 17:08:06 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:19:06.691 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:06.691 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:06.691 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:06.691 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:06.691 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:06.691 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:06.691 17:08:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:06.691 17:08:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:06.691 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:06.691 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:06.691 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:06.691 17:08:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:06.691 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:06.691 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:06.691 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:06.691 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:06.691 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:06.691 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:06.691 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:06.691 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:06.691 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:06.691 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:06.691 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:06.691 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:06.691 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:06.691 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:06.691 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:06.691 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:06.691 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:06.691 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:06.691 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:06.691 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:06.691 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:06.691 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:06.691 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:06.691 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:06.691 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:06.691 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:06.691 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:06.691 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:06.691 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:06.691 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:06.691 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:06.691 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:06.691 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:06.691 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:19:06.691 Found 0000:84:00.0 (0x8086 - 0x159b) 00:19:06.691 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:06.691 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:06.691 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:06.691 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:06.691 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:06.691 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:19:06.692 Found 0000:84:00.1 (0x8086 - 0x159b) 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:19:06.692 Found net devices under 0000:84:00.0: cvl_0_0 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:19:06.692 Found net devices under 0000:84:00.1: cvl_0_1 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:06.692 17:08:06 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:06.692 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:06.692 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:19:06.692 00:19:06.692 --- 10.0.0.2 ping statistics --- 00:19:06.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.692 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:06.692 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:06.692 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:19:06.692 00:19:06.692 --- 10.0.0.1 ping statistics --- 00:19:06.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.692 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1158757 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1158757 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1158757 ']' 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:06.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:06.692 17:08:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:06.692 [2024-07-12 17:08:06.266177] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
00:19:06.692 [2024-07-12 17:08:06.266262] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:06.692 EAL: No free 2048 kB hugepages reported on node 1 00:19:06.692 [2024-07-12 17:08:06.330787] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:06.950 [2024-07-12 17:08:06.451549] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:06.950 [2024-07-12 17:08:06.451604] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:06.950 [2024-07-12 17:08:06.451619] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:06.950 [2024-07-12 17:08:06.451631] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:06.950 [2024-07-12 17:08:06.451641] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:06.950 [2024-07-12 17:08:06.451760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:06.950 [2024-07-12 17:08:06.451795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:06.950 [2024-07-12 17:08:06.451861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:06.950 [2024-07-12 17:08:06.451864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:06.950 17:08:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:06.950 17:08:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:19:06.950 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:06.950 17:08:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:06.950 17:08:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:06.950 17:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:06.950 17:08:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:19:06.950 17:08:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:06.950 17:08:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:06.950 17:08:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.950 17:08:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:06.950 17:08:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.950 17:08:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:06.950 17:08:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:06.950 17:08:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.950 17:08:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:06.950 17:08:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.950 17:08:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:06.950 17:08:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.950 17:08:06 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:19:07.208 17:08:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.208 17:08:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:07.208 17:08:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.208 17:08:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:07.208 [2024-07-12 17:08:06.672597] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:07.208 17:08:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.208 17:08:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:07.208 17:08:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.208 17:08:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:07.208 Malloc1 00:19:07.208 17:08:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.208 17:08:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:07.208 17:08:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.208 17:08:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:07.208 17:08:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.208 17:08:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:07.208 17:08:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.208 17:08:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:07.208 17:08:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.208 17:08:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:07.208 17:08:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.208 17:08:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:07.208 [2024-07-12 17:08:06.725213] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:07.208 17:08:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.208 17:08:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1158906 00:19:07.208 17:08:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:07.208 17:08:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:19:07.208 EAL: No free 2048 kB hugepages reported on node 1 00:19:09.106 17:08:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:19:09.106 17:08:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.106 17:08:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:09.106 17:08:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.106 17:08:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:19:09.106 
"tick_rate": 2700000000, 00:19:09.106 "poll_groups": [ 00:19:09.106 { 00:19:09.107 "name": "nvmf_tgt_poll_group_000", 00:19:09.107 "admin_qpairs": 1, 00:19:09.107 "io_qpairs": 1, 00:19:09.107 "current_admin_qpairs": 1, 00:19:09.107 "current_io_qpairs": 1, 00:19:09.107 "pending_bdev_io": 0, 00:19:09.107 "completed_nvme_io": 20491, 00:19:09.107 "transports": [ 00:19:09.107 { 00:19:09.107 "trtype": "TCP" 00:19:09.107 } 00:19:09.107 ] 00:19:09.107 }, 00:19:09.107 { 00:19:09.107 "name": "nvmf_tgt_poll_group_001", 00:19:09.107 "admin_qpairs": 0, 00:19:09.107 "io_qpairs": 1, 00:19:09.107 "current_admin_qpairs": 0, 00:19:09.107 "current_io_qpairs": 1, 00:19:09.107 "pending_bdev_io": 0, 00:19:09.107 "completed_nvme_io": 20507, 00:19:09.107 "transports": [ 00:19:09.107 { 00:19:09.107 "trtype": "TCP" 00:19:09.107 } 00:19:09.107 ] 00:19:09.107 }, 00:19:09.107 { 00:19:09.107 "name": "nvmf_tgt_poll_group_002", 00:19:09.107 "admin_qpairs": 0, 00:19:09.107 "io_qpairs": 1, 00:19:09.107 "current_admin_qpairs": 0, 00:19:09.107 "current_io_qpairs": 1, 00:19:09.107 "pending_bdev_io": 0, 00:19:09.107 "completed_nvme_io": 21060, 00:19:09.107 "transports": [ 00:19:09.107 { 00:19:09.107 "trtype": "TCP" 00:19:09.107 } 00:19:09.107 ] 00:19:09.107 }, 00:19:09.107 { 00:19:09.107 "name": "nvmf_tgt_poll_group_003", 00:19:09.107 "admin_qpairs": 0, 00:19:09.107 "io_qpairs": 1, 00:19:09.107 "current_admin_qpairs": 0, 00:19:09.107 "current_io_qpairs": 1, 00:19:09.107 "pending_bdev_io": 0, 00:19:09.107 "completed_nvme_io": 19983, 00:19:09.107 "transports": [ 00:19:09.107 { 00:19:09.107 "trtype": "TCP" 00:19:09.107 } 00:19:09.107 ] 00:19:09.107 } 00:19:09.107 ] 00:19:09.107 }' 00:19:09.107 17:08:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:19:09.107 17:08:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:19:09.107 17:08:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:19:09.107 17:08:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:19:09.107 17:08:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1158906 00:19:17.213 Initializing NVMe Controllers 00:19:17.213 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:17.213 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:17.213 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:17.213 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:17.213 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:17.213 Initialization complete. Launching workers. 
00:19:17.213 ======================================================== 00:19:17.213 Latency(us) 00:19:17.213 Device Information : IOPS MiB/s Average min max 00:19:17.213 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10536.19 41.16 6074.58 2256.57 10345.99 00:19:17.213 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10764.39 42.05 5947.01 1558.93 9980.82 00:19:17.213 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10996.29 42.95 5822.02 2586.53 9421.72 00:19:17.213 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10689.39 41.76 5988.33 2531.73 9946.90 00:19:17.213 ======================================================== 00:19:17.213 Total : 42986.28 167.92 5956.58 1558.93 10345.99 00:19:17.213 00:19:17.213 17:08:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:19:17.213 17:08:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:17.213 17:08:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:19:17.213 17:08:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:17.213 17:08:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:19:17.213 17:08:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:17.213 17:08:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:17.213 rmmod nvme_tcp 00:19:17.213 rmmod nvme_fabrics 00:19:17.470 rmmod nvme_keyring 00:19:17.470 17:08:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:17.470 17:08:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:19:17.470 17:08:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:19:17.470 17:08:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1158757 ']' 00:19:17.470 17:08:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1158757 00:19:17.470 17:08:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1158757 ']' 00:19:17.470 17:08:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1158757 00:19:17.470 17:08:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:19:17.470 17:08:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:17.470 17:08:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1158757 00:19:17.470 17:08:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:17.470 17:08:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:17.470 17:08:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1158757' 00:19:17.470 killing process with pid 1158757 00:19:17.470 17:08:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1158757 00:19:17.470 17:08:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1158757 00:19:17.728 17:08:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:17.728 17:08:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:17.728 17:08:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:17.728 17:08:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:17.728 17:08:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:17.728 17:08:17 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:17.728 17:08:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:17.728 17:08:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:19.629 17:08:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:19.629 17:08:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:19:19.629 17:08:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:19:20.561 17:08:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:19:22.465 17:08:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:27.727 17:08:26 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:19:27.727 Found 0000:84:00.0 (0x8086 - 0x159b) 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:19:27.727 Found 0000:84:00.1 (0x8086 - 0x159b) 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:19:27.727 Found net devices under 0000:84:00.0: cvl_0_0 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:19:27.727 Found net devices under 0000:84:00.1: cvl_0_1 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:27.727 
17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:27.727 17:08:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:27.727 17:08:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:27.727 17:08:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:27.727 17:08:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:27.727 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:27.727 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:19:27.727 00:19:27.727 --- 10.0.0.2 ping statistics --- 00:19:27.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:27.727 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:19:27.727 17:08:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:27.727 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:27.727 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:19:27.727 00:19:27.727 --- 10.0.0.1 ping statistics --- 00:19:27.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:27.727 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:19:27.727 17:08:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:27.727 17:08:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:19:27.727 17:08:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:27.727 17:08:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:27.727 17:08:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:27.727 17:08:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:27.727 17:08:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:27.727 17:08:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:27.727 17:08:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:27.727 17:08:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:19:27.727 17:08:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:19:27.727 17:08:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:19:27.727 17:08:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:19:27.727 net.core.busy_poll = 1 00:19:27.727 17:08:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:19:27.727 net.core.busy_read = 1 00:19:27.727 17:08:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:19:27.727 17:08:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:19:27.727 17:08:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:19:27.727 17:08:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:19:27.727 17:08:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:19:27.727 17:08:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:27.727 17:08:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:27.727 17:08:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:27.727 17:08:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:27.727 17:08:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1161518 00:19:27.727 17:08:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:27.727 17:08:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1161518 00:19:27.727 17:08:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1161518 ']' 00:19:27.727 17:08:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:27.728 17:08:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:27.728 17:08:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:27.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:27.728 17:08:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:27.728 17:08:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:27.728 [2024-07-12 17:08:27.266389] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:19:27.728 [2024-07-12 17:08:27.266472] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:27.728 EAL: No free 2048 kB hugepages reported on node 1 00:19:27.728 [2024-07-12 17:08:27.339414] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:27.985 [2024-07-12 17:08:27.452446] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:27.985 [2024-07-12 17:08:27.452506] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:27.985 [2024-07-12 17:08:27.452519] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:27.985 [2024-07-12 17:08:27.452531] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:27.985 [2024-07-12 17:08:27.452541] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
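The adq_configure_driver trace above boils down to a short, repeatable sequence. A minimal standalone sketch follows; it assumes, as in this run, that the ice-backed E810 port is named cvl_0_0, lives in the cvl_0_0_ns_spdk network namespace, and that the NVMe/TCP listener will sit on 10.0.0.2:4420 (adjust names and addresses for other setups).

    #!/usr/bin/env bash
    # ADQ prerequisites on the target-side interface (sketch of the traced steps,
    # not the authoritative perf_adq.sh script)
    NS="ip netns exec cvl_0_0_ns_spdk"   # namespace holding the target port
    DEV=cvl_0_0                          # ice (E810) interface used by the target

    # 1. Enable hardware TC offload and turn off packet-inspect optimization
    $NS ethtool --offload $DEV hw-tc-offload on
    $NS ethtool --set-priv-flags $DEV channel-pkt-inspect-optimize off

    # 2. Enable kernel busy polling so socket reads poll their queues
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1

    # 3. Create two traffic classes (2 default queues + 2 ADQ queues) in channel mode
    $NS tc qdisc add dev $DEV root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel

    # 4. Steer NVMe/TCP traffic (dst 10.0.0.2:4420) into the ADQ traffic class in hardware
    $NS tc qdisc add dev $DEV ingress
    $NS tc filter add dev $DEV protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

    # 5. Pin transmit/receive queues to CPUs (SPDK helper, path as used in this job)
    $NS /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs $DEV

With this in place the target is started with --wait-for-rpc, so the adq_configure_nvmf_target RPCs seen in the surrounding trace (sock_impl_set_options --enable-placement-id 1, then framework_start_init and nvmf_create_transport --sock-priority 1) can be applied before the TCP transport is created.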
00:19:27.985 [2024-07-12 17:08:27.452630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:27.985 [2024-07-12 17:08:27.452654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:27.985 [2024-07-12 17:08:27.452709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:27.985 [2024-07-12 17:08:27.452712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.549 17:08:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:28.549 17:08:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:19:28.549 17:08:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:28.549 17:08:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:28.549 17:08:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:28.549 17:08:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:28.549 17:08:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:19:28.549 17:08:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:28.549 17:08:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:28.549 17:08:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.549 17:08:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:28.806 17:08:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.806 17:08:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:28.806 17:08:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:19:28.806 17:08:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.806 17:08:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:28.806 17:08:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.806 17:08:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:28.806 17:08:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.806 17:08:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:28.806 17:08:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.806 17:08:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:19:28.806 17:08:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.807 17:08:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:28.807 [2024-07-12 17:08:28.391806] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:28.807 17:08:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.807 17:08:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:28.807 17:08:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.807 17:08:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:28.807 Malloc1 00:19:28.807 17:08:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.807 17:08:28 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:28.807 17:08:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.807 17:08:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:28.807 17:08:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.807 17:08:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:28.807 17:08:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.807 17:08:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:28.807 17:08:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.807 17:08:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:28.807 17:08:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.807 17:08:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:28.807 [2024-07-12 17:08:28.445324] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:28.807 17:08:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.807 17:08:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1161684 00:19:28.807 17:08:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:19:28.807 17:08:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:28.807 EAL: No free 2048 kB hugepages reported on node 1 00:19:30.773 17:08:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:19:30.773 17:08:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.773 17:08:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:31.031 17:08:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.031 17:08:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:19:31.031 "tick_rate": 2700000000, 00:19:31.031 "poll_groups": [ 00:19:31.031 { 00:19:31.031 "name": "nvmf_tgt_poll_group_000", 00:19:31.031 "admin_qpairs": 1, 00:19:31.031 "io_qpairs": 2, 00:19:31.031 "current_admin_qpairs": 1, 00:19:31.031 "current_io_qpairs": 2, 00:19:31.031 "pending_bdev_io": 0, 00:19:31.031 "completed_nvme_io": 25973, 00:19:31.031 "transports": [ 00:19:31.031 { 00:19:31.031 "trtype": "TCP" 00:19:31.031 } 00:19:31.031 ] 00:19:31.031 }, 00:19:31.031 { 00:19:31.031 "name": "nvmf_tgt_poll_group_001", 00:19:31.031 "admin_qpairs": 0, 00:19:31.031 "io_qpairs": 2, 00:19:31.031 "current_admin_qpairs": 0, 00:19:31.031 "current_io_qpairs": 2, 00:19:31.031 "pending_bdev_io": 0, 00:19:31.031 "completed_nvme_io": 25865, 00:19:31.031 "transports": [ 00:19:31.031 { 00:19:31.031 "trtype": "TCP" 00:19:31.031 } 00:19:31.031 ] 00:19:31.031 }, 00:19:31.031 { 00:19:31.031 "name": "nvmf_tgt_poll_group_002", 00:19:31.031 "admin_qpairs": 0, 00:19:31.031 "io_qpairs": 0, 00:19:31.031 "current_admin_qpairs": 0, 00:19:31.031 "current_io_qpairs": 0, 00:19:31.031 "pending_bdev_io": 0, 00:19:31.031 "completed_nvme_io": 0, 
00:19:31.031 "transports": [ 00:19:31.031 { 00:19:31.031 "trtype": "TCP" 00:19:31.031 } 00:19:31.031 ] 00:19:31.031 }, 00:19:31.031 { 00:19:31.031 "name": "nvmf_tgt_poll_group_003", 00:19:31.031 "admin_qpairs": 0, 00:19:31.031 "io_qpairs": 0, 00:19:31.031 "current_admin_qpairs": 0, 00:19:31.031 "current_io_qpairs": 0, 00:19:31.031 "pending_bdev_io": 0, 00:19:31.031 "completed_nvme_io": 0, 00:19:31.031 "transports": [ 00:19:31.031 { 00:19:31.031 "trtype": "TCP" 00:19:31.031 } 00:19:31.031 ] 00:19:31.031 } 00:19:31.031 ] 00:19:31.031 }' 00:19:31.031 17:08:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:19:31.031 17:08:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:19:31.031 17:08:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:19:31.031 17:08:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:19:31.031 17:08:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1161684 00:19:39.133 Initializing NVMe Controllers 00:19:39.133 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:39.133 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:39.133 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:39.133 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:39.133 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:39.133 Initialization complete. Launching workers. 00:19:39.133 ======================================================== 00:19:39.133 Latency(us) 00:19:39.133 Device Information : IOPS MiB/s Average min max 00:19:39.133 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5421.20 21.18 11821.57 1742.43 56532.67 00:19:39.133 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7035.90 27.48 9097.48 1772.41 53340.36 00:19:39.133 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6572.50 25.67 9739.46 1621.31 56288.50 00:19:39.133 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8046.60 31.43 7954.16 1980.14 54274.05 00:19:39.133 ======================================================== 00:19:39.133 Total : 27076.20 105.77 9458.96 1621.31 56532.67 00:19:39.133 00:19:39.133 17:08:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:19:39.133 17:08:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:39.133 17:08:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:19:39.133 17:08:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:39.133 17:08:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:19:39.133 17:08:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:39.133 17:08:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:39.133 rmmod nvme_tcp 00:19:39.133 rmmod nvme_fabrics 00:19:39.133 rmmod nvme_keyring 00:19:39.133 17:08:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:39.133 17:08:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:19:39.133 17:08:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:19:39.133 17:08:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1161518 ']' 00:19:39.133 17:08:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 1161518 00:19:39.133 17:08:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1161518 ']' 00:19:39.133 17:08:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1161518 00:19:39.133 17:08:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:19:39.133 17:08:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:39.134 17:08:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1161518 00:19:39.134 17:08:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:39.134 17:08:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:39.134 17:08:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1161518' 00:19:39.134 killing process with pid 1161518 00:19:39.134 17:08:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1161518 00:19:39.134 17:08:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1161518 00:19:39.393 17:08:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:39.393 17:08:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:39.393 17:08:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:39.393 17:08:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:39.393 17:08:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:39.393 17:08:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.393 17:08:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:39.393 17:08:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:42.680 17:08:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:42.680 17:08:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:19:42.680 00:19:42.680 real 0m45.764s 00:19:42.680 user 2m43.064s 00:19:42.680 sys 0m9.811s 00:19:42.680 17:08:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:42.680 17:08:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:42.680 ************************************ 00:19:42.680 END TEST nvmf_perf_adq 00:19:42.680 ************************************ 00:19:42.680 17:08:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:42.680 17:08:42 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:42.680 17:08:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:42.680 17:08:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:42.680 17:08:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:42.680 ************************************ 00:19:42.680 START TEST nvmf_shutdown 00:19:42.680 ************************************ 00:19:42.680 17:08:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:42.680 * Looking for test storage... 
00:19:42.680 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:42.680 17:08:42 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:42.680 17:08:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:19:42.680 17:08:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:42.680 17:08:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:42.680 17:08:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:42.680 17:08:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:42.680 17:08:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:42.680 17:08:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:42.680 17:08:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:42.680 17:08:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:42.680 17:08:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:42.680 17:08:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:42.680 17:08:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:42.680 17:08:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:19:42.680 17:08:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:42.680 17:08:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:42.680 17:08:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:42.680 17:08:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:42.680 17:08:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:42.680 17:08:42 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:42.680 17:08:42 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:42.680 17:08:42 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:42.680 17:08:42 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.681 17:08:42 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.681 17:08:42 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.681 17:08:42 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:19:42.681 17:08:42 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.681 17:08:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:19:42.681 17:08:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:42.681 17:08:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:42.681 17:08:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:42.681 17:08:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:42.681 17:08:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:42.681 17:08:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:42.681 17:08:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:42.681 17:08:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:42.681 17:08:42 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:42.681 17:08:42 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:42.681 17:08:42 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:19:42.681 17:08:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:42.681 17:08:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:42.681 17:08:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:42.681 ************************************ 00:19:42.681 START TEST nvmf_shutdown_tc1 00:19:42.681 ************************************ 00:19:42.681 17:08:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:19:42.681 17:08:42 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:19:42.681 17:08:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:42.681 17:08:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:42.681 17:08:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:42.681 17:08:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:42.681 17:08:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:42.681 17:08:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:42.681 17:08:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:42.681 17:08:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:42.681 17:08:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:42.681 17:08:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:42.681 17:08:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:42.681 17:08:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:42.681 17:08:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:44.579 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:44.579 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:44.579 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:44.579 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:44.579 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:44.579 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:44.579 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:44.579 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:19:44.579 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:44.579 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:19:44.579 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:19:44.579 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:19:44.579 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:19:44.579 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:19:44.579 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:19:44.580 Found 0000:84:00.0 (0x8086 - 0x159b) 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:19:44.580 Found 0000:84:00.1 (0x8086 - 0x159b) 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:44.580 17:08:44 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:19:44.580 Found net devices under 0000:84:00.0: cvl_0_0 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:19:44.580 Found net devices under 0000:84:00.1: cvl_0_1 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:44.580 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:44.838 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:44.838 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:44.838 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:44.838 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:44.838 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:44.838 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:44.838 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:44.838 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:44.838 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:19:44.838 00:19:44.838 --- 10.0.0.2 ping statistics --- 00:19:44.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.838 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:19:44.838 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:44.838 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:44.838 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:19:44.838 00:19:44.838 --- 10.0.0.1 ping statistics --- 00:19:44.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.838 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:19:44.838 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:44.838 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:19:44.838 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:44.838 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:44.838 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:44.838 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:44.838 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:44.838 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:44.838 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:44.838 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:44.838 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:44.838 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:44.838 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:44.838 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1164987 00:19:44.838 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:44.838 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1164987 00:19:44.838 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1164987 ']' 00:19:44.838 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:44.838 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:44.838 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:44.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:44.838 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:44.838 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:44.838 [2024-07-12 17:08:44.467263] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
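Up to this point the trace has wired the two E810 ports into a loopback-style NVMe/TCP test topology: cvl_0_0 is moved into a private network namespace (cvl_0_0_ns_spdk) and addressed as 10.0.0.2/24, cvl_0_1 stays in the root namespace as 10.0.0.1/24, an iptables rule admits TCP port 4420, both directions are ping-verified, and nvmf_tgt is then launched inside the namespace. A condensed replay of those commands, taken from this run (interface names and the nvmf_tgt path are specific to this host and job workspace):

    # move the target-side port into its own namespace and address both ends
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP traffic in and confirm reachability both ways
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # start the target inside the namespace (relative path shown; the job uses the full workspace path)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E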
00:19:44.838 [2024-07-12 17:08:44.467356] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:44.838 EAL: No free 2048 kB hugepages reported on node 1 00:19:45.095 [2024-07-12 17:08:44.533781] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:45.095 [2024-07-12 17:08:44.636065] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:45.095 [2024-07-12 17:08:44.636118] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:45.095 [2024-07-12 17:08:44.636147] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:45.095 [2024-07-12 17:08:44.636159] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:45.095 [2024-07-12 17:08:44.636169] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:45.096 [2024-07-12 17:08:44.636250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:45.096 [2024-07-12 17:08:44.636321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:45.096 [2024-07-12 17:08:44.636401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:45.096 [2024-07-12 17:08:44.636403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:45.096 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:45.096 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:19:45.096 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:45.096 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:45.096 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:45.353 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:45.353 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:45.353 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.353 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:45.353 [2024-07-12 17:08:44.802695] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:45.353 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.353 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:45.353 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:45.353 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:45.353 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:45.353 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:45.353 17:08:44 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:45.353 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:45.353 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:45.353 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:45.353 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:45.353 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:45.353 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:45.353 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:45.353 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:45.353 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:45.353 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:45.353 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:45.353 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:45.353 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:45.353 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:45.353 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:45.353 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:45.353 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:45.353 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:45.353 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:45.353 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:45.353 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.353 17:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:45.353 Malloc1 00:19:45.353 [2024-07-12 17:08:44.885631] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:45.353 Malloc2 00:19:45.353 Malloc3 00:19:45.353 Malloc4 00:19:45.610 Malloc5 00:19:45.610 Malloc6 00:19:45.610 Malloc7 00:19:45.610 Malloc8 00:19:45.610 Malloc9 00:19:45.610 Malloc10 00:19:45.868 17:08:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.868 17:08:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:45.869 17:08:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:45.869 17:08:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:45.869 17:08:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1165142 00:19:45.869 17:08:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1165142 
/var/tmp/bdevperf.sock 00:19:45.869 17:08:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1165142 ']' 00:19:45.869 17:08:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:19:45.869 17:08:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:45.869 17:08:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:45.869 17:08:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:45.869 17:08:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:19:45.869 17:08:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:45.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:45.869 17:08:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:19:45.869 17:08:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:45.869 17:08:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:45.869 17:08:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:45.869 17:08:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:45.869 { 00:19:45.869 "params": { 00:19:45.869 "name": "Nvme$subsystem", 00:19:45.869 "trtype": "$TEST_TRANSPORT", 00:19:45.869 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:45.869 "adrfam": "ipv4", 00:19:45.869 "trsvcid": "$NVMF_PORT", 00:19:45.869 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:45.869 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:45.869 "hdgst": ${hdgst:-false}, 00:19:45.869 "ddgst": ${ddgst:-false} 00:19:45.869 }, 00:19:45.869 "method": "bdev_nvme_attach_controller" 00:19:45.869 } 00:19:45.869 EOF 00:19:45.869 )") 00:19:45.869 17:08:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:45.869 17:08:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:45.869 17:08:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:45.869 { 00:19:45.869 "params": { 00:19:45.869 "name": "Nvme$subsystem", 00:19:45.869 "trtype": "$TEST_TRANSPORT", 00:19:45.869 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:45.869 "adrfam": "ipv4", 00:19:45.869 "trsvcid": "$NVMF_PORT", 00:19:45.869 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:45.869 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:45.869 "hdgst": ${hdgst:-false}, 00:19:45.869 "ddgst": ${ddgst:-false} 00:19:45.869 }, 00:19:45.869 "method": "bdev_nvme_attach_controller" 00:19:45.869 } 00:19:45.869 EOF 00:19:45.869 )") 00:19:45.869 17:08:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:45.869 17:08:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:45.869 17:08:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:45.869 { 00:19:45.869 "params": { 00:19:45.869 
"name": "Nvme$subsystem", 00:19:45.869 "trtype": "$TEST_TRANSPORT", 00:19:45.869 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:45.869 "adrfam": "ipv4", 00:19:45.869 "trsvcid": "$NVMF_PORT", 00:19:45.869 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:45.869 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:45.869 "hdgst": ${hdgst:-false}, 00:19:45.869 "ddgst": ${ddgst:-false} 00:19:45.869 }, 00:19:45.869 "method": "bdev_nvme_attach_controller" 00:19:45.869 } 00:19:45.869 EOF 00:19:45.869 )") 00:19:45.869 17:08:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:45.869 17:08:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:45.869 17:08:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:45.869 { 00:19:45.869 "params": { 00:19:45.869 "name": "Nvme$subsystem", 00:19:45.869 "trtype": "$TEST_TRANSPORT", 00:19:45.869 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:45.869 "adrfam": "ipv4", 00:19:45.869 "trsvcid": "$NVMF_PORT", 00:19:45.869 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:45.869 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:45.869 "hdgst": ${hdgst:-false}, 00:19:45.869 "ddgst": ${ddgst:-false} 00:19:45.869 }, 00:19:45.869 "method": "bdev_nvme_attach_controller" 00:19:45.869 } 00:19:45.869 EOF 00:19:45.869 )") 00:19:45.869 17:08:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:45.869 17:08:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:45.869 17:08:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:45.869 { 00:19:45.869 "params": { 00:19:45.869 "name": "Nvme$subsystem", 00:19:45.869 "trtype": "$TEST_TRANSPORT", 00:19:45.869 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:45.869 "adrfam": "ipv4", 00:19:45.869 "trsvcid": "$NVMF_PORT", 00:19:45.869 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:45.869 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:45.869 "hdgst": ${hdgst:-false}, 00:19:45.869 "ddgst": ${ddgst:-false} 00:19:45.869 }, 00:19:45.869 "method": "bdev_nvme_attach_controller" 00:19:45.869 } 00:19:45.869 EOF 00:19:45.869 )") 00:19:45.869 17:08:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:45.869 17:08:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:45.869 17:08:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:45.869 { 00:19:45.869 "params": { 00:19:45.869 "name": "Nvme$subsystem", 00:19:45.869 "trtype": "$TEST_TRANSPORT", 00:19:45.869 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:45.869 "adrfam": "ipv4", 00:19:45.869 "trsvcid": "$NVMF_PORT", 00:19:45.869 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:45.869 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:45.869 "hdgst": ${hdgst:-false}, 00:19:45.869 "ddgst": ${ddgst:-false} 00:19:45.869 }, 00:19:45.869 "method": "bdev_nvme_attach_controller" 00:19:45.869 } 00:19:45.869 EOF 00:19:45.869 )") 00:19:45.869 17:08:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:45.869 17:08:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:45.869 17:08:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:45.869 { 00:19:45.869 "params": { 00:19:45.869 "name": "Nvme$subsystem", 
00:19:45.869 "trtype": "$TEST_TRANSPORT", 00:19:45.869 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:45.869 "adrfam": "ipv4", 00:19:45.869 "trsvcid": "$NVMF_PORT", 00:19:45.869 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:45.869 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:45.869 "hdgst": ${hdgst:-false}, 00:19:45.869 "ddgst": ${ddgst:-false} 00:19:45.869 }, 00:19:45.869 "method": "bdev_nvme_attach_controller" 00:19:45.869 } 00:19:45.869 EOF 00:19:45.869 )") 00:19:45.869 17:08:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:45.869 17:08:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:45.869 17:08:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:45.869 { 00:19:45.869 "params": { 00:19:45.869 "name": "Nvme$subsystem", 00:19:45.869 "trtype": "$TEST_TRANSPORT", 00:19:45.869 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:45.869 "adrfam": "ipv4", 00:19:45.869 "trsvcid": "$NVMF_PORT", 00:19:45.869 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:45.869 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:45.869 "hdgst": ${hdgst:-false}, 00:19:45.869 "ddgst": ${ddgst:-false} 00:19:45.869 }, 00:19:45.869 "method": "bdev_nvme_attach_controller" 00:19:45.869 } 00:19:45.869 EOF 00:19:45.869 )") 00:19:45.869 17:08:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:45.869 17:08:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:45.869 17:08:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:45.869 { 00:19:45.869 "params": { 00:19:45.869 "name": "Nvme$subsystem", 00:19:45.869 "trtype": "$TEST_TRANSPORT", 00:19:45.869 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:45.869 "adrfam": "ipv4", 00:19:45.869 "trsvcid": "$NVMF_PORT", 00:19:45.869 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:45.869 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:45.869 "hdgst": ${hdgst:-false}, 00:19:45.869 "ddgst": ${ddgst:-false} 00:19:45.869 }, 00:19:45.869 "method": "bdev_nvme_attach_controller" 00:19:45.869 } 00:19:45.869 EOF 00:19:45.869 )") 00:19:45.870 17:08:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:45.870 17:08:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:45.870 17:08:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:45.870 { 00:19:45.870 "params": { 00:19:45.870 "name": "Nvme$subsystem", 00:19:45.870 "trtype": "$TEST_TRANSPORT", 00:19:45.870 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:45.870 "adrfam": "ipv4", 00:19:45.870 "trsvcid": "$NVMF_PORT", 00:19:45.870 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:45.870 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:45.870 "hdgst": ${hdgst:-false}, 00:19:45.870 "ddgst": ${ddgst:-false} 00:19:45.870 }, 00:19:45.870 "method": "bdev_nvme_attach_controller" 00:19:45.870 } 00:19:45.870 EOF 00:19:45.870 )") 00:19:45.870 17:08:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:45.870 17:08:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
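The heredoc loop traced above is gen_nvmf_target_json building one bdev_nvme_attach_controller params block per subsystem (Nvme1 through Nvme10), each pointed at 10.0.0.2:4420 and nqn.2016-06.io.spdk:cnodeN, before joining them with IFS=, and printing the combined JSON that follows. A reduced standalone sketch of the same pattern for two subsystems (an approximation of the traced helper, not its full definition):

    # assumed reduction of the gen_nvmf_target_json loop seen in the trace
    config=()
    for i in 1 2; do
      config+=("$(cat <<EOF
    {
      "params": {
        "name": "Nvme$i",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode$i",
        "hostnqn": "nqn.2016-06.io.spdk:host$i",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }
EOF
    )")
    done
    # join the fragments exactly as the trace does before handing them to the app
    IFS=,
    printf '%s\n' "${config[*]}"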
00:19:45.870 17:08:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:19:45.870 17:08:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:45.870 "params": { 00:19:45.870 "name": "Nvme1", 00:19:45.870 "trtype": "tcp", 00:19:45.870 "traddr": "10.0.0.2", 00:19:45.870 "adrfam": "ipv4", 00:19:45.870 "trsvcid": "4420", 00:19:45.870 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:45.870 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:45.870 "hdgst": false, 00:19:45.870 "ddgst": false 00:19:45.870 }, 00:19:45.870 "method": "bdev_nvme_attach_controller" 00:19:45.870 },{ 00:19:45.870 "params": { 00:19:45.870 "name": "Nvme2", 00:19:45.870 "trtype": "tcp", 00:19:45.870 "traddr": "10.0.0.2", 00:19:45.870 "adrfam": "ipv4", 00:19:45.870 "trsvcid": "4420", 00:19:45.870 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:45.870 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:45.870 "hdgst": false, 00:19:45.870 "ddgst": false 00:19:45.870 }, 00:19:45.870 "method": "bdev_nvme_attach_controller" 00:19:45.870 },{ 00:19:45.870 "params": { 00:19:45.870 "name": "Nvme3", 00:19:45.870 "trtype": "tcp", 00:19:45.870 "traddr": "10.0.0.2", 00:19:45.870 "adrfam": "ipv4", 00:19:45.870 "trsvcid": "4420", 00:19:45.870 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:45.870 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:45.870 "hdgst": false, 00:19:45.870 "ddgst": false 00:19:45.870 }, 00:19:45.870 "method": "bdev_nvme_attach_controller" 00:19:45.870 },{ 00:19:45.870 "params": { 00:19:45.870 "name": "Nvme4", 00:19:45.870 "trtype": "tcp", 00:19:45.870 "traddr": "10.0.0.2", 00:19:45.870 "adrfam": "ipv4", 00:19:45.870 "trsvcid": "4420", 00:19:45.870 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:45.870 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:45.870 "hdgst": false, 00:19:45.870 "ddgst": false 00:19:45.870 }, 00:19:45.870 "method": "bdev_nvme_attach_controller" 00:19:45.870 },{ 00:19:45.870 "params": { 00:19:45.870 "name": "Nvme5", 00:19:45.870 "trtype": "tcp", 00:19:45.870 "traddr": "10.0.0.2", 00:19:45.870 "adrfam": "ipv4", 00:19:45.870 "trsvcid": "4420", 00:19:45.870 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:45.870 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:45.870 "hdgst": false, 00:19:45.870 "ddgst": false 00:19:45.870 }, 00:19:45.870 "method": "bdev_nvme_attach_controller" 00:19:45.870 },{ 00:19:45.870 "params": { 00:19:45.870 "name": "Nvme6", 00:19:45.870 "trtype": "tcp", 00:19:45.870 "traddr": "10.0.0.2", 00:19:45.870 "adrfam": "ipv4", 00:19:45.870 "trsvcid": "4420", 00:19:45.870 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:45.870 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:45.870 "hdgst": false, 00:19:45.870 "ddgst": false 00:19:45.870 }, 00:19:45.870 "method": "bdev_nvme_attach_controller" 00:19:45.870 },{ 00:19:45.870 "params": { 00:19:45.870 "name": "Nvme7", 00:19:45.870 "trtype": "tcp", 00:19:45.870 "traddr": "10.0.0.2", 00:19:45.870 "adrfam": "ipv4", 00:19:45.870 "trsvcid": "4420", 00:19:45.870 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:45.870 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:45.870 "hdgst": false, 00:19:45.870 "ddgst": false 00:19:45.870 }, 00:19:45.870 "method": "bdev_nvme_attach_controller" 00:19:45.870 },{ 00:19:45.870 "params": { 00:19:45.870 "name": "Nvme8", 00:19:45.870 "trtype": "tcp", 00:19:45.870 "traddr": "10.0.0.2", 00:19:45.870 "adrfam": "ipv4", 00:19:45.870 "trsvcid": "4420", 00:19:45.870 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:45.870 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:45.870 "hdgst": false, 
00:19:45.870 "ddgst": false 00:19:45.870 }, 00:19:45.870 "method": "bdev_nvme_attach_controller" 00:19:45.870 },{ 00:19:45.870 "params": { 00:19:45.870 "name": "Nvme9", 00:19:45.870 "trtype": "tcp", 00:19:45.870 "traddr": "10.0.0.2", 00:19:45.870 "adrfam": "ipv4", 00:19:45.870 "trsvcid": "4420", 00:19:45.870 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:45.870 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:45.870 "hdgst": false, 00:19:45.870 "ddgst": false 00:19:45.870 }, 00:19:45.870 "method": "bdev_nvme_attach_controller" 00:19:45.870 },{ 00:19:45.870 "params": { 00:19:45.870 "name": "Nvme10", 00:19:45.870 "trtype": "tcp", 00:19:45.870 "traddr": "10.0.0.2", 00:19:45.870 "adrfam": "ipv4", 00:19:45.870 "trsvcid": "4420", 00:19:45.870 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:45.870 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:45.870 "hdgst": false, 00:19:45.870 "ddgst": false 00:19:45.870 }, 00:19:45.870 "method": "bdev_nvme_attach_controller" 00:19:45.870 }' 00:19:45.870 [2024-07-12 17:08:45.373430] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:19:45.870 [2024-07-12 17:08:45.373516] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:45.870 EAL: No free 2048 kB hugepages reported on node 1 00:19:45.870 [2024-07-12 17:08:45.438158] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.870 [2024-07-12 17:08:45.549335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:47.765 17:08:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:47.765 17:08:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:19:47.765 17:08:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:47.765 17:08:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.765 17:08:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:47.765 17:08:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.765 17:08:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1165142 00:19:47.765 17:08:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:19:47.765 17:08:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:19:48.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1165142 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:19:48.696 17:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1164987 00:19:48.696 17:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:19:48.696 17:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:48.696 17:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:19:48.696 17:08:48 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:19:48.696 17:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:48.696 17:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:48.696 { 00:19:48.696 "params": { 00:19:48.696 "name": "Nvme$subsystem", 00:19:48.696 "trtype": "$TEST_TRANSPORT", 00:19:48.696 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:48.696 "adrfam": "ipv4", 00:19:48.696 "trsvcid": "$NVMF_PORT", 00:19:48.696 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:48.696 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:48.696 "hdgst": ${hdgst:-false}, 00:19:48.696 "ddgst": ${ddgst:-false} 00:19:48.696 }, 00:19:48.696 "method": "bdev_nvme_attach_controller" 00:19:48.696 } 00:19:48.696 EOF 00:19:48.696 )") 00:19:48.696 17:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:48.696 17:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:48.696 17:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:48.696 { 00:19:48.696 "params": { 00:19:48.696 "name": "Nvme$subsystem", 00:19:48.696 "trtype": "$TEST_TRANSPORT", 00:19:48.696 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:48.696 "adrfam": "ipv4", 00:19:48.696 "trsvcid": "$NVMF_PORT", 00:19:48.696 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:48.696 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:48.696 "hdgst": ${hdgst:-false}, 00:19:48.696 "ddgst": ${ddgst:-false} 00:19:48.696 }, 00:19:48.696 "method": "bdev_nvme_attach_controller" 00:19:48.696 } 00:19:48.696 EOF 00:19:48.696 )") 00:19:48.696 17:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:48.696 17:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:48.696 17:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:48.696 { 00:19:48.696 "params": { 00:19:48.696 "name": "Nvme$subsystem", 00:19:48.696 "trtype": "$TEST_TRANSPORT", 00:19:48.696 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:48.696 "adrfam": "ipv4", 00:19:48.696 "trsvcid": "$NVMF_PORT", 00:19:48.696 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:48.696 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:48.696 "hdgst": ${hdgst:-false}, 00:19:48.696 "ddgst": ${ddgst:-false} 00:19:48.696 }, 00:19:48.696 "method": "bdev_nvme_attach_controller" 00:19:48.696 } 00:19:48.696 EOF 00:19:48.696 )") 00:19:48.696 17:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:48.696 17:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:48.696 17:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:48.696 { 00:19:48.696 "params": { 00:19:48.696 "name": "Nvme$subsystem", 00:19:48.696 "trtype": "$TEST_TRANSPORT", 00:19:48.696 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:48.696 "adrfam": "ipv4", 00:19:48.696 "trsvcid": "$NVMF_PORT", 00:19:48.696 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:48.696 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:48.696 "hdgst": ${hdgst:-false}, 00:19:48.696 "ddgst": ${ddgst:-false} 00:19:48.696 }, 00:19:48.696 "method": "bdev_nvme_attach_controller" 00:19:48.696 } 00:19:48.696 EOF 00:19:48.696 )") 00:19:48.696 17:08:48 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:48.696 17:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:48.696 17:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:48.696 { 00:19:48.696 "params": { 00:19:48.696 "name": "Nvme$subsystem", 00:19:48.696 "trtype": "$TEST_TRANSPORT", 00:19:48.696 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:48.696 "adrfam": "ipv4", 00:19:48.696 "trsvcid": "$NVMF_PORT", 00:19:48.696 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:48.696 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:48.696 "hdgst": ${hdgst:-false}, 00:19:48.696 "ddgst": ${ddgst:-false} 00:19:48.696 }, 00:19:48.696 "method": "bdev_nvme_attach_controller" 00:19:48.696 } 00:19:48.696 EOF 00:19:48.696 )") 00:19:48.696 17:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:48.696 17:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:48.696 17:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:48.696 { 00:19:48.696 "params": { 00:19:48.696 "name": "Nvme$subsystem", 00:19:48.696 "trtype": "$TEST_TRANSPORT", 00:19:48.696 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:48.696 "adrfam": "ipv4", 00:19:48.696 "trsvcid": "$NVMF_PORT", 00:19:48.696 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:48.696 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:48.696 "hdgst": ${hdgst:-false}, 00:19:48.696 "ddgst": ${ddgst:-false} 00:19:48.696 }, 00:19:48.696 "method": "bdev_nvme_attach_controller" 00:19:48.696 } 00:19:48.696 EOF 00:19:48.696 )") 00:19:48.696 17:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:48.696 17:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:48.696 17:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:48.696 { 00:19:48.696 "params": { 00:19:48.696 "name": "Nvme$subsystem", 00:19:48.696 "trtype": "$TEST_TRANSPORT", 00:19:48.696 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:48.696 "adrfam": "ipv4", 00:19:48.696 "trsvcid": "$NVMF_PORT", 00:19:48.696 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:48.696 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:48.696 "hdgst": ${hdgst:-false}, 00:19:48.696 "ddgst": ${ddgst:-false} 00:19:48.696 }, 00:19:48.696 "method": "bdev_nvme_attach_controller" 00:19:48.696 } 00:19:48.696 EOF 00:19:48.696 )") 00:19:48.696 17:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:48.696 17:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:48.696 17:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:48.696 { 00:19:48.696 "params": { 00:19:48.696 "name": "Nvme$subsystem", 00:19:48.697 "trtype": "$TEST_TRANSPORT", 00:19:48.697 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:48.697 "adrfam": "ipv4", 00:19:48.697 "trsvcid": "$NVMF_PORT", 00:19:48.697 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:48.697 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:48.697 "hdgst": ${hdgst:-false}, 00:19:48.697 "ddgst": ${ddgst:-false} 00:19:48.697 }, 00:19:48.697 "method": "bdev_nvme_attach_controller" 00:19:48.697 } 00:19:48.697 EOF 00:19:48.697 )") 00:19:48.697 17:08:48 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:48.697 17:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:48.697 17:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:48.697 { 00:19:48.697 "params": { 00:19:48.697 "name": "Nvme$subsystem", 00:19:48.697 "trtype": "$TEST_TRANSPORT", 00:19:48.697 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:48.697 "adrfam": "ipv4", 00:19:48.697 "trsvcid": "$NVMF_PORT", 00:19:48.697 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:48.697 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:48.697 "hdgst": ${hdgst:-false}, 00:19:48.697 "ddgst": ${ddgst:-false} 00:19:48.697 }, 00:19:48.697 "method": "bdev_nvme_attach_controller" 00:19:48.697 } 00:19:48.697 EOF 00:19:48.697 )") 00:19:48.697 17:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:48.954 17:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:48.954 17:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:48.954 { 00:19:48.954 "params": { 00:19:48.954 "name": "Nvme$subsystem", 00:19:48.954 "trtype": "$TEST_TRANSPORT", 00:19:48.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:48.954 "adrfam": "ipv4", 00:19:48.954 "trsvcid": "$NVMF_PORT", 00:19:48.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:48.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:48.954 "hdgst": ${hdgst:-false}, 00:19:48.954 "ddgst": ${ddgst:-false} 00:19:48.954 }, 00:19:48.954 "method": "bdev_nvme_attach_controller" 00:19:48.954 } 00:19:48.954 EOF 00:19:48.954 )") 00:19:48.954 17:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:48.954 17:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
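The fragments being assembled here feed the bdevperf invocation from shutdown.sh@91 a few entries back: the /dev/fd/62 argument in the trace is a process substitution of the generated target JSON, so bdevperf attaches to all ten TCP subsystems and runs a one-second verify workload at queue depth 64 with 64 KiB I/O, producing the per-controller table that follows. A condensed form of that invocation (assumes test/nvmf/common.sh has been sourced so gen_nvmf_target_json is defined; relative path stands in for the job workspace path):

    # run the verify workload against the generated NVMe-oF controllers
    ./build/examples/bdevperf --json <(gen_nvmf_target_json {1..10}) \
        -q 64 -o 65536 -w verify -t 1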
00:19:48.954 17:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:19:48.954 17:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:48.954 "params": { 00:19:48.954 "name": "Nvme1", 00:19:48.954 "trtype": "tcp", 00:19:48.954 "traddr": "10.0.0.2", 00:19:48.954 "adrfam": "ipv4", 00:19:48.954 "trsvcid": "4420", 00:19:48.954 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.954 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:48.954 "hdgst": false, 00:19:48.954 "ddgst": false 00:19:48.954 }, 00:19:48.954 "method": "bdev_nvme_attach_controller" 00:19:48.954 },{ 00:19:48.954 "params": { 00:19:48.954 "name": "Nvme2", 00:19:48.954 "trtype": "tcp", 00:19:48.954 "traddr": "10.0.0.2", 00:19:48.954 "adrfam": "ipv4", 00:19:48.954 "trsvcid": "4420", 00:19:48.954 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:48.954 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:48.954 "hdgst": false, 00:19:48.954 "ddgst": false 00:19:48.954 }, 00:19:48.954 "method": "bdev_nvme_attach_controller" 00:19:48.954 },{ 00:19:48.954 "params": { 00:19:48.954 "name": "Nvme3", 00:19:48.954 "trtype": "tcp", 00:19:48.954 "traddr": "10.0.0.2", 00:19:48.954 "adrfam": "ipv4", 00:19:48.954 "trsvcid": "4420", 00:19:48.954 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:48.954 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:48.954 "hdgst": false, 00:19:48.954 "ddgst": false 00:19:48.954 }, 00:19:48.954 "method": "bdev_nvme_attach_controller" 00:19:48.954 },{ 00:19:48.954 "params": { 00:19:48.954 "name": "Nvme4", 00:19:48.954 "trtype": "tcp", 00:19:48.954 "traddr": "10.0.0.2", 00:19:48.954 "adrfam": "ipv4", 00:19:48.954 "trsvcid": "4420", 00:19:48.954 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:48.954 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:48.954 "hdgst": false, 00:19:48.954 "ddgst": false 00:19:48.954 }, 00:19:48.955 "method": "bdev_nvme_attach_controller" 00:19:48.955 },{ 00:19:48.955 "params": { 00:19:48.955 "name": "Nvme5", 00:19:48.955 "trtype": "tcp", 00:19:48.955 "traddr": "10.0.0.2", 00:19:48.955 "adrfam": "ipv4", 00:19:48.955 "trsvcid": "4420", 00:19:48.955 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:48.955 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:48.955 "hdgst": false, 00:19:48.955 "ddgst": false 00:19:48.955 }, 00:19:48.955 "method": "bdev_nvme_attach_controller" 00:19:48.955 },{ 00:19:48.955 "params": { 00:19:48.955 "name": "Nvme6", 00:19:48.955 "trtype": "tcp", 00:19:48.955 "traddr": "10.0.0.2", 00:19:48.955 "adrfam": "ipv4", 00:19:48.955 "trsvcid": "4420", 00:19:48.955 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:48.955 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:48.955 "hdgst": false, 00:19:48.955 "ddgst": false 00:19:48.955 }, 00:19:48.955 "method": "bdev_nvme_attach_controller" 00:19:48.955 },{ 00:19:48.955 "params": { 00:19:48.955 "name": "Nvme7", 00:19:48.955 "trtype": "tcp", 00:19:48.955 "traddr": "10.0.0.2", 00:19:48.955 "adrfam": "ipv4", 00:19:48.955 "trsvcid": "4420", 00:19:48.955 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:48.955 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:48.955 "hdgst": false, 00:19:48.955 "ddgst": false 00:19:48.955 }, 00:19:48.955 "method": "bdev_nvme_attach_controller" 00:19:48.955 },{ 00:19:48.955 "params": { 00:19:48.955 "name": "Nvme8", 00:19:48.955 "trtype": "tcp", 00:19:48.955 "traddr": "10.0.0.2", 00:19:48.955 "adrfam": "ipv4", 00:19:48.955 "trsvcid": "4420", 00:19:48.955 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:48.955 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:48.955 "hdgst": false, 
00:19:48.955 "ddgst": false 00:19:48.955 }, 00:19:48.955 "method": "bdev_nvme_attach_controller" 00:19:48.955 },{ 00:19:48.955 "params": { 00:19:48.955 "name": "Nvme9", 00:19:48.955 "trtype": "tcp", 00:19:48.955 "traddr": "10.0.0.2", 00:19:48.955 "adrfam": "ipv4", 00:19:48.955 "trsvcid": "4420", 00:19:48.955 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:48.955 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:48.955 "hdgst": false, 00:19:48.955 "ddgst": false 00:19:48.955 }, 00:19:48.955 "method": "bdev_nvme_attach_controller" 00:19:48.955 },{ 00:19:48.955 "params": { 00:19:48.955 "name": "Nvme10", 00:19:48.955 "trtype": "tcp", 00:19:48.955 "traddr": "10.0.0.2", 00:19:48.955 "adrfam": "ipv4", 00:19:48.955 "trsvcid": "4420", 00:19:48.955 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:48.955 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:48.955 "hdgst": false, 00:19:48.955 "ddgst": false 00:19:48.955 }, 00:19:48.955 "method": "bdev_nvme_attach_controller" 00:19:48.955 }' 00:19:48.955 [2024-07-12 17:08:48.402140] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:19:48.955 [2024-07-12 17:08:48.402231] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1165471 ] 00:19:48.955 EAL: No free 2048 kB hugepages reported on node 1 00:19:48.955 [2024-07-12 17:08:48.467921] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.955 [2024-07-12 17:08:48.578706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.854 Running I/O for 1 seconds... 00:19:51.793 00:19:51.793 Latency(us) 00:19:51.793 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:51.793 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:51.793 Verification LBA range: start 0x0 length 0x400 00:19:51.793 Nvme1n1 : 1.12 229.45 14.34 0.00 0.00 276028.49 18544.26 265639.25 00:19:51.793 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:51.793 Verification LBA range: start 0x0 length 0x400 00:19:51.793 Nvme2n1 : 1.10 232.16 14.51 0.00 0.00 267903.81 33010.73 233016.89 00:19:51.793 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:51.793 Verification LBA range: start 0x0 length 0x400 00:19:51.793 Nvme3n1 : 1.10 233.33 14.58 0.00 0.00 261637.12 17864.63 259425.47 00:19:51.793 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:51.793 Verification LBA range: start 0x0 length 0x400 00:19:51.793 Nvme4n1 : 1.11 230.94 14.43 0.00 0.00 259767.94 27573.67 248551.35 00:19:51.793 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:51.793 Verification LBA range: start 0x0 length 0x400 00:19:51.793 Nvme5n1 : 1.15 223.47 13.97 0.00 0.00 264755.96 21942.42 262532.36 00:19:51.793 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:51.793 Verification LBA range: start 0x0 length 0x400 00:19:51.793 Nvme6n1 : 1.19 214.78 13.42 0.00 0.00 272013.84 20777.34 293601.28 00:19:51.793 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:51.793 Verification LBA range: start 0x0 length 0x400 00:19:51.793 Nvme7n1 : 1.16 221.49 13.84 0.00 0.00 258518.28 19612.25 265639.25 00:19:51.793 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:51.793 Verification LBA range: start 
0x0 length 0x400 00:19:51.793 Nvme8n1 : 1.20 267.07 16.69 0.00 0.00 211717.84 13883.92 259425.47 00:19:51.793 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:51.793 Verification LBA range: start 0x0 length 0x400 00:19:51.793 Nvme9n1 : 1.19 216.01 13.50 0.00 0.00 256965.21 20194.80 270299.59 00:19:51.793 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:51.793 Verification LBA range: start 0x0 length 0x400 00:19:51.793 Nvme10n1 : 1.21 265.27 16.58 0.00 0.00 206209.93 9126.49 268746.15 00:19:51.793 =================================================================================================================== 00:19:51.793 Total : 2333.96 145.87 0.00 0.00 251428.61 9126.49 293601.28 00:19:52.051 17:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:19:52.051 17:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:52.051 17:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:52.051 17:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:52.051 17:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:52.051 17:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:52.051 17:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:19:52.051 17:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:52.051 17:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:19:52.051 17:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:52.051 17:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:52.051 rmmod nvme_tcp 00:19:52.051 rmmod nvme_fabrics 00:19:52.051 rmmod nvme_keyring 00:19:52.051 17:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:52.051 17:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:19:52.051 17:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:19:52.051 17:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1164987 ']' 00:19:52.051 17:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1164987 00:19:52.051 17:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 1164987 ']' 00:19:52.051 17:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 1164987 00:19:52.051 17:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:19:52.051 17:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:52.051 17:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1164987 00:19:52.051 17:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:52.051 17:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 
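After the verify pass reports per-controller IOPS, stoptarget and nvmftestfini unwind the setup: the scratch state and config files are removed, the nvme-tcp, nvme-fabrics and nvme-keyring modules are unloaded, the namespaced nvmf_tgt (pid 1164987) is killed and waited on, and the network state is cleaned up before tc2 repeats the same initialization. A compact sketch of that teardown; the namespace deletion is an assumption, since _remove_spdk_ns itself is not expanded in this trace:

    # tc1 cleanup as traced (paths shortened to the repo-relative form)
    rm -f ./local-job0-0-verify.state
    rm -rf test/nvmf/target/bdevperf.conf test/nvmf/target/rpcs.txt
    modprobe -v -r nvme-tcp          # also drops nvme_fabrics and nvme_keyring
    kill 1164987 && wait 1164987     # stop the namespaced nvmf_tgt
    ip netns delete cvl_0_0_ns_spdk  # assumed effect of _remove_spdk_ns
    ip -4 addr flush cvl_0_1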
00:19:52.051 17:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1164987' 00:19:52.051 killing process with pid 1164987 00:19:52.051 17:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 1164987 00:19:52.051 17:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 1164987 00:19:52.616 17:08:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:52.616 17:08:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:52.616 17:08:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:52.616 17:08:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:52.616 17:08:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:52.616 17:08:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.616 17:08:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:52.616 17:08:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:55.151 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:55.151 00:19:55.151 real 0m12.095s 00:19:55.151 user 0m35.133s 00:19:55.151 sys 0m3.369s 00:19:55.151 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:55.151 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:55.151 ************************************ 00:19:55.151 END TEST nvmf_shutdown_tc1 00:19:55.151 ************************************ 00:19:55.151 17:08:54 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:19:55.151 17:08:54 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:19:55.151 17:08:54 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:55.151 17:08:54 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:55.151 17:08:54 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:55.151 ************************************ 00:19:55.151 START TEST nvmf_shutdown_tc2 00:19:55.151 ************************************ 00:19:55.151 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:19:55.151 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:19:55.151 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:55.151 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:55.151 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:55.151 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:55.151 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:55.151 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:55.151 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:19:55.151 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:55.151 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:55.151 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:55.151 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:55.151 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:55.151 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:55.151 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:55.151 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:55.151 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:55.151 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:55.152 17:08:54 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:19:55.152 Found 0000:84:00.0 (0x8086 - 0x159b) 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:19:55.152 Found 0000:84:00.1 (0x8086 - 0x159b) 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:19:55.152 Found net devices under 0000:84:00.0: cvl_0_0 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:19:55.152 Found net devices under 0000:84:00.1: cvl_0_1 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 
addr flush cvl_0_1 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:55.152 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:55.152 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:19:55.152 00:19:55.152 --- 10.0.0.2 ping statistics --- 00:19:55.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.152 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:55.152 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:55.152 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:19:55.152 00:19:55.152 --- 10.0.0.1 ping statistics --- 00:19:55.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.152 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@481 -- # nvmfpid=1166352 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1166352 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1166352 ']' 00:19:55.152 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:55.153 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:55.153 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:55.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:55.153 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:55.153 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:55.153 [2024-07-12 17:08:54.548827] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:19:55.153 [2024-07-12 17:08:54.548924] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:55.153 EAL: No free 2048 kB hugepages reported on node 1 00:19:55.153 [2024-07-12 17:08:54.613431] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:55.153 [2024-07-12 17:08:54.716013] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:55.153 [2024-07-12 17:08:54.716075] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:55.153 [2024-07-12 17:08:54.716097] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:55.153 [2024-07-12 17:08:54.716122] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:55.153 [2024-07-12 17:08:54.716131] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
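For readers skimming the trace, the nvmf_tcp_init and nvmfappstart steps above reduce to the following sequence. This is a condensed sketch of commands already visible in this run, not a verbatim excerpt: the relative nvmf_tgt path and the backgrounding are assumptions for readability, and the cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addresses are specific to this E810 host.

    ip netns add cvl_0_0_ns_spdk                        # target runs in its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target-side port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP in
    ping -c 1 10.0.0.2                                   # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &

Once nvmf_tgt is listening on /var/tmp/spdk.sock, the trace that follows creates the TCP transport (rpc_cmd nvmf_create_transport -t tcp -o -u 8192) and sets up the Malloc1 through Malloc10 backed subsystems that the shutdown test will exercise.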
00:19:55.153 [2024-07-12 17:08:54.716215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:55.153 [2024-07-12 17:08:54.716275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:55.153 [2024-07-12 17:08:54.716341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:55.153 [2024-07-12 17:08:54.716343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:55.412 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:55.412 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:19:55.412 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:55.412 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:55.412 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:55.412 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:55.412 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:55.412 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.412 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:55.412 [2024-07-12 17:08:54.875690] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:55.412 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.412 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:55.412 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:55.412 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:55.412 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:55.412 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:55.412 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:55.412 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:55.412 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:55.412 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:55.412 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:55.412 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:55.412 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:55.412 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:55.412 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:55.412 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:55.412 17:08:54 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:55.412 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:55.412 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:55.412 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:55.412 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:55.412 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:55.412 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:55.412 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:55.412 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:55.412 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:55.412 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:55.412 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.412 17:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:55.412 Malloc1 00:19:55.412 [2024-07-12 17:08:54.965047] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:55.412 Malloc2 00:19:55.412 Malloc3 00:19:55.412 Malloc4 00:19:55.670 Malloc5 00:19:55.670 Malloc6 00:19:55.670 Malloc7 00:19:55.670 Malloc8 00:19:55.670 Malloc9 00:19:55.928 Malloc10 00:19:55.928 17:08:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.928 17:08:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:55.928 17:08:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:55.928 17:08:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:55.928 17:08:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1166420 00:19:55.928 17:08:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1166420 /var/tmp/bdevperf.sock 00:19:55.928 17:08:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1166420 ']' 00:19:55.928 17:08:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:55.928 17:08:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:55.929 17:08:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:55.929 17:08:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:55.929 17:08:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:19:55.929 17:08:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:55.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:55.929 17:08:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:19:55.929 17:08:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:55.929 17:08:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:55.929 17:08:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:55.929 17:08:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:55.929 { 00:19:55.929 "params": { 00:19:55.929 "name": "Nvme$subsystem", 00:19:55.929 "trtype": "$TEST_TRANSPORT", 00:19:55.929 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:55.929 "adrfam": "ipv4", 00:19:55.929 "trsvcid": "$NVMF_PORT", 00:19:55.929 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:55.929 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:55.929 "hdgst": ${hdgst:-false}, 00:19:55.929 "ddgst": ${ddgst:-false} 00:19:55.929 }, 00:19:55.929 "method": "bdev_nvme_attach_controller" 00:19:55.929 } 00:19:55.929 EOF 00:19:55.929 )") 00:19:55.929 17:08:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:55.929 17:08:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:55.929 17:08:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:55.929 { 00:19:55.929 "params": { 00:19:55.929 "name": "Nvme$subsystem", 00:19:55.929 "trtype": "$TEST_TRANSPORT", 00:19:55.929 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:55.929 "adrfam": "ipv4", 00:19:55.929 "trsvcid": "$NVMF_PORT", 00:19:55.929 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:55.929 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:55.929 "hdgst": ${hdgst:-false}, 00:19:55.929 "ddgst": ${ddgst:-false} 00:19:55.929 }, 00:19:55.929 "method": "bdev_nvme_attach_controller" 00:19:55.929 } 00:19:55.929 EOF 00:19:55.929 )") 00:19:55.929 17:08:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:55.929 17:08:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:55.929 17:08:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:55.929 { 00:19:55.929 "params": { 00:19:55.929 "name": "Nvme$subsystem", 00:19:55.929 "trtype": "$TEST_TRANSPORT", 00:19:55.929 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:55.929 "adrfam": "ipv4", 00:19:55.929 "trsvcid": "$NVMF_PORT", 00:19:55.929 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:55.929 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:55.929 "hdgst": ${hdgst:-false}, 00:19:55.929 "ddgst": ${ddgst:-false} 00:19:55.929 }, 00:19:55.929 "method": "bdev_nvme_attach_controller" 00:19:55.929 } 00:19:55.929 EOF 00:19:55.929 )") 00:19:55.929 17:08:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:55.929 17:08:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:55.929 17:08:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:55.929 { 00:19:55.929 "params": { 00:19:55.929 "name": "Nvme$subsystem", 00:19:55.929 "trtype": "$TEST_TRANSPORT", 00:19:55.929 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:55.929 "adrfam": "ipv4", 00:19:55.929 "trsvcid": "$NVMF_PORT", 
00:19:55.929 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:55.929 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:55.929 "hdgst": ${hdgst:-false}, 00:19:55.929 "ddgst": ${ddgst:-false} 00:19:55.929 }, 00:19:55.929 "method": "bdev_nvme_attach_controller" 00:19:55.929 } 00:19:55.929 EOF 00:19:55.929 )") 00:19:55.929 17:08:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:55.929 17:08:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:55.929 17:08:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:55.929 { 00:19:55.929 "params": { 00:19:55.929 "name": "Nvme$subsystem", 00:19:55.929 "trtype": "$TEST_TRANSPORT", 00:19:55.929 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:55.929 "adrfam": "ipv4", 00:19:55.929 "trsvcid": "$NVMF_PORT", 00:19:55.929 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:55.929 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:55.929 "hdgst": ${hdgst:-false}, 00:19:55.929 "ddgst": ${ddgst:-false} 00:19:55.929 }, 00:19:55.929 "method": "bdev_nvme_attach_controller" 00:19:55.929 } 00:19:55.929 EOF 00:19:55.929 )") 00:19:55.929 17:08:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:55.929 17:08:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:55.929 17:08:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:55.929 { 00:19:55.929 "params": { 00:19:55.929 "name": "Nvme$subsystem", 00:19:55.929 "trtype": "$TEST_TRANSPORT", 00:19:55.929 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:55.929 "adrfam": "ipv4", 00:19:55.929 "trsvcid": "$NVMF_PORT", 00:19:55.929 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:55.929 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:55.929 "hdgst": ${hdgst:-false}, 00:19:55.929 "ddgst": ${ddgst:-false} 00:19:55.929 }, 00:19:55.929 "method": "bdev_nvme_attach_controller" 00:19:55.929 } 00:19:55.929 EOF 00:19:55.929 )") 00:19:55.929 17:08:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:55.929 17:08:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:55.929 17:08:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:55.929 { 00:19:55.929 "params": { 00:19:55.929 "name": "Nvme$subsystem", 00:19:55.929 "trtype": "$TEST_TRANSPORT", 00:19:55.929 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:55.929 "adrfam": "ipv4", 00:19:55.929 "trsvcid": "$NVMF_PORT", 00:19:55.929 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:55.929 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:55.929 "hdgst": ${hdgst:-false}, 00:19:55.929 "ddgst": ${ddgst:-false} 00:19:55.929 }, 00:19:55.929 "method": "bdev_nvme_attach_controller" 00:19:55.929 } 00:19:55.929 EOF 00:19:55.929 )") 00:19:55.929 17:08:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:55.929 17:08:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:55.929 17:08:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:55.929 { 00:19:55.929 "params": { 00:19:55.929 "name": "Nvme$subsystem", 00:19:55.929 "trtype": "$TEST_TRANSPORT", 00:19:55.929 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:55.929 "adrfam": "ipv4", 00:19:55.929 "trsvcid": "$NVMF_PORT", 00:19:55.929 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:19:55.929 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:55.929 "hdgst": ${hdgst:-false}, 00:19:55.929 "ddgst": ${ddgst:-false} 00:19:55.929 }, 00:19:55.929 "method": "bdev_nvme_attach_controller" 00:19:55.929 } 00:19:55.929 EOF 00:19:55.929 )") 00:19:55.929 17:08:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:55.929 17:08:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:55.929 17:08:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:55.929 { 00:19:55.929 "params": { 00:19:55.929 "name": "Nvme$subsystem", 00:19:55.929 "trtype": "$TEST_TRANSPORT", 00:19:55.929 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:55.929 "adrfam": "ipv4", 00:19:55.929 "trsvcid": "$NVMF_PORT", 00:19:55.929 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:55.929 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:55.929 "hdgst": ${hdgst:-false}, 00:19:55.929 "ddgst": ${ddgst:-false} 00:19:55.929 }, 00:19:55.929 "method": "bdev_nvme_attach_controller" 00:19:55.929 } 00:19:55.929 EOF 00:19:55.929 )") 00:19:55.929 17:08:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:55.929 17:08:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:55.929 17:08:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:55.929 { 00:19:55.929 "params": { 00:19:55.929 "name": "Nvme$subsystem", 00:19:55.929 "trtype": "$TEST_TRANSPORT", 00:19:55.929 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:55.929 "adrfam": "ipv4", 00:19:55.929 "trsvcid": "$NVMF_PORT", 00:19:55.929 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:55.929 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:55.929 "hdgst": ${hdgst:-false}, 00:19:55.929 "ddgst": ${ddgst:-false} 00:19:55.929 }, 00:19:55.929 "method": "bdev_nvme_attach_controller" 00:19:55.929 } 00:19:55.929 EOF 00:19:55.929 )") 00:19:55.929 17:08:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:55.929 17:08:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:19:55.929 17:08:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:19:55.929 17:08:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:55.929 "params": { 00:19:55.929 "name": "Nvme1", 00:19:55.929 "trtype": "tcp", 00:19:55.929 "traddr": "10.0.0.2", 00:19:55.929 "adrfam": "ipv4", 00:19:55.929 "trsvcid": "4420", 00:19:55.929 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.929 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:55.929 "hdgst": false, 00:19:55.929 "ddgst": false 00:19:55.929 }, 00:19:55.929 "method": "bdev_nvme_attach_controller" 00:19:55.929 },{ 00:19:55.929 "params": { 00:19:55.929 "name": "Nvme2", 00:19:55.929 "trtype": "tcp", 00:19:55.929 "traddr": "10.0.0.2", 00:19:55.929 "adrfam": "ipv4", 00:19:55.929 "trsvcid": "4420", 00:19:55.929 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:55.929 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:55.930 "hdgst": false, 00:19:55.930 "ddgst": false 00:19:55.930 }, 00:19:55.930 "method": "bdev_nvme_attach_controller" 00:19:55.930 },{ 00:19:55.930 "params": { 00:19:55.930 "name": "Nvme3", 00:19:55.930 "trtype": "tcp", 00:19:55.930 "traddr": "10.0.0.2", 00:19:55.930 "adrfam": "ipv4", 00:19:55.930 "trsvcid": "4420", 00:19:55.930 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:55.930 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:55.930 "hdgst": false, 00:19:55.930 "ddgst": false 00:19:55.930 }, 00:19:55.930 "method": "bdev_nvme_attach_controller" 00:19:55.930 },{ 00:19:55.930 "params": { 00:19:55.930 "name": "Nvme4", 00:19:55.930 "trtype": "tcp", 00:19:55.930 "traddr": "10.0.0.2", 00:19:55.930 "adrfam": "ipv4", 00:19:55.930 "trsvcid": "4420", 00:19:55.930 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:55.930 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:55.930 "hdgst": false, 00:19:55.930 "ddgst": false 00:19:55.930 }, 00:19:55.930 "method": "bdev_nvme_attach_controller" 00:19:55.930 },{ 00:19:55.930 "params": { 00:19:55.930 "name": "Nvme5", 00:19:55.930 "trtype": "tcp", 00:19:55.930 "traddr": "10.0.0.2", 00:19:55.930 "adrfam": "ipv4", 00:19:55.930 "trsvcid": "4420", 00:19:55.930 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:55.930 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:55.930 "hdgst": false, 00:19:55.930 "ddgst": false 00:19:55.930 }, 00:19:55.930 "method": "bdev_nvme_attach_controller" 00:19:55.930 },{ 00:19:55.930 "params": { 00:19:55.930 "name": "Nvme6", 00:19:55.930 "trtype": "tcp", 00:19:55.930 "traddr": "10.0.0.2", 00:19:55.930 "adrfam": "ipv4", 00:19:55.930 "trsvcid": "4420", 00:19:55.930 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:55.930 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:55.930 "hdgst": false, 00:19:55.930 "ddgst": false 00:19:55.930 }, 00:19:55.930 "method": "bdev_nvme_attach_controller" 00:19:55.930 },{ 00:19:55.930 "params": { 00:19:55.930 "name": "Nvme7", 00:19:55.930 "trtype": "tcp", 00:19:55.930 "traddr": "10.0.0.2", 00:19:55.930 "adrfam": "ipv4", 00:19:55.930 "trsvcid": "4420", 00:19:55.930 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:55.930 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:55.930 "hdgst": false, 00:19:55.930 "ddgst": false 00:19:55.930 }, 00:19:55.930 "method": "bdev_nvme_attach_controller" 00:19:55.930 },{ 00:19:55.930 "params": { 00:19:55.930 "name": "Nvme8", 00:19:55.930 "trtype": "tcp", 00:19:55.930 "traddr": "10.0.0.2", 00:19:55.930 "adrfam": "ipv4", 00:19:55.930 "trsvcid": "4420", 00:19:55.930 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:55.930 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:55.930 "hdgst": false, 
00:19:55.930 "ddgst": false 00:19:55.930 }, 00:19:55.930 "method": "bdev_nvme_attach_controller" 00:19:55.930 },{ 00:19:55.930 "params": { 00:19:55.930 "name": "Nvme9", 00:19:55.930 "trtype": "tcp", 00:19:55.930 "traddr": "10.0.0.2", 00:19:55.930 "adrfam": "ipv4", 00:19:55.930 "trsvcid": "4420", 00:19:55.930 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:55.930 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:55.930 "hdgst": false, 00:19:55.930 "ddgst": false 00:19:55.930 }, 00:19:55.930 "method": "bdev_nvme_attach_controller" 00:19:55.930 },{ 00:19:55.930 "params": { 00:19:55.930 "name": "Nvme10", 00:19:55.930 "trtype": "tcp", 00:19:55.930 "traddr": "10.0.0.2", 00:19:55.930 "adrfam": "ipv4", 00:19:55.930 "trsvcid": "4420", 00:19:55.930 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:55.930 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:55.930 "hdgst": false, 00:19:55.930 "ddgst": false 00:19:55.930 }, 00:19:55.930 "method": "bdev_nvme_attach_controller" 00:19:55.930 }' 00:19:55.930 [2024-07-12 17:08:55.467821] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:19:55.930 [2024-07-12 17:08:55.467896] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1166420 ] 00:19:55.930 EAL: No free 2048 kB hugepages reported on node 1 00:19:55.930 [2024-07-12 17:08:55.533769] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.189 [2024-07-12 17:08:55.646687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:57.565 Running I/O for 10 seconds... 00:19:57.823 17:08:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:57.823 17:08:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:19:57.823 17:08:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:57.823 17:08:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.823 17:08:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:57.823 17:08:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.823 17:08:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:19:57.823 17:08:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:57.823 17:08:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:19:57.823 17:08:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:19:57.823 17:08:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:19:57.823 17:08:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:19:57.823 17:08:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:57.823 17:08:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:57.823 17:08:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:57.823 17:08:57 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.823 17:08:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:57.823 17:08:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.823 17:08:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:19:57.823 17:08:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:19:57.823 17:08:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:58.081 17:08:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:58.081 17:08:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:58.081 17:08:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:58.081 17:08:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:58.081 17:08:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.081 17:08:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:58.081 17:08:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.340 17:08:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=73 00:19:58.341 17:08:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 73 -ge 100 ']' 00:19:58.341 17:08:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:58.600 17:08:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:58.600 17:08:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:58.600 17:08:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:58.600 17:08:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:58.600 17:08:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.600 17:08:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:58.600 17:08:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.600 17:08:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=142 00:19:58.600 17:08:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 142 -ge 100 ']' 00:19:58.600 17:08:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:19:58.600 17:08:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:19:58.600 17:08:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:19:58.600 17:08:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1166420 00:19:58.601 17:08:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1166420 ']' 00:19:58.601 17:08:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1166420 00:19:58.601 17:08:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@953 -- # uname 00:19:58.601 17:08:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:58.601 17:08:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1166420 00:19:58.601 17:08:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:58.601 17:08:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:58.601 17:08:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1166420' 00:19:58.601 killing process with pid 1166420 00:19:58.601 17:08:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1166420 00:19:58.601 17:08:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1166420 00:19:58.601 Received shutdown signal, test time was about 0.953127 seconds 00:19:58.601 00:19:58.601 Latency(us) 00:19:58.601 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:58.601 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:58.601 Verification LBA range: start 0x0 length 0x400 00:19:58.601 Nvme1n1 : 0.95 270.84 16.93 0.00 0.00 231565.65 18932.62 253211.69 00:19:58.601 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:58.601 Verification LBA range: start 0x0 length 0x400 00:19:58.601 Nvme2n1 : 0.93 206.59 12.91 0.00 0.00 300093.95 20971.52 265639.25 00:19:58.601 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:58.601 Verification LBA range: start 0x0 length 0x400 00:19:58.601 Nvme3n1 : 0.95 268.83 16.80 0.00 0.00 225936.50 18155.90 278066.82 00:19:58.601 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:58.601 Verification LBA range: start 0x0 length 0x400 00:19:58.601 Nvme4n1 : 0.95 269.68 16.85 0.00 0.00 220709.55 31068.92 257872.02 00:19:58.601 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:58.601 Verification LBA range: start 0x0 length 0x400 00:19:58.601 Nvme5n1 : 0.91 210.71 13.17 0.00 0.00 275186.98 31263.10 254765.13 00:19:58.601 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:58.601 Verification LBA range: start 0x0 length 0x400 00:19:58.601 Nvme6n1 : 0.90 213.50 13.34 0.00 0.00 265517.64 21165.70 257872.02 00:19:58.601 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:58.601 Verification LBA range: start 0x0 length 0x400 00:19:58.601 Nvme7n1 : 0.92 208.70 13.04 0.00 0.00 265930.27 18155.90 265639.25 00:19:58.601 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:58.601 Verification LBA range: start 0x0 length 0x400 00:19:58.601 Nvme8n1 : 0.91 210.00 13.13 0.00 0.00 258177.71 20194.80 243891.01 00:19:58.601 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:58.601 Verification LBA range: start 0x0 length 0x400 00:19:58.601 Nvme9n1 : 0.93 205.37 12.84 0.00 0.00 259377.68 21068.61 264085.81 00:19:58.601 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:58.601 Verification LBA range: start 0x0 length 0x400 00:19:58.601 Nvme10n1 : 0.94 204.03 12.75 0.00 0.00 255594.95 19029.71 288940.94 00:19:58.601 
=================================================================================================================== 00:19:58.601 Total : 2268.25 141.77 0.00 0.00 253105.59 18155.90 288940.94 00:19:58.861 17:08:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:20:00.239 17:08:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1166352 00:20:00.239 17:08:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:20:00.239 17:08:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:00.239 17:08:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:00.239 17:08:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:00.239 17:08:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:00.239 17:08:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:00.239 17:08:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:20:00.239 17:08:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:00.239 17:08:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:20:00.239 17:08:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:00.239 17:08:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:00.239 rmmod nvme_tcp 00:20:00.239 rmmod nvme_fabrics 00:20:00.239 rmmod nvme_keyring 00:20:00.239 17:08:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:00.239 17:08:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:20:00.239 17:08:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:20:00.239 17:08:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1166352 ']' 00:20:00.239 17:08:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1166352 00:20:00.239 17:08:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1166352 ']' 00:20:00.239 17:08:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1166352 00:20:00.239 17:08:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:20:00.239 17:08:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:00.239 17:08:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1166352 00:20:00.239 17:08:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:00.239 17:08:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:00.239 17:08:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1166352' 00:20:00.239 killing process with pid 1166352 00:20:00.239 17:08:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1166352 00:20:00.239 17:08:59 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1166352 00:20:00.499 17:09:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:00.499 17:09:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:00.499 17:09:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:00.499 17:09:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:00.499 17:09:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:00.499 17:09:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.499 17:09:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:00.499 17:09:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:02.481 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:02.481 00:20:02.481 real 0m7.855s 00:20:02.481 user 0m24.061s 00:20:02.481 sys 0m1.461s 00:20:02.481 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:02.481 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:02.481 ************************************ 00:20:02.481 END TEST nvmf_shutdown_tc2 00:20:02.481 ************************************ 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:02.740 ************************************ 00:20:02.740 START TEST nvmf_shutdown_tc3 00:20:02.740 ************************************ 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:02.740 17:09:02 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:02.740 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:02.740 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:02.740 Found net devices under 0000:84:00.0: cvl_0_0 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:02.740 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:02.741 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:02.741 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:02.741 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:02.741 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:02.741 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:02.741 Found net devices under 0000:84:00.1: cvl_0_1 00:20:02.741 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:02.741 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:02.741 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:02.741 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:02.741 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:02.741 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:02.741 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:02.741 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:02.741 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:02.741 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:02.741 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:02.741 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:02.741 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:02.741 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:02.741 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:02.741 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:02.741 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:02.741 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:02.741 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:02.741 17:09:02 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:02.741 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:02.741 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:02.741 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:02.741 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:02.741 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:02.741 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:02.741 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:02.741 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:20:02.741 00:20:02.741 --- 10.0.0.2 ping statistics --- 00:20:02.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:02.741 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:20:02.741 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:02.741 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:02.741 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:20:02.741 00:20:02.741 --- 10.0.0.1 ping statistics --- 00:20:02.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:02.741 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:20:02.741 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:02.741 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:20:02.741 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:02.741 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:02.741 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:02.741 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:02.741 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:02.741 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:02.741 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:02.741 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:02.741 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:02.741 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:02.741 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:02.741 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1167561 00:20:02.741 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF -m 0x1E 00:20:02.741 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1167561 00:20:02.741 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1167561 ']' 00:20:02.741 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:02.741 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:02.741 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:02.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:02.741 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:02.741 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:02.999 [2024-07-12 17:09:02.460566] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:20:02.999 [2024-07-12 17:09:02.460634] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:02.999 EAL: No free 2048 kB hugepages reported on node 1 00:20:02.999 [2024-07-12 17:09:02.522317] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:02.999 [2024-07-12 17:09:02.626809] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:02.999 [2024-07-12 17:09:02.626866] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:02.999 [2024-07-12 17:09:02.626886] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:02.999 [2024-07-12 17:09:02.626897] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:02.999 [2024-07-12 17:09:02.626906] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
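The nvmf_tgt launch above is wrapped in ip netns exec cvl_0_0_ns_spdk so the target only ever sees the namespaced cvl_0_0 interface, and waitforlisten then blocks until the target's RPC socket answers before the test continues. A minimal sketch of that readiness check, assuming the stock scripts/rpc.py client (the retry count, sleep interval and rpc_get_methods probe are illustrative, not the exact ones in autotest_common.sh):

    # sketch: wait for the nvmf_tgt RPC socket before issuing any RPCs
    nvmfpid=1167561
    sock=/var/tmp/spdk.sock
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for ((i = 0; i < 100; i++)); do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early"; exit 1; }
        "$rpc" -s "$sock" rpc_get_methods &>/dev/null && break   # socket is up, target is ready
        sleep 0.5
    done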
00:20:02.999 [2024-07-12 17:09:02.626990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:02.999 [2024-07-12 17:09:02.627038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:02.999 [2024-07-12 17:09:02.627096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:02.999 [2024-07-12 17:09:02.627098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:03.265 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:03.265 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:20:03.265 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:03.265 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:03.265 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:03.265 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:03.265 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:03.265 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.265 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:03.265 [2024-07-12 17:09:02.770346] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:03.265 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.266 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:03.266 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:03.266 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:03.266 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:03.266 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:03.266 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:03.266 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:03.266 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:03.266 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:03.266 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:03.266 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:03.266 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:03.266 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:03.266 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:03.266 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:03.266 17:09:02 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:03.266 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:03.266 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:03.266 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:03.266 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:03.266 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:03.266 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:03.266 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:03.267 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:03.267 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:03.267 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:03.267 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.267 17:09:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:03.267 Malloc1 00:20:03.267 [2024-07-12 17:09:02.859489] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:03.267 Malloc2 00:20:03.267 Malloc3 00:20:03.535 Malloc4 00:20:03.535 Malloc5 00:20:03.535 Malloc6 00:20:03.535 Malloc7 00:20:03.535 Malloc8 00:20:03.793 Malloc9 00:20:03.793 Malloc10 00:20:03.793 17:09:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.793 17:09:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:03.793 17:09:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:03.793 17:09:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:03.793 17:09:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1167620 00:20:03.793 17:09:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1167620 /var/tmp/bdevperf.sock 00:20:03.793 17:09:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1167620 ']' 00:20:03.793 17:09:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:03.793 17:09:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:03.793 17:09:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:03.793 17:09:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:03.793 17:09:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
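Each cat in the loop above appends one subsystem block to rpcs.txt, and the single rpc_cmd call at shutdown.sh line 35 then replays the whole file against the target, which is why Malloc1 through Malloc10 and the 10.0.0.2:4420 listener all appear at once. Per subsystem the generated block amounts to roughly the following RPCs (a sketch; the malloc bdev size, block size and serial numbers are illustrative placeholders, not the values hard-coded in shutdown.sh):

    # sketch of one iteration's worth of rpcs.txt (i runs 1..10)
    i=1
    bdev_malloc_create -b Malloc$i 64 512
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420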
00:20:03.793 17:09:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:20:03.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:03.793 17:09:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:03.793 17:09:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:20:03.793 17:09:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:03.793 17:09:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.793 17:09:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.793 { 00:20:03.793 "params": { 00:20:03.793 "name": "Nvme$subsystem", 00:20:03.793 "trtype": "$TEST_TRANSPORT", 00:20:03.793 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.793 "adrfam": "ipv4", 00:20:03.793 "trsvcid": "$NVMF_PORT", 00:20:03.793 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.793 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.793 "hdgst": ${hdgst:-false}, 00:20:03.793 "ddgst": ${ddgst:-false} 00:20:03.793 }, 00:20:03.793 "method": "bdev_nvme_attach_controller" 00:20:03.793 } 00:20:03.793 EOF 00:20:03.793 )") 00:20:03.793 17:09:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:03.793 17:09:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.793 17:09:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.793 { 00:20:03.793 "params": { 00:20:03.793 "name": "Nvme$subsystem", 00:20:03.793 "trtype": "$TEST_TRANSPORT", 00:20:03.793 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.793 "adrfam": "ipv4", 00:20:03.793 "trsvcid": "$NVMF_PORT", 00:20:03.793 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.793 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.793 "hdgst": ${hdgst:-false}, 00:20:03.793 "ddgst": ${ddgst:-false} 00:20:03.793 }, 00:20:03.793 "method": "bdev_nvme_attach_controller" 00:20:03.793 } 00:20:03.793 EOF 00:20:03.793 )") 00:20:03.793 17:09:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:03.793 17:09:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.793 17:09:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.793 { 00:20:03.793 "params": { 00:20:03.793 "name": "Nvme$subsystem", 00:20:03.793 "trtype": "$TEST_TRANSPORT", 00:20:03.793 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.793 "adrfam": "ipv4", 00:20:03.793 "trsvcid": "$NVMF_PORT", 00:20:03.793 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.793 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.793 "hdgst": ${hdgst:-false}, 00:20:03.793 "ddgst": ${ddgst:-false} 00:20:03.793 }, 00:20:03.793 "method": "bdev_nvme_attach_controller" 00:20:03.793 } 00:20:03.793 EOF 00:20:03.793 )") 00:20:03.793 17:09:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:03.793 17:09:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.793 17:09:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.793 { 00:20:03.793 "params": { 00:20:03.793 "name": "Nvme$subsystem", 00:20:03.793 "trtype": "$TEST_TRANSPORT", 00:20:03.793 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.793 "adrfam": "ipv4", 00:20:03.793 "trsvcid": "$NVMF_PORT", 00:20:03.793 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.793 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.793 "hdgst": ${hdgst:-false}, 00:20:03.793 "ddgst": ${ddgst:-false} 00:20:03.793 }, 00:20:03.793 "method": "bdev_nvme_attach_controller" 00:20:03.793 } 00:20:03.793 EOF 00:20:03.793 )") 00:20:03.793 17:09:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:03.793 17:09:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.793 17:09:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.793 { 00:20:03.793 "params": { 00:20:03.793 "name": "Nvme$subsystem", 00:20:03.793 "trtype": "$TEST_TRANSPORT", 00:20:03.793 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.793 "adrfam": "ipv4", 00:20:03.793 "trsvcid": "$NVMF_PORT", 00:20:03.793 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.793 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.793 "hdgst": ${hdgst:-false}, 00:20:03.793 "ddgst": ${ddgst:-false} 00:20:03.793 }, 00:20:03.793 "method": "bdev_nvme_attach_controller" 00:20:03.793 } 00:20:03.793 EOF 00:20:03.793 )") 00:20:03.793 17:09:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:03.793 17:09:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.793 17:09:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.793 { 00:20:03.793 "params": { 00:20:03.793 "name": "Nvme$subsystem", 00:20:03.793 "trtype": "$TEST_TRANSPORT", 00:20:03.793 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.793 "adrfam": "ipv4", 00:20:03.793 "trsvcid": "$NVMF_PORT", 00:20:03.793 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.793 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.793 "hdgst": ${hdgst:-false}, 00:20:03.793 "ddgst": ${ddgst:-false} 00:20:03.793 }, 00:20:03.793 "method": "bdev_nvme_attach_controller" 00:20:03.793 } 00:20:03.793 EOF 00:20:03.793 )") 00:20:03.793 17:09:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:03.793 17:09:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.793 17:09:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.793 { 00:20:03.793 "params": { 00:20:03.793 "name": "Nvme$subsystem", 00:20:03.793 "trtype": "$TEST_TRANSPORT", 00:20:03.793 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.793 "adrfam": "ipv4", 00:20:03.793 "trsvcid": "$NVMF_PORT", 00:20:03.793 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.793 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.793 "hdgst": ${hdgst:-false}, 00:20:03.793 "ddgst": ${ddgst:-false} 00:20:03.793 }, 00:20:03.793 "method": "bdev_nvme_attach_controller" 00:20:03.793 } 00:20:03.793 EOF 00:20:03.793 )") 00:20:03.793 17:09:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:03.793 17:09:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.793 17:09:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.793 { 00:20:03.793 "params": { 00:20:03.793 "name": "Nvme$subsystem", 00:20:03.793 "trtype": "$TEST_TRANSPORT", 00:20:03.793 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:20:03.793 "adrfam": "ipv4", 00:20:03.793 "trsvcid": "$NVMF_PORT", 00:20:03.793 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.793 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.793 "hdgst": ${hdgst:-false}, 00:20:03.793 "ddgst": ${ddgst:-false} 00:20:03.793 }, 00:20:03.793 "method": "bdev_nvme_attach_controller" 00:20:03.793 } 00:20:03.793 EOF 00:20:03.793 )") 00:20:03.793 17:09:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:03.793 17:09:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.793 17:09:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.793 { 00:20:03.793 "params": { 00:20:03.793 "name": "Nvme$subsystem", 00:20:03.793 "trtype": "$TEST_TRANSPORT", 00:20:03.793 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.793 "adrfam": "ipv4", 00:20:03.793 "trsvcid": "$NVMF_PORT", 00:20:03.793 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.793 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.793 "hdgst": ${hdgst:-false}, 00:20:03.793 "ddgst": ${ddgst:-false} 00:20:03.793 }, 00:20:03.793 "method": "bdev_nvme_attach_controller" 00:20:03.793 } 00:20:03.793 EOF 00:20:03.793 )") 00:20:03.793 17:09:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:03.793 17:09:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.793 17:09:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.793 { 00:20:03.793 "params": { 00:20:03.793 "name": "Nvme$subsystem", 00:20:03.793 "trtype": "$TEST_TRANSPORT", 00:20:03.793 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.793 "adrfam": "ipv4", 00:20:03.793 "trsvcid": "$NVMF_PORT", 00:20:03.793 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.793 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.793 "hdgst": ${hdgst:-false}, 00:20:03.793 "ddgst": ${ddgst:-false} 00:20:03.793 }, 00:20:03.793 "method": "bdev_nvme_attach_controller" 00:20:03.793 } 00:20:03.793 EOF 00:20:03.793 )") 00:20:03.793 17:09:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:03.793 17:09:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:20:03.793 17:09:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:20:03.793 17:09:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:03.793 "params": { 00:20:03.793 "name": "Nvme1", 00:20:03.793 "trtype": "tcp", 00:20:03.793 "traddr": "10.0.0.2", 00:20:03.793 "adrfam": "ipv4", 00:20:03.793 "trsvcid": "4420", 00:20:03.793 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.793 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:03.793 "hdgst": false, 00:20:03.793 "ddgst": false 00:20:03.793 }, 00:20:03.793 "method": "bdev_nvme_attach_controller" 00:20:03.793 },{ 00:20:03.793 "params": { 00:20:03.793 "name": "Nvme2", 00:20:03.793 "trtype": "tcp", 00:20:03.793 "traddr": "10.0.0.2", 00:20:03.793 "adrfam": "ipv4", 00:20:03.793 "trsvcid": "4420", 00:20:03.793 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:03.793 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:03.793 "hdgst": false, 00:20:03.793 "ddgst": false 00:20:03.793 }, 00:20:03.793 "method": "bdev_nvme_attach_controller" 00:20:03.793 },{ 00:20:03.793 "params": { 00:20:03.793 "name": "Nvme3", 00:20:03.793 "trtype": "tcp", 00:20:03.793 "traddr": "10.0.0.2", 00:20:03.793 "adrfam": "ipv4", 00:20:03.793 "trsvcid": "4420", 00:20:03.793 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:03.793 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:03.793 "hdgst": false, 00:20:03.793 "ddgst": false 00:20:03.793 }, 00:20:03.793 "method": "bdev_nvme_attach_controller" 00:20:03.793 },{ 00:20:03.793 "params": { 00:20:03.793 "name": "Nvme4", 00:20:03.793 "trtype": "tcp", 00:20:03.793 "traddr": "10.0.0.2", 00:20:03.793 "adrfam": "ipv4", 00:20:03.793 "trsvcid": "4420", 00:20:03.793 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:03.793 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:03.793 "hdgst": false, 00:20:03.793 "ddgst": false 00:20:03.793 }, 00:20:03.793 "method": "bdev_nvme_attach_controller" 00:20:03.793 },{ 00:20:03.793 "params": { 00:20:03.793 "name": "Nvme5", 00:20:03.793 "trtype": "tcp", 00:20:03.793 "traddr": "10.0.0.2", 00:20:03.793 "adrfam": "ipv4", 00:20:03.793 "trsvcid": "4420", 00:20:03.793 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:03.793 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:03.793 "hdgst": false, 00:20:03.793 "ddgst": false 00:20:03.793 }, 00:20:03.793 "method": "bdev_nvme_attach_controller" 00:20:03.793 },{ 00:20:03.793 "params": { 00:20:03.793 "name": "Nvme6", 00:20:03.793 "trtype": "tcp", 00:20:03.793 "traddr": "10.0.0.2", 00:20:03.793 "adrfam": "ipv4", 00:20:03.793 "trsvcid": "4420", 00:20:03.793 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:03.793 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:03.793 "hdgst": false, 00:20:03.793 "ddgst": false 00:20:03.793 }, 00:20:03.793 "method": "bdev_nvme_attach_controller" 00:20:03.793 },{ 00:20:03.793 "params": { 00:20:03.793 "name": "Nvme7", 00:20:03.793 "trtype": "tcp", 00:20:03.793 "traddr": "10.0.0.2", 00:20:03.793 "adrfam": "ipv4", 00:20:03.793 "trsvcid": "4420", 00:20:03.793 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:03.793 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:03.793 "hdgst": false, 00:20:03.793 "ddgst": false 00:20:03.793 }, 00:20:03.793 "method": "bdev_nvme_attach_controller" 00:20:03.793 },{ 00:20:03.793 "params": { 00:20:03.793 "name": "Nvme8", 00:20:03.793 "trtype": "tcp", 00:20:03.793 "traddr": "10.0.0.2", 00:20:03.793 "adrfam": "ipv4", 00:20:03.793 "trsvcid": "4420", 00:20:03.793 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:03.793 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:03.794 "hdgst": false, 
00:20:03.794 "ddgst": false 00:20:03.794 }, 00:20:03.794 "method": "bdev_nvme_attach_controller" 00:20:03.794 },{ 00:20:03.794 "params": { 00:20:03.794 "name": "Nvme9", 00:20:03.794 "trtype": "tcp", 00:20:03.794 "traddr": "10.0.0.2", 00:20:03.794 "adrfam": "ipv4", 00:20:03.794 "trsvcid": "4420", 00:20:03.794 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:03.794 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:03.794 "hdgst": false, 00:20:03.794 "ddgst": false 00:20:03.794 }, 00:20:03.794 "method": "bdev_nvme_attach_controller" 00:20:03.794 },{ 00:20:03.794 "params": { 00:20:03.794 "name": "Nvme10", 00:20:03.794 "trtype": "tcp", 00:20:03.794 "traddr": "10.0.0.2", 00:20:03.794 "adrfam": "ipv4", 00:20:03.794 "trsvcid": "4420", 00:20:03.794 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:03.794 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:03.794 "hdgst": false, 00:20:03.794 "ddgst": false 00:20:03.794 }, 00:20:03.794 "method": "bdev_nvme_attach_controller" 00:20:03.794 }' 00:20:03.794 [2024-07-12 17:09:03.380223] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:20:03.794 [2024-07-12 17:09:03.380301] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1167620 ] 00:20:03.794 EAL: No free 2048 kB hugepages reported on node 1 00:20:03.794 [2024-07-12 17:09:03.448365] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.051 [2024-07-12 17:09:03.561257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.952 Running I/O for 10 seconds... 00:20:05.952 17:09:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:05.952 17:09:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:20:05.952 17:09:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:05.952 17:09:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.952 17:09:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:05.952 17:09:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.952 17:09:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:05.952 17:09:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:05.952 17:09:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:05.952 17:09:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:05.952 17:09:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:20:05.952 17:09:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:20:05.952 17:09:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:05.952 17:09:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:05.952 17:09:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme1n1 00:20:05.952 17:09:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:05.952 17:09:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.952 17:09:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:05.952 17:09:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.952 17:09:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:20:05.952 17:09:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:20:05.952 17:09:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:06.226 17:09:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:06.226 17:09:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:06.226 17:09:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:06.226 17:09:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:06.226 17:09:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.226 17:09:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:06.226 17:09:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.226 17:09:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:20:06.226 17:09:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:20:06.226 17:09:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:20:06.226 17:09:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:20:06.226 17:09:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:20:06.226 17:09:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1167561 00:20:06.226 17:09:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 1167561 ']' 00:20:06.226 17:09:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 1167561 00:20:06.226 17:09:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:20:06.226 17:09:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:06.226 17:09:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1167561 00:20:06.226 17:09:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:06.226 17:09:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:06.226 17:09:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1167561' 00:20:06.226 killing process with pid 1167561 00:20:06.226 17:09:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 1167561 00:20:06.226 17:09:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 1167561 00:20:06.226 
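waitforio is what keeps bdevperf busy while the target is torn down: it polls bdev_get_iostat for Nvme1n1 over bdevperf's RPC socket until at least 100 reads have completed (67 on the first pass above, 131 a quarter-second later), and only then does killprocess send a plain SIGTERM to nvmf_tgt pid 1167561 and wait for it, so tc3 exercises shutdown with I/O still in flight. The polling loop reduces to roughly the following (rpc_cmd is the autotest wrapper around scripts/rpc.py; the thresholds are the ones visible in the trace):

    # sketch of the waitforio poll seen above (as traced: 10 tries, 100-op threshold, 0.25 s sleep)
    waitforio() {
        local sock=$1 bdev=$2 ret=1 i=10 read_io_count
        while ((i != 0)); do
            read_io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" \
                            | jq -r '.bdevs[0].num_read_ops')
            if [ "$read_io_count" -ge 100 ]; then
                ret=0
                break
            fi
            sleep 0.25
            ((i--))
        done
        return $ret
    }
    waitforio /var/tmp/bdevperf.sock Nvme1n1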
[2024-07-12 17:09:05.779013] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.226 [2024-07-12 17:09:05.779110] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.226 [2024-07-12 17:09:05.779136] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.226 [2024-07-12 17:09:05.779151] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.226 [2024-07-12 17:09:05.779164] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.226 [2024-07-12 17:09:05.779177] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.226 [2024-07-12 17:09:05.779188] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.226 [2024-07-12 17:09:05.779201] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779213] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779226] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779238] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779250] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779262] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779273] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779285] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779298] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779310] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779322] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779334] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779346] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779359] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779371] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779383] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779395] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779407] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779419] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779430] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779443] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779454] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779470] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779484] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779496] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779508] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779521] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779533] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779546] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779558] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779582] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779595] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779607] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779620] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779632] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779644] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779657] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779669] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779681] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779693] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779705] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779717] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779729] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779763] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779779] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779792] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779805] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779818] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779830] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779847] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779861] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779873] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779886] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779899] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779911] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.779924] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfa80 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.781311] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.781345] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the 
state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.781361] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.781373] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.781385] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.781397] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.781409] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.781421] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.781434] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.781447] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.781460] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.781472] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.781485] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.781497] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.781510] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.781523] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.781535] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.781548] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.781560] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.781573] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.781591] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.781604] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.781617] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.781630] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.781643] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.781655] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.781668] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.781680] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.781692] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.781704] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.781728] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.781766] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.227 [2024-07-12 17:09:05.781782] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.781795] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.781809] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.781824] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.781837] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.781851] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.781864] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.781877] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.781891] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.781904] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.781917] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.781930] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.781943] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.228 [2024-07-12 
17:09:05.781956] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.781970] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.781987] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.782000] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.782014] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.782045] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.782074] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.782086] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.782099] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.782111] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.782123] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.782135] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.782147] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.782159] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.782171] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.782183] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.782196] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.782207] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2480 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.784602] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c03c0 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.784636] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c03c0 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.784655] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c03c0 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.784668] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c03c0 is same 
with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.784680] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c03c0 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.784693] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c03c0 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.786381] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.786408] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.786431] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.786444] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.786456] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.786473] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.786487] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.786499] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.786513] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.786525] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.786539] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.786552] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.786567] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.786580] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.786592] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.786605] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.786617] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.786629] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.786641] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.786654] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.786666] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.786679] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.786691] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.786703] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.786715] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.786734] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.786771] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.786785] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.786798] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.786811] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.786824] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.786836] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.786853] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.786867] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.786879] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.786892] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.786905] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.786918] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.786931] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.786943] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.786956] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the 
state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.786969] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.786981] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.786994] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.787007] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.787020] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.787033] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.787071] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.787084] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.787097] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.787123] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.787136] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.787148] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.228 [2024-07-12 17:09:05.787161] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.787174] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.787191] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.787204] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.787216] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0d20 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.788360] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.788386] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.788405] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.788418] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.788433] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.788446] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.788459] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.788472] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.788484] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.788498] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.788511] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.788524] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.788536] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.788549] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.788562] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.788576] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.788591] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.788604] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.788616] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.788630] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.788643] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.788656] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.788669] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.788681] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.788694] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.788707] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 
17:09:05.788730] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.788768] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.788783] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.788800] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.788814] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.788827] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.788840] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.788853] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.788866] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.788880] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.788893] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.788906] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.788919] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.788932] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.788946] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.788959] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.788972] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.788985] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.788998] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.789012] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.789025] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.789038] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same 
with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.789077] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.789090] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.789103] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.789115] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.789137] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.789149] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.789162] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.789175] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.789192] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.789205] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.789218] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.789231] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.789248] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.789261] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.789274] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c11c0 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.790557] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.790586] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.790604] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.790619] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.790632] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.790647] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.790660] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.790673] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.790685] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.790700] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.790713] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.790727] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.790748] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.790763] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.790778] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.790792] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.790804] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.790817] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.790831] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.790845] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.790863] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.229 [2024-07-12 17:09:05.790877] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.790889] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.790902] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.790915] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.790927] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.790939] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.790952] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the 
state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.790965] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.790977] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.790989] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.791002] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.791014] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.791035] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.791063] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.791075] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.791088] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.791100] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.791113] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.791126] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.791138] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.791150] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.791163] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.791175] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.791187] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.791199] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.791212] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.791227] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.791240] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.791252] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.791264] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.791277] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.791289] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.791301] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.791314] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.791327] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.791340] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.791352] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.791365] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.791379] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.791391] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.791404] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.791416] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1660 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.792487] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.792525] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.792540] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.792554] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.792566] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.792578] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.792590] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.792580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.230 
[2024-07-12 17:09:05.792603] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.792616] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.792622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-12 17:09:05.792628] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.230 the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.792651] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.792666] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with [2024-07-12 17:09:05.792665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:12the state(5) to be set 00:20:06.230 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.230 [2024-07-12 17:09:05.792682] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.792683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.230 [2024-07-12 17:09:05.792695] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.792701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.230 [2024-07-12 17:09:05.792708] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.792716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.230 [2024-07-12 17:09:05.792720] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.792731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:1[2024-07-12 17:09:05.792733] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.230 the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.792774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-12 17:09:05.792774] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.230 the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.792791] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.792794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.230 [2024-07-12 17:09:05.792804] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with the state(5) to be set 00:20:06.230 
[2024-07-12 17:09:05.792809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.230 [2024-07-12 17:09:05.792817] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.792826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.230 [2024-07-12 17:09:05.792830] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with the state(5) to be set 00:20:06.230 [2024-07-12 17:09:05.792842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-12 17:09:05.792843] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.231 the state(5) to be set 00:20:06.231 [2024-07-12 17:09:05.792857] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with the state(5) to be set 00:20:06.231 [2024-07-12 17:09:05.792860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.231 [2024-07-12 17:09:05.792869] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with the state(5) to be set 00:20:06.231 [2024-07-12 17:09:05.792879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.231 [2024-07-12 17:09:05.792882] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with the state(5) to be set 00:20:06.231 [2024-07-12 17:09:05.792895] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with the state(5) to be set 00:20:06.231 [2024-07-12 17:09:05.792897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.231 [2024-07-12 17:09:05.792908] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with the state(5) to be set 00:20:06.231 [2024-07-12 17:09:05.792912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.231 [2024-07-12 17:09:05.792920] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with the state(5) to be set 00:20:06.231 [2024-07-12 17:09:05.792929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.231 [2024-07-12 17:09:05.792934] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with the state(5) to be set 00:20:06.231 [2024-07-12 17:09:05.792944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.231 [2024-07-12 17:09:05.792946] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with the state(5) to be set 00:20:06.231 [2024-07-12 17:09:05.792961] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with [2024-07-12 17:09:05.792960] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:1the state(5) to be set 00:20:06.231 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.231 [2024-07-12 17:09:05.792976] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with [2024-07-12 17:09:05.792977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:20:06.231 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.231 [2024-07-12 17:09:05.792990] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with the state(5) to be set 00:20:06.231 [2024-07-12 17:09:05.792994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.231 [2024-07-12 17:09:05.793003] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with the state(5) to be set 00:20:06.231 [2024-07-12 17:09:05.793008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.231 [2024-07-12 17:09:05.793017] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with the state(5) to be set 00:20:06.231 [2024-07-12 17:09:05.793024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.231 [2024-07-12 17:09:05.793030] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with the state(5) to be set 00:20:06.231 [2024-07-12 17:09:05.793063] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with [2024-07-12 17:09:05.793063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:20:06.231 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.231 [2024-07-12 17:09:05.793079] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with the state(5) to be set 00:20:06.231 [2024-07-12 17:09:05.793082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.231 [2024-07-12 17:09:05.793098] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with the state(5) to be set 00:20:06.231 [2024-07-12 17:09:05.793100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.231 [2024-07-12 17:09:05.793111] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with the state(5) to be set 00:20:06.231 [2024-07-12 17:09:05.793117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.231 [2024-07-12 17:09:05.793125] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with the state(5) to be set 00:20:06.231 [2024-07-12 17:09:05.793131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.231 [2024-07-12 17:09:05.793138] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with the 
state(5) to be set 00:20:06.231 [2024-07-12 17:09:05.793146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.231 [2024-07-12 17:09:05.793152] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with the state(5) to be set 00:20:06.231 [2024-07-12 17:09:05.793160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.231 [2024-07-12 17:09:05.793165] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with the state(5) to be set 00:20:06.231 [2024-07-12 17:09:05.793176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:1[2024-07-12 17:09:05.793177] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.231 the state(5) to be set 00:20:06.231 [2024-07-12 17:09:05.793192] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with [2024-07-12 17:09:05.793192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:20:06.231 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.231 [2024-07-12 17:09:05.793205] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with the state(5) to be set 00:20:06.231 [2024-07-12 17:09:05.793209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.231 [2024-07-12 17:09:05.793218] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with the state(5) to be set 00:20:06.231 [2024-07-12 17:09:05.793224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.231 [2024-07-12 17:09:05.793231] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with the state(5) to be set 00:20:06.231 [2024-07-12 17:09:05.793239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.231 [2024-07-12 17:09:05.793243] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with the state(5) to be set 00:20:06.231 [2024-07-12 17:09:05.793254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.231 [2024-07-12 17:09:05.793256] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with the state(5) to be set 00:20:06.231 [2024-07-12 17:09:05.793269] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with [2024-07-12 17:09:05.793270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:1the state(5) to be set 00:20:06.231 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.231 [2024-07-12 17:09:05.793285] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with the state(5) to be set 00:20:06.231 [2024-07-12 17:09:05.793288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.231 [2024-07-12 17:09:05.793298] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with the state(5) to be set 00:20:06.231 [2024-07-12 17:09:05.793305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.231 [2024-07-12 17:09:05.793310] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with the state(5) to be set 00:20:06.231 [2024-07-12 17:09:05.793319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.231 [2024-07-12 17:09:05.793323] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with the state(5) to be set 00:20:06.231 [2024-07-12 17:09:05.793334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:1[2024-07-12 17:09:05.793335] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.231 the state(5) to be set 00:20:06.231 [2024-07-12 17:09:05.793349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-12 17:09:05.793349] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.231 the state(5) to be set 00:20:06.231 [2024-07-12 17:09:05.793364] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with the state(5) to be set 00:20:06.231 [2024-07-12 17:09:05.793366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.231 [2024-07-12 17:09:05.793376] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with the state(5) to be set 00:20:06.231 [2024-07-12 17:09:05.793380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.231 [2024-07-12 17:09:05.793388] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with the state(5) to be set 00:20:06.231 [2024-07-12 17:09:05.793395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.231 [2024-07-12 17:09:05.793400] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1b20 is same with the state(5) to be set 00:20:06.231 [2024-07-12 17:09:05.793408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.231 [2024-07-12 17:09:05.793424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.231 [2024-07-12 17:09:05.793437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.231 [2024-07-12 17:09:05.793452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.231 [2024-07-12 17:09:05.793466] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.231 [2024-07-12 17:09:05.793481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.231 [2024-07-12 17:09:05.793494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.232 [2024-07-12 17:09:05.793512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.232 [2024-07-12 17:09:05.793527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.232 [2024-07-12 17:09:05.793542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.232 [2024-07-12 17:09:05.793555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.232 [2024-07-12 17:09:05.793570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.232 [2024-07-12 17:09:05.793583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.232 [2024-07-12 17:09:05.793598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.232 [2024-07-12 17:09:05.793611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.232 [2024-07-12 17:09:05.793627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.232 [2024-07-12 17:09:05.793640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.232 [2024-07-12 17:09:05.793655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.232 [2024-07-12 17:09:05.793670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.232 [2024-07-12 17:09:05.793684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.232 [2024-07-12 17:09:05.793698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.232 [2024-07-12 17:09:05.793712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.232 [2024-07-12 17:09:05.793736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.232 [2024-07-12 17:09:05.793778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.232 [2024-07-12 17:09:05.793793] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.232 [2024-07-12 17:09:05.793809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.232 [2024-07-12 17:09:05.793822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.232 [2024-07-12 17:09:05.793838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.232 [2024-07-12 17:09:05.793852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.232 [2024-07-12 17:09:05.793867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.232 [2024-07-12 17:09:05.793881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.232 [2024-07-12 17:09:05.793897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.232 [2024-07-12 17:09:05.793914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.232 [2024-07-12 17:09:05.793930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.232 [2024-07-12 17:09:05.793944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.232 [2024-07-12 17:09:05.793960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.232 [2024-07-12 17:09:05.793974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.232 [2024-07-12 17:09:05.793989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.232 [2024-07-12 17:09:05.794003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.232 [2024-07-12 17:09:05.794019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.232 [2024-07-12 17:09:05.794032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.232 [2024-07-12 17:09:05.794058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.232 [2024-07-12 17:09:05.794072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.232 [2024-07-12 17:09:05.794088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.232 [2024-07-12 17:09:05.794101] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.232 [2024-07-12 17:09:05.794117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.232 [2024-07-12 17:09:05.794131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.232 [2024-07-12 17:09:05.794138] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.232 [2024-07-12 17:09:05.794146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.232 [2024-07-12 17:09:05.794163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.232 [2024-07-12 17:09:05.794164] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.232 [2024-07-12 17:09:05.794178] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.232 [2024-07-12 17:09:05.794179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.232 [2024-07-12 17:09:05.794193] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.232 [2024-07-12 17:09:05.794194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.232 [2024-07-12 17:09:05.794205] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.232 [2024-07-12 17:09:05.794211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.232 [2024-07-12 17:09:05.794218] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.232 [2024-07-12 17:09:05.794224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.232 [2024-07-12 17:09:05.794235] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.232 [2024-07-12 17:09:05.794241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.232 [2024-07-12 17:09:05.794250] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.232 [2024-07-12 17:09:05.794255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.232 [2024-07-12 17:09:05.794262] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.232 [2024-07-12 17:09:05.794271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:06.232 [2024-07-12 17:09:05.794274] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.232 [2024-07-12 17:09:05.794286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.232 [2024-07-12 17:09:05.794287] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.232 [2024-07-12 17:09:05.794316] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.232 [2024-07-12 17:09:05.794319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.232 [2024-07-12 17:09:05.794328] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.232 [2024-07-12 17:09:05.794333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.232 [2024-07-12 17:09:05.794341] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.232 [2024-07-12 17:09:05.794348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.232 [2024-07-12 17:09:05.794353] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.232 [2024-07-12 17:09:05.794362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.232 [2024-07-12 17:09:05.794365] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.232 [2024-07-12 17:09:05.794377] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.232 [2024-07-12 17:09:05.794377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.232 [2024-07-12 17:09:05.794391] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.232 [2024-07-12 17:09:05.794393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.232 [2024-07-12 17:09:05.794404] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.232 [2024-07-12 17:09:05.794409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.232 [2024-07-12 17:09:05.794417] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.232 [2024-07-12 17:09:05.794422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.232 [2024-07-12 17:09:05.794447] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 
00:20:06.232 [2024-07-12 17:09:05.794451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.232 [2024-07-12 17:09:05.794461] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.232 [2024-07-12 17:09:05.794465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.232 [2024-07-12 17:09:05.794473] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.232 [2024-07-12 17:09:05.794480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.232 [2024-07-12 17:09:05.794486] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.232 [2024-07-12 17:09:05.794494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.233 [2024-07-12 17:09:05.794498] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.233 [2024-07-12 17:09:05.794510] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.233 [2024-07-12 17:09:05.794510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.233 [2024-07-12 17:09:05.794525] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.233 [2024-07-12 17:09:05.794526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.233 [2024-07-12 17:09:05.794539] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.233 [2024-07-12 17:09:05.794542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.233 [2024-07-12 17:09:05.794552] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.233 [2024-07-12 17:09:05.794556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.233 [2024-07-12 17:09:05.794564] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.233 [2024-07-12 17:09:05.794571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.233 [2024-07-12 17:09:05.794577] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.233 [2024-07-12 17:09:05.794585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.233 [2024-07-12 17:09:05.794589] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the 
state(5) to be set 00:20:06.233 [2024-07-12 17:09:05.794600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.233 [2024-07-12 17:09:05.794602] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.233 [2024-07-12 17:09:05.794615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.233 [2024-07-12 17:09:05.794617] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.233 [2024-07-12 17:09:05.794633] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.233 [2024-07-12 17:09:05.794636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.233 [2024-07-12 17:09:05.794646] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.233 [2024-07-12 17:09:05.794650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.233 [2024-07-12 17:09:05.794659] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.233 [2024-07-12 17:09:05.794665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.233 [2024-07-12 17:09:05.794671] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.233 [2024-07-12 17:09:05.794679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.233 [2024-07-12 17:09:05.794684] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.233 [2024-07-12 17:09:05.794694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.233 [2024-07-12 17:09:05.794697] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.233 [2024-07-12 17:09:05.794708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.233 [2024-07-12 17:09:05.794710] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.233 [2024-07-12 17:09:05.794729] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.233 [2024-07-12 17:09:05.794762] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.233 [2024-07-12 17:09:05.794778] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.233 [2024-07-12 17:09:05.794782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on 
qpair id 1 00:20:06.233 [2024-07-12 17:09:05.794792] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.233 [2024-07-12 17:09:05.794805] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.233 [2024-07-12 17:09:05.794817] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.233 [2024-07-12 17:09:05.794830] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.233 [2024-07-12 17:09:05.794842] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.233 [2024-07-12 17:09:05.794855] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.233 [2024-07-12 17:09:05.794864] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x140bd40 was disconnected and freed. reset controller. 00:20:06.233 [2024-07-12 17:09:05.794867] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.233 [2024-07-12 17:09:05.794885] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.233 [2024-07-12 17:09:05.794898] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.233 [2024-07-12 17:09:05.794910] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.233 [2024-07-12 17:09:05.794923] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.233 [2024-07-12 17:09:05.794936] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.233 [2024-07-12 17:09:05.794949] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.233 [2024-07-12 17:09:05.794961] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.233 [2024-07-12 17:09:05.794973] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.233 [2024-07-12 17:09:05.794986] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.233 [2024-07-12 17:09:05.794999] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.233 [2024-07-12 17:09:05.795011] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.233 [2024-07-12 17:09:05.795023] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1fc0 is same with the state(5) to be set 00:20:06.233 [2024-07-12 17:09:05.795223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.233 [2024-07-12 17:09:05.795246] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.233 [2024-07-12 17:09:05.795266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.233 [2024-07-12 17:09:05.795287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.233 [2024-07-12 17:09:05.795303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.233 [2024-07-12 17:09:05.795318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.233 [2024-07-12 17:09:05.795333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.233 [2024-07-12 17:09:05.795346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.233 [2024-07-12 17:09:05.795361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.233 [2024-07-12 17:09:05.795383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.233 [2024-07-12 17:09:05.795399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.233 [2024-07-12 17:09:05.795412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.233 [2024-07-12 17:09:05.795426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.233 [2024-07-12 17:09:05.795439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.233 [2024-07-12 17:09:05.795459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.233 [2024-07-12 17:09:05.795476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.233 [2024-07-12 17:09:05.795491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.233 [2024-07-12 17:09:05.795504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.233 [2024-07-12 17:09:05.795519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.233 [2024-07-12 17:09:05.795532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.233 [2024-07-12 17:09:05.795547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.233 [2024-07-12 17:09:05.795560] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.233 [2024-07-12 17:09:05.795576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.233 [2024-07-12 17:09:05.795589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.233 [2024-07-12 17:09:05.795604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.233 [2024-07-12 17:09:05.795616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.233 [2024-07-12 17:09:05.795631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.233 [2024-07-12 17:09:05.795645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.233 [2024-07-12 17:09:05.795660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.233 [2024-07-12 17:09:05.795675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.233 [2024-07-12 17:09:05.795691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.233 [2024-07-12 17:09:05.795706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.233 [2024-07-12 17:09:05.795744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.234 [2024-07-12 17:09:05.795761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.234 [2024-07-12 17:09:05.795777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.234 [2024-07-12 17:09:05.795797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.234 [2024-07-12 17:09:05.795814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.234 [2024-07-12 17:09:05.795828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.234 [2024-07-12 17:09:05.795844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.234 [2024-07-12 17:09:05.795861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.234 [2024-07-12 17:09:05.795878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.234 [2024-07-12 17:09:05.795892] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.234 [2024-07-12 17:09:05.795908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.234 [2024-07-12 17:09:05.795922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.234 [2024-07-12 17:09:05.795938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.234 [2024-07-12 17:09:05.795952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.234 [2024-07-12 17:09:05.795968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.234 [2024-07-12 17:09:05.795982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.234 [2024-07-12 17:09:05.795998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.234 [2024-07-12 17:09:05.796011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.234 [2024-07-12 17:09:05.796027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.234 [2024-07-12 17:09:05.796042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.234 [2024-07-12 17:09:05.796059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.234 [2024-07-12 17:09:05.796072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.234 [2024-07-12 17:09:05.796089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.234 [2024-07-12 17:09:05.796102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.234 [2024-07-12 17:09:05.796118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.234 [2024-07-12 17:09:05.796132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.234 [2024-07-12 17:09:05.796148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.234 [2024-07-12 17:09:05.796162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.234 [2024-07-12 17:09:05.796178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.234 [2024-07-12 17:09:05.796193] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.234 [2024-07-12 17:09:05.796210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.234 [2024-07-12 17:09:05.796223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.234 [2024-07-12 17:09:05.796244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.234 [2024-07-12 17:09:05.796259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.234 [2024-07-12 17:09:05.796275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.234 [2024-07-12 17:09:05.796295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.234 [2024-07-12 17:09:05.796311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.234 [2024-07-12 17:09:05.796325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.234 [2024-07-12 17:09:05.796341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.234 [2024-07-12 17:09:05.796355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.234 [2024-07-12 17:09:05.796371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.234 [2024-07-12 17:09:05.796400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.234 [2024-07-12 17:09:05.796416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.234 [2024-07-12 17:09:05.796429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.234 [2024-07-12 17:09:05.796444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.234 [2024-07-12 17:09:05.796458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.234 [2024-07-12 17:09:05.796474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.234 [2024-07-12 17:09:05.796488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.234 [2024-07-12 17:09:05.796503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.234 [2024-07-12 17:09:05.796516] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.234 [2024-07-12 17:09:05.796532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.234 [2024-07-12 17:09:05.796545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.234 [2024-07-12 17:09:05.796560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.234 [2024-07-12 17:09:05.796574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.234 [2024-07-12 17:09:05.796589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.234 [2024-07-12 17:09:05.796602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.234 [2024-07-12 17:09:05.796618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.234 [2024-07-12 17:09:05.796634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.234 [2024-07-12 17:09:05.796650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.234 [2024-07-12 17:09:05.796664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.234 [2024-07-12 17:09:05.796680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.234 [2024-07-12 17:09:05.796694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.234 [2024-07-12 17:09:05.796709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.234 [2024-07-12 17:09:05.796733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.234 [2024-07-12 17:09:05.796755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.234 [2024-07-12 17:09:05.796769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.234 [2024-07-12 17:09:05.796785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.234 [2024-07-12 17:09:05.796799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.234 [2024-07-12 17:09:05.796815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.234 [2024-07-12 17:09:05.796829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.234 [2024-07-12 17:09:05.796844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.234 [2024-07-12 17:09:05.796858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.234 [2024-07-12 17:09:05.796874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.234 [2024-07-12 17:09:05.796888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.234 [2024-07-12 17:09:05.796904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.234 [2024-07-12 17:09:05.796917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.234 [2024-07-12 17:09:05.796933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.234 [2024-07-12 17:09:05.796946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.234 [2024-07-12 17:09:05.796962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.234 [2024-07-12 17:09:05.796975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.234 [2024-07-12 17:09:05.796991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.234 [2024-07-12 17:09:05.797005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.234 [2024-07-12 17:09:05.797035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.235 [2024-07-12 17:09:05.797050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.235 [2024-07-12 17:09:05.797065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.235 [2024-07-12 17:09:05.797079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.235 [2024-07-12 17:09:05.797094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.235 [2024-07-12 17:09:05.797109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.235 [2024-07-12 17:09:05.797124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.235 [2024-07-12 17:09:05.797138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.235 [2024-07-12 17:09:05.797153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.235 [2024-07-12 17:09:05.797166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.235 [2024-07-12 17:09:05.797181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.235 [2024-07-12 17:09:05.797195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.235 [2024-07-12 17:09:05.797210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.235 [2024-07-12 17:09:05.797223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.235 [2024-07-12 17:09:05.797257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:06.235 [2024-07-12 17:09:05.797326] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1388f90 was disconnected and freed. reset controller. 00:20:06.235 [2024-07-12 17:09:05.797874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.235 [2024-07-12 17:09:05.797899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.235 [2024-07-12 17:09:05.797916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.235 [2024-07-12 17:09:05.797931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.235 [2024-07-12 17:09:05.797945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.235 [2024-07-12 17:09:05.797959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.235 [2024-07-12 17:09:05.797973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.235 [2024-07-12 17:09:05.797987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.235 [2024-07-12 17:09:05.798001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1558b00 is same with the state(5) to be set 00:20:06.235 [2024-07-12 17:09:05.798052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.235 [2024-07-12 17:09:05.798076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.235 [2024-07-12 17:09:05.798093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.235 [2024-07-12 17:09:05.798107] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.235 [2024-07-12 17:09:05.798121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.235 [2024-07-12 17:09:05.798135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.235 [2024-07-12 17:09:05.798149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.235 [2024-07-12 17:09:05.798163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.235 [2024-07-12 17:09:05.798176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dd850 is same with the state(5) to be set 00:20:06.235 [2024-07-12 17:09:05.798230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.235 [2024-07-12 17:09:05.798263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.235 [2024-07-12 17:09:05.798278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.235 [2024-07-12 17:09:05.798292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.235 [2024-07-12 17:09:05.798306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.235 [2024-07-12 17:09:05.798320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.235 [2024-07-12 17:09:05.798335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.235 [2024-07-12 17:09:05.798349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.235 [2024-07-12 17:09:05.798362] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c5eb0 is same with the state(5) to be set 00:20:06.235 [2024-07-12 17:09:05.798402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.235 [2024-07-12 17:09:05.798422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.235 [2024-07-12 17:09:05.798437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.235 [2024-07-12 17:09:05.798451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.235 [2024-07-12 17:09:05.798466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.235 [2024-07-12 17:09:05.798479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.235 [2024-07-12 17:09:05.798493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.235 [2024-07-12 17:09:05.798507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.235 [2024-07-12 17:09:05.798527] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf87200 is same with the state(5) to be set 00:20:06.235 [2024-07-12 17:09:05.798572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.235 [2024-07-12 17:09:05.798592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.235 [2024-07-12 17:09:05.798607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.235 [2024-07-12 17:09:05.798621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.235 [2024-07-12 17:09:05.798635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.235 [2024-07-12 17:09:05.798648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.235 [2024-07-12 17:09:05.798663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.235 [2024-07-12 17:09:05.798676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.235 [2024-07-12 17:09:05.798689] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1449690 is same with the state(5) to be set 00:20:06.235 [2024-07-12 17:09:05.798745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.235 [2024-07-12 17:09:05.798766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.235 [2024-07-12 17:09:05.798782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.235 [2024-07-12 17:09:05.798795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.235 [2024-07-12 17:09:05.798809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.235 [2024-07-12 17:09:05.798823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.235 [2024-07-12 17:09:05.798837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.235 [2024-07-12 17:09:05.798850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.235 [2024-07-12 17:09:05.798863] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1565980 is same with the state(5) to be set 00:20:06.235 [2024-07-12 17:09:05.798908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.235 [2024-07-12 17:09:05.798928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.235 [2024-07-12 17:09:05.798943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.235 [2024-07-12 17:09:05.798956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.236 [2024-07-12 17:09:05.798971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.236 [2024-07-12 17:09:05.798985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.236 [2024-07-12 17:09:05.798999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.236 [2024-07-12 17:09:05.799016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.236 [2024-07-12 17:09:05.799030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13bdc90 is same with the state(5) to be set 00:20:06.236 [2024-07-12 17:09:05.799080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.236 [2024-07-12 17:09:05.799100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.236 [2024-07-12 17:09:05.799115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.236 [2024-07-12 17:09:05.799136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.236 [2024-07-12 17:09:05.799151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.236 [2024-07-12 17:09:05.799165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.236 [2024-07-12 17:09:05.799179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.236 [2024-07-12 17:09:05.799193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.236 [2024-07-12 17:09:05.799206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d0880 is same with the state(5) to be set 00:20:06.236 [2024-07-12 17:09:05.799251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.236 [2024-07-12 17:09:05.799279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:20:06.236 [2024-07-12 17:09:05.799294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.236 [2024-07-12 17:09:05.799308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.236 [2024-07-12 17:09:05.799322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.236 [2024-07-12 17:09:05.799336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.236 [2024-07-12 17:09:05.799350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.236 [2024-07-12 17:09:05.799363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.236 [2024-07-12 17:09:05.799377] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe9b610 is same with the state(5) to be set 00:20:06.236 [2024-07-12 17:09:05.799423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.236 [2024-07-12 17:09:05.799443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.236 [2024-07-12 17:09:05.799458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.236 [2024-07-12 17:09:05.799471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.236 [2024-07-12 17:09:05.799485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.236 [2024-07-12 17:09:05.799503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.236 [2024-07-12 17:09:05.799518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.236 [2024-07-12 17:09:05.799531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.236 [2024-07-12 17:09:05.799545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1558920 is same with the state(5) to be set 00:20:06.236 [2024-07-12 17:09:05.799691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.236 [2024-07-12 17:09:05.799715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.236 [2024-07-12 17:09:05.799736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.236 [2024-07-12 17:09:05.799760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.236 [2024-07-12 17:09:05.799777] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.236 [2024-07-12 17:09:05.799791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.236 [2024-07-12 17:09:05.799813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.236 [2024-07-12 17:09:05.799835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.236 [2024-07-12 17:09:05.799852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.236 [2024-07-12 17:09:05.799866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.236 [2024-07-12 17:09:05.799882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.236 [2024-07-12 17:09:05.799895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.236 [2024-07-12 17:09:05.799911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.236 [2024-07-12 17:09:05.799926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.236 [2024-07-12 17:09:05.799941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.236 [2024-07-12 17:09:05.799955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.236 [2024-07-12 17:09:05.799970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.236 [2024-07-12 17:09:05.799984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.236 [2024-07-12 17:09:05.800000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.236 [2024-07-12 17:09:05.800014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.236 [2024-07-12 17:09:05.800030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.236 [2024-07-12 17:09:05.800057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.236 [2024-07-12 17:09:05.800073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.236 [2024-07-12 17:09:05.800087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.236 [2024-07-12 17:09:05.800114] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.236 [2024-07-12 17:09:05.800128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.236 [2024-07-12 17:09:05.800143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.236 [2024-07-12 17:09:05.800157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.236 [2024-07-12 17:09:05.800172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.236 [2024-07-12 17:09:05.800186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.236 [2024-07-12 17:09:05.800202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.236 [2024-07-12 17:09:05.800215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.236 [2024-07-12 17:09:05.800231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.236 [2024-07-12 17:09:05.800244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.236 [2024-07-12 17:09:05.800260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.236 [2024-07-12 17:09:05.800274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.236 [2024-07-12 17:09:05.800290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.236 [2024-07-12 17:09:05.800303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.236 [2024-07-12 17:09:05.800325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.236 [2024-07-12 17:09:05.800340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.236 [2024-07-12 17:09:05.800355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.236 [2024-07-12 17:09:05.800370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.236 [2024-07-12 17:09:05.800385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.236 [2024-07-12 17:09:05.800399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.236 [2024-07-12 17:09:05.800414] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.236 [2024-07-12 17:09:05.800428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.236 [2024-07-12 17:09:05.800448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.236 [2024-07-12 17:09:05.800462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.236 [2024-07-12 17:09:05.800478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.236 [2024-07-12 17:09:05.800492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.236 [2024-07-12 17:09:05.800508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.236 [2024-07-12 17:09:05.800522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.237 [2024-07-12 17:09:05.800538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.237 [2024-07-12 17:09:05.800551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.237 [2024-07-12 17:09:05.800567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.237 [2024-07-12 17:09:05.800581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.237 [2024-07-12 17:09:05.800597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.237 [2024-07-12 17:09:05.800610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.237 [2024-07-12 17:09:05.800626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.237 [2024-07-12 17:09:05.806684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.237 [2024-07-12 17:09:05.806776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.237 [2024-07-12 17:09:05.806793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.237 [2024-07-12 17:09:05.806811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.237 [2024-07-12 17:09:05.806825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.237 [2024-07-12 17:09:05.806842] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.237 [2024-07-12 17:09:05.806856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.237 [2024-07-12 17:09:05.806872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.237 [2024-07-12 17:09:05.806886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.237 [2024-07-12 17:09:05.806902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.237 [2024-07-12 17:09:05.806917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.237 [2024-07-12 17:09:05.806934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.237 [2024-07-12 17:09:05.806959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.237 [2024-07-12 17:09:05.806976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.237 [2024-07-12 17:09:05.806991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.237 [2024-07-12 17:09:05.807007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.237 [2024-07-12 17:09:05.807021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.237 [2024-07-12 17:09:05.807037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.237 [2024-07-12 17:09:05.807051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.237 [2024-07-12 17:09:05.807066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.237 [2024-07-12 17:09:05.807080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.237 [2024-07-12 17:09:05.807096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.237 [2024-07-12 17:09:05.807110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.237 [2024-07-12 17:09:05.807126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.237 [2024-07-12 17:09:05.807140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.237 [2024-07-12 17:09:05.807156] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.237 [2024-07-12 17:09:05.807170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.237 [2024-07-12 17:09:05.807186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.237 [2024-07-12 17:09:05.807200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.237 [2024-07-12 17:09:05.807216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.237 [2024-07-12 17:09:05.807230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.237 [2024-07-12 17:09:05.807246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.237 [2024-07-12 17:09:05.807261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.237 [2024-07-12 17:09:05.807276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.237 [2024-07-12 17:09:05.807290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.237 [2024-07-12 17:09:05.807306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.237 [2024-07-12 17:09:05.807319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.237 [2024-07-12 17:09:05.807339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.237 [2024-07-12 17:09:05.807354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.237 [2024-07-12 17:09:05.807370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.237 [2024-07-12 17:09:05.807384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.237 [2024-07-12 17:09:05.807401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.237 [2024-07-12 17:09:05.807415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.237 [2024-07-12 17:09:05.807431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.237 [2024-07-12 17:09:05.807446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.237 [2024-07-12 17:09:05.807462] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.237 [2024-07-12 17:09:05.807475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.237 [2024-07-12 17:09:05.807492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.237 [2024-07-12 17:09:05.807506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.237 [2024-07-12 17:09:05.807521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.237 [2024-07-12 17:09:05.807536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.237 [2024-07-12 17:09:05.807552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.237 [2024-07-12 17:09:05.807567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.237 [2024-07-12 17:09:05.807582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.237 [2024-07-12 17:09:05.807597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.237 [2024-07-12 17:09:05.807612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.237 [2024-07-12 17:09:05.807627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.237 [2024-07-12 17:09:05.807642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.237 [2024-07-12 17:09:05.807657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.237 [2024-07-12 17:09:05.807673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.237 [2024-07-12 17:09:05.807687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.237 [2024-07-12 17:09:05.807703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.237 [2024-07-12 17:09:05.807720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.237 [2024-07-12 17:09:05.807754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.237 [2024-07-12 17:09:05.807770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.237 [2024-07-12 17:09:05.807786] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.237 [2024-07-12 17:09:05.807800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.237 [2024-07-12 17:09:05.807816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.237 [2024-07-12 17:09:05.807830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.237 [2024-07-12 17:09:05.807968] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x140a8b0 was disconnected and freed. reset controller. 00:20:06.237 [2024-07-12 17:09:05.810893] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:20:06.237 [2024-07-12 17:09:05.810937] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:20:06.237 [2024-07-12 17:09:05.810971] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13bdc90 (9): Bad file descriptor 00:20:06.237 [2024-07-12 17:09:05.810995] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c5eb0 (9): Bad file descriptor 00:20:06.237 [2024-07-12 17:09:05.811053] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1558b00 (9): Bad file descriptor 00:20:06.237 [2024-07-12 17:09:05.811082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13dd850 (9): Bad file descriptor 00:20:06.237 [2024-07-12 17:09:05.811107] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf87200 (9): Bad file descriptor 00:20:06.237 [2024-07-12 17:09:05.811135] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1449690 (9): Bad file descriptor 00:20:06.237 [2024-07-12 17:09:05.811159] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1565980 (9): Bad file descriptor 00:20:06.238 [2024-07-12 17:09:05.811192] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d0880 (9): Bad file descriptor 00:20:06.238 [2024-07-12 17:09:05.811221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe9b610 (9): Bad file descriptor 00:20:06.238 [2024-07-12 17:09:05.811255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1558920 (9): Bad file descriptor 00:20:06.238 [2024-07-12 17:09:05.812945] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:20:06.238 [2024-07-12 17:09:05.814179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.238 [2024-07-12 17:09:05.814213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c5eb0 with addr=10.0.0.2, port=4420 00:20:06.238 [2024-07-12 17:09:05.814232] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c5eb0 is same with the state(5) to be set 00:20:06.238 [2024-07-12 17:09:05.814349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.238 [2024-07-12 17:09:05.814376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13bdc90 
with addr=10.0.0.2, port=4420 00:20:06.238 [2024-07-12 17:09:05.814392] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13bdc90 is same with the state(5) to be set 00:20:06.238 [2024-07-12 17:09:05.814508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.238 [2024-07-12 17:09:05.814543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1565980 with addr=10.0.0.2, port=4420 00:20:06.238 [2024-07-12 17:09:05.814559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1565980 is same with the state(5) to be set 00:20:06.238 [2024-07-12 17:09:05.815012] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:06.238 [2024-07-12 17:09:05.815094] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:06.238 [2024-07-12 17:09:05.815162] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:06.238 [2024-07-12 17:09:05.815234] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:06.238 [2024-07-12 17:09:05.815306] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:06.238 [2024-07-12 17:09:05.815379] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:06.238 [2024-07-12 17:09:05.815411] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c5eb0 (9): Bad file descriptor 00:20:06.238 [2024-07-12 17:09:05.815436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13bdc90 (9): Bad file descriptor 00:20:06.238 [2024-07-12 17:09:05.815456] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1565980 (9): Bad file descriptor 00:20:06.238 [2024-07-12 17:09:05.815538] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:06.238 [2024-07-12 17:09:05.815678] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:20:06.238 [2024-07-12 17:09:05.815701] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:20:06.238 [2024-07-12 17:09:05.815720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:20:06.238 [2024-07-12 17:09:05.815752] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:20:06.238 [2024-07-12 17:09:05.815769] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:20:06.238 [2024-07-12 17:09:05.815783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:20:06.238 [2024-07-12 17:09:05.815801] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:06.238 [2024-07-12 17:09:05.815815] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:20:06.238 [2024-07-12 17:09:05.815828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:20:06.238 [2024-07-12 17:09:05.815908] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.238 [2024-07-12 17:09:05.815930] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
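Note on the reconnect failures above: posix_sock_create() logs "connect() failed, errno = 111", and on Linux errno 111 is ECONNREFUSED, meaning nothing was accepting connections on 10.0.0.2:4420 while the target side was tearing the subsystems down; the "Bad file descriptor" flush errors and the "controller reinitialization failed" / "Resetting controller failed." messages follow from those same dead sockets. As a hedged illustration only (not part of the test scripts; the file name connect_probe.c is invented here), a standalone C probe that reproduces just the socket-level symptom could look like this:

    /* connect_probe.c - try a plain TCP connect to the NVMe/TCP listener
     * seen in the log (10.0.0.2:4420) and report errno on failure.
     * Build: cc connect_probe.c -o connect_probe
     */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        /* Address and port taken from the log lines above. */
        struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            /* errno 111 (ECONNREFUSED) matches "connect() failed, errno = 111"
             * above: no listener on the target port during subsystem shutdown. */
            printf("connect failed: errno=%d (%s)\n", errno, strerror(errno));
            close(fd);
            return 1;
        }
        printf("connected\n");
        close(fd);
        return 0;
    }

Run while the target's listener is down, it prints errno=111 (Connection refused), which is the same condition the nvme_tcp reconnect path is hitting in the log records above.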
00:20:06.238 [2024-07-12 17:09:05.815942] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.238 [2024-07-12 17:09:05.821116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.238 [2024-07-12 17:09:05.821160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.238 [2024-07-12 17:09:05.821203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.238 [2024-07-12 17:09:05.821219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.238 [2024-07-12 17:09:05.821236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.238 [2024-07-12 17:09:05.821250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.238 [2024-07-12 17:09:05.821267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.238 [2024-07-12 17:09:05.821292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.238 [2024-07-12 17:09:05.821309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.238 [2024-07-12 17:09:05.821324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.238 [2024-07-12 17:09:05.821340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.238 [2024-07-12 17:09:05.821355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.238 [2024-07-12 17:09:05.821371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.238 [2024-07-12 17:09:05.821386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.238 [2024-07-12 17:09:05.821402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.238 [2024-07-12 17:09:05.821417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.238 [2024-07-12 17:09:05.821433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.238 [2024-07-12 17:09:05.821447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.238 [2024-07-12 17:09:05.821463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.238 [2024-07-12 17:09:05.821477] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.238 [2024-07-12 17:09:05.821493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.238 [2024-07-12 17:09:05.821508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.238 [2024-07-12 17:09:05.821524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.238 [2024-07-12 17:09:05.821538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.238 [2024-07-12 17:09:05.821554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.238 [2024-07-12 17:09:05.821568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.238 [2024-07-12 17:09:05.821584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.238 [2024-07-12 17:09:05.821598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.238 [2024-07-12 17:09:05.821614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.238 [2024-07-12 17:09:05.821627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.238 [2024-07-12 17:09:05.821644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.238 [2024-07-12 17:09:05.821658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.238 [2024-07-12 17:09:05.821678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.238 [2024-07-12 17:09:05.821693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.238 [2024-07-12 17:09:05.821709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.238 [2024-07-12 17:09:05.821733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.238 [2024-07-12 17:09:05.821758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.238 [2024-07-12 17:09:05.821773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.238 [2024-07-12 17:09:05.821789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.238 [2024-07-12 17:09:05.821804] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.238 [2024-07-12 17:09:05.821820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.238 [2024-07-12 17:09:05.821834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.238 [2024-07-12 17:09:05.821850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.238 [2024-07-12 17:09:05.821864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.238 [2024-07-12 17:09:05.821880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.238 [2024-07-12 17:09:05.821894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.238 [2024-07-12 17:09:05.821910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.238 [2024-07-12 17:09:05.821924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.238 [2024-07-12 17:09:05.821940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.238 [2024-07-12 17:09:05.821954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.238 [2024-07-12 17:09:05.821970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.238 [2024-07-12 17:09:05.821984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.238 [2024-07-12 17:09:05.821999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.238 [2024-07-12 17:09:05.822014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.238 [2024-07-12 17:09:05.822034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.238 [2024-07-12 17:09:05.822048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.239 [2024-07-12 17:09:05.822064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.239 [2024-07-12 17:09:05.822082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.239 [2024-07-12 17:09:05.822099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.239 [2024-07-12 17:09:05.822114] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.239 [2024-07-12 17:09:05.822130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.239 [2024-07-12 17:09:05.822143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.239 [2024-07-12 17:09:05.822160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.239 [2024-07-12 17:09:05.822174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.239 [2024-07-12 17:09:05.822190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.239 [2024-07-12 17:09:05.822205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.239 [2024-07-12 17:09:05.822221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.239 [2024-07-12 17:09:05.822235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.239 [2024-07-12 17:09:05.822252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.239 [2024-07-12 17:09:05.822267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.239 [2024-07-12 17:09:05.822283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.239 [2024-07-12 17:09:05.822297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.239 [2024-07-12 17:09:05.822313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.239 [2024-07-12 17:09:05.822328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.239 [2024-07-12 17:09:05.822343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.239 [2024-07-12 17:09:05.822357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.239 [2024-07-12 17:09:05.822373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.239 [2024-07-12 17:09:05.822387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.239 [2024-07-12 17:09:05.822402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.239 [2024-07-12 17:09:05.822416] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.239 [2024-07-12 17:09:05.822432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.239 [2024-07-12 17:09:05.822446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.239 [2024-07-12 17:09:05.822466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.239 [2024-07-12 17:09:05.822481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.239 [2024-07-12 17:09:05.822497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.239 [2024-07-12 17:09:05.822511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.239 [2024-07-12 17:09:05.822527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.239 [2024-07-12 17:09:05.822540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.239 [2024-07-12 17:09:05.822556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.239 [2024-07-12 17:09:05.822570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.239 [2024-07-12 17:09:05.822586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.239 [2024-07-12 17:09:05.822601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.239 [2024-07-12 17:09:05.822617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.239 [2024-07-12 17:09:05.822631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.239 [2024-07-12 17:09:05.822647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.239 [2024-07-12 17:09:05.822661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.239 [2024-07-12 17:09:05.822677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.239 [2024-07-12 17:09:05.822691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.239 [2024-07-12 17:09:05.822707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.239 [2024-07-12 17:09:05.822733] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.239 [2024-07-12 17:09:05.822757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.239 [2024-07-12 17:09:05.822772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.239 [2024-07-12 17:09:05.822788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.239 [2024-07-12 17:09:05.822802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.239 [2024-07-12 17:09:05.822818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.239 [2024-07-12 17:09:05.822832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.239 [2024-07-12 17:09:05.822848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.239 [2024-07-12 17:09:05.822865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.239 [2024-07-12 17:09:05.822882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.239 [2024-07-12 17:09:05.822896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.239 [2024-07-12 17:09:05.822912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.239 [2024-07-12 17:09:05.822926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.239 [2024-07-12 17:09:05.822942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.239 [2024-07-12 17:09:05.822956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.239 [2024-07-12 17:09:05.822971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.239 [2024-07-12 17:09:05.822985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.239 [2024-07-12 17:09:05.823002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.239 [2024-07-12 17:09:05.823015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.239 [2024-07-12 17:09:05.823040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.239 [2024-07-12 17:09:05.823054] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.239 [2024-07-12 17:09:05.823071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.239 [2024-07-12 17:09:05.823085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.239 [2024-07-12 17:09:05.823103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.239 [2024-07-12 17:09:05.823117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.239 [2024-07-12 17:09:05.823133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.239 [2024-07-12 17:09:05.823147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.239 [2024-07-12 17:09:05.823163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.239 [2024-07-12 17:09:05.823177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.239 [2024-07-12 17:09:05.823192] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1409610 is same with the state(5) to be set 00:20:06.240 [2024-07-12 17:09:05.824477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.240 [2024-07-12 17:09:05.824501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.240 [2024-07-12 17:09:05.824523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.240 [2024-07-12 17:09:05.824548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.240 [2024-07-12 17:09:05.824566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.240 [2024-07-12 17:09:05.824581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.240 [2024-07-12 17:09:05.824597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.240 [2024-07-12 17:09:05.824611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.240 [2024-07-12 17:09:05.824628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.240 [2024-07-12 17:09:05.824642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.240 [2024-07-12 17:09:05.824658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.240 [2024-07-12 17:09:05.824673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.240 [2024-07-12 17:09:05.824689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.240 [2024-07-12 17:09:05.824703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.240 [2024-07-12 17:09:05.824719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.240 [2024-07-12 17:09:05.824750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.240 [2024-07-12 17:09:05.824768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.240 [2024-07-12 17:09:05.824782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.240 [2024-07-12 17:09:05.824799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.240 [2024-07-12 17:09:05.824813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.240 [2024-07-12 17:09:05.824829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.240 [2024-07-12 17:09:05.824844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.240 [2024-07-12 17:09:05.824860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.240 [2024-07-12 17:09:05.824874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.240 [2024-07-12 17:09:05.824890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.240 [2024-07-12 17:09:05.824904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.240 [2024-07-12 17:09:05.824920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.240 [2024-07-12 17:09:05.824934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.240 [2024-07-12 17:09:05.824954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.240 [2024-07-12 17:09:05.824969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.240 [2024-07-12 17:09:05.824985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 
lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.240 [2024-07-12 17:09:05.825000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.240 [2024-07-12 17:09:05.825016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.240 [2024-07-12 17:09:05.825041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.240 [2024-07-12 17:09:05.825057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.240 [2024-07-12 17:09:05.825071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.240 [2024-07-12 17:09:05.825087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.240 [2024-07-12 17:09:05.825101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.240 [2024-07-12 17:09:05.825117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.240 [2024-07-12 17:09:05.825131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.240 [2024-07-12 17:09:05.825148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.240 [2024-07-12 17:09:05.825163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.240 [2024-07-12 17:09:05.825180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.240 [2024-07-12 17:09:05.825194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.240 [2024-07-12 17:09:05.825210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.240 [2024-07-12 17:09:05.825225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.240 [2024-07-12 17:09:05.825241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.240 [2024-07-12 17:09:05.825254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.240 [2024-07-12 17:09:05.825271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.240 [2024-07-12 17:09:05.825285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.240 [2024-07-12 17:09:05.825301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.240 [2024-07-12 17:09:05.825315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.240 [2024-07-12 17:09:05.825330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.240 [2024-07-12 17:09:05.825348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.240 [2024-07-12 17:09:05.825365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.240 [2024-07-12 17:09:05.825380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.240 [2024-07-12 17:09:05.825396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.240 [2024-07-12 17:09:05.825410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.240 [2024-07-12 17:09:05.825426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.240 [2024-07-12 17:09:05.825440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.240 [2024-07-12 17:09:05.825455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.240 [2024-07-12 17:09:05.825469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.240 [2024-07-12 17:09:05.825485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.240 [2024-07-12 17:09:05.825499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.240 [2024-07-12 17:09:05.825516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.240 [2024-07-12 17:09:05.825530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.240 [2024-07-12 17:09:05.825545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.240 [2024-07-12 17:09:05.825560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.240 [2024-07-12 17:09:05.825575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.240 [2024-07-12 17:09:05.825590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.240 [2024-07-12 17:09:05.825606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:06.240 [2024-07-12 17:09:05.825620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.240 [2024-07-12 17:09:05.825635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.240 [2024-07-12 17:09:05.825649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.240 [2024-07-12 17:09:05.825665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.240 [2024-07-12 17:09:05.825679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.240 [2024-07-12 17:09:05.825696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.240 [2024-07-12 17:09:05.825710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.240 [2024-07-12 17:09:05.825748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.240 [2024-07-12 17:09:05.825765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.240 [2024-07-12 17:09:05.825782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.240 [2024-07-12 17:09:05.825797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.240 [2024-07-12 17:09:05.825813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.240 [2024-07-12 17:09:05.825826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.240 [2024-07-12 17:09:05.825842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.241 [2024-07-12 17:09:05.825857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.241 [2024-07-12 17:09:05.825873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.241 [2024-07-12 17:09:05.825887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.241 [2024-07-12 17:09:05.825902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.241 [2024-07-12 17:09:05.825917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.241 [2024-07-12 17:09:05.825932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:06.241 [2024-07-12 17:09:05.825947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.241 [2024-07-12 17:09:05.825963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.241 [2024-07-12 17:09:05.825977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.241 [2024-07-12 17:09:05.825993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.241 [2024-07-12 17:09:05.826007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.241 [2024-07-12 17:09:05.826031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.241 [2024-07-12 17:09:05.826045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.241 [2024-07-12 17:09:05.826060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.241 [2024-07-12 17:09:05.826074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.241 [2024-07-12 17:09:05.826090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.241 [2024-07-12 17:09:05.826103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.241 [2024-07-12 17:09:05.826119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.241 [2024-07-12 17:09:05.826136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.241 [2024-07-12 17:09:05.826153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.241 [2024-07-12 17:09:05.826167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.241 [2024-07-12 17:09:05.826184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.241 [2024-07-12 17:09:05.826198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.241 [2024-07-12 17:09:05.826214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.241 [2024-07-12 17:09:05.826228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.241 [2024-07-12 17:09:05.826244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.241 [2024-07-12 
17:09:05.826258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.241 [2024-07-12 17:09:05.826274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.241 [2024-07-12 17:09:05.826288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.241 [2024-07-12 17:09:05.826304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.241 [2024-07-12 17:09:05.826318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.241 [2024-07-12 17:09:05.826334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.241 [2024-07-12 17:09:05.826347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.241 [2024-07-12 17:09:05.826362] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1396320 is same with the state(5) to be set 00:20:06.241 [2024-07-12 17:09:05.827606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.241 [2024-07-12 17:09:05.827629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.241 [2024-07-12 17:09:05.827650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.241 [2024-07-12 17:09:05.827666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.241 [2024-07-12 17:09:05.827682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.241 [2024-07-12 17:09:05.827696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.241 [2024-07-12 17:09:05.827712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.241 [2024-07-12 17:09:05.827731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.241 [2024-07-12 17:09:05.827757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.241 [2024-07-12 17:09:05.827773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.241 [2024-07-12 17:09:05.827794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.241 [2024-07-12 17:09:05.827808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.241 [2024-07-12 17:09:05.827825] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.241 [2024-07-12 17:09:05.827839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.241 [2024-07-12 17:09:05.827855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.241 [2024-07-12 17:09:05.827870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.241 [2024-07-12 17:09:05.827886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.241 [2024-07-12 17:09:05.827900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.241 [2024-07-12 17:09:05.827916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.241 [2024-07-12 17:09:05.827930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.241 [2024-07-12 17:09:05.827946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.241 [2024-07-12 17:09:05.827960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.241 [2024-07-12 17:09:05.827977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.241 [2024-07-12 17:09:05.827991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.241 [2024-07-12 17:09:05.828007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.241 [2024-07-12 17:09:05.828020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.241 [2024-07-12 17:09:05.828036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.241 [2024-07-12 17:09:05.828050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.241 [2024-07-12 17:09:05.828067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.241 [2024-07-12 17:09:05.828081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.241 [2024-07-12 17:09:05.828096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.241 [2024-07-12 17:09:05.828110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.241 [2024-07-12 17:09:05.828126] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.241 [2024-07-12 17:09:05.828140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.241 [2024-07-12 17:09:05.828156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.241 [2024-07-12 17:09:05.828174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.241 [2024-07-12 17:09:05.828191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.241 [2024-07-12 17:09:05.828205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.241 [2024-07-12 17:09:05.828221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.241 [2024-07-12 17:09:05.828235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.241 [2024-07-12 17:09:05.828251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.241 [2024-07-12 17:09:05.828265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.241 [2024-07-12 17:09:05.828280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.241 [2024-07-12 17:09:05.828295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.241 [2024-07-12 17:09:05.828311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.241 [2024-07-12 17:09:05.828325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.241 [2024-07-12 17:09:05.828340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.241 [2024-07-12 17:09:05.828354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.241 [2024-07-12 17:09:05.828370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.241 [2024-07-12 17:09:05.828384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.241 [2024-07-12 17:09:05.828400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.241 [2024-07-12 17:09:05.828415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.242 [2024-07-12 17:09:05.828432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.242 [2024-07-12 17:09:05.828446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.242 [2024-07-12 17:09:05.828462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.242 [2024-07-12 17:09:05.828476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.242 [2024-07-12 17:09:05.828492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.242 [2024-07-12 17:09:05.828506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.242 [2024-07-12 17:09:05.828522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.242 [2024-07-12 17:09:05.828536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.242 [2024-07-12 17:09:05.828556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.242 [2024-07-12 17:09:05.828570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.242 [2024-07-12 17:09:05.828595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.242 [2024-07-12 17:09:05.828609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.242 [2024-07-12 17:09:05.828625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.242 [2024-07-12 17:09:05.828639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.242 [2024-07-12 17:09:05.828655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.242 [2024-07-12 17:09:05.828669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.242 [2024-07-12 17:09:05.828685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.242 [2024-07-12 17:09:05.828699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.242 [2024-07-12 17:09:05.828716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.242 [2024-07-12 17:09:05.828730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.242 [2024-07-12 17:09:05.828754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.242 [2024-07-12 17:09:05.828769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.242 [2024-07-12 17:09:05.828785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.242 [2024-07-12 17:09:05.828799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.242 [2024-07-12 17:09:05.828815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.242 [2024-07-12 17:09:05.828829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.242 [2024-07-12 17:09:05.828845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.242 [2024-07-12 17:09:05.828859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.242 [2024-07-12 17:09:05.828875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.242 [2024-07-12 17:09:05.828890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.242 [2024-07-12 17:09:05.828906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.242 [2024-07-12 17:09:05.828920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.242 [2024-07-12 17:09:05.828935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.242 [2024-07-12 17:09:05.828954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.242 [2024-07-12 17:09:05.828971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.242 [2024-07-12 17:09:05.828985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.242 [2024-07-12 17:09:05.829002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.242 [2024-07-12 17:09:05.829016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.242 [2024-07-12 17:09:05.829032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.242 [2024-07-12 17:09:05.829046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.242 [2024-07-12 17:09:05.829061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:06.242 [2024-07-12 17:09:05.829076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.242 [2024-07-12 17:09:05.829092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.242 [2024-07-12 17:09:05.829106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.242 [2024-07-12 17:09:05.829122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.242 [2024-07-12 17:09:05.829136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.242 [2024-07-12 17:09:05.829152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.242 [2024-07-12 17:09:05.829166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.242 [2024-07-12 17:09:05.829182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.242 [2024-07-12 17:09:05.829197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.242 [2024-07-12 17:09:05.829222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.242 [2024-07-12 17:09:05.829236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.242 [2024-07-12 17:09:05.829252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.242 [2024-07-12 17:09:05.829266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.242 [2024-07-12 17:09:05.829290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.242 [2024-07-12 17:09:05.829304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.242 [2024-07-12 17:09:05.829320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.242 [2024-07-12 17:09:05.829335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.242 [2024-07-12 17:09:05.829355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.242 [2024-07-12 17:09:05.829369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.242 [2024-07-12 17:09:05.829385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:06.242 [2024-07-12 17:09:05.829400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.242 [2024-07-12 17:09:05.829416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.242 [2024-07-12 17:09:05.829431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.242 [2024-07-12 17:09:05.829448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.242 [2024-07-12 17:09:05.829463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.242 [2024-07-12 17:09:05.829479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.242 [2024-07-12 17:09:05.829492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.242 [2024-07-12 17:09:05.829508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.242 [2024-07-12 17:09:05.829522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.242 [2024-07-12 17:09:05.829539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.242 [2024-07-12 17:09:05.829552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.242 [2024-07-12 17:09:05.829568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.242 [2024-07-12 17:09:05.829583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.242 [2024-07-12 17:09:05.829598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.242 [2024-07-12 17:09:05.829612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.242 [2024-07-12 17:09:05.829627] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152eb40 is same with the state(5) to be set 00:20:06.242 [2024-07-12 17:09:05.830884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.242 [2024-07-12 17:09:05.830907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.242 [2024-07-12 17:09:05.830930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.242 [2024-07-12 17:09:05.830946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.242 [2024-07-12 17:09:05.830962] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.242 [2024-07-12 17:09:05.830977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.242 [2024-07-12 17:09:05.830998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.242 [2024-07-12 17:09:05.831013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.243 [2024-07-12 17:09:05.831039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.243 [2024-07-12 17:09:05.831053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.243 [2024-07-12 17:09:05.831068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.243 [2024-07-12 17:09:05.831082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.243 [2024-07-12 17:09:05.831103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.243 [2024-07-12 17:09:05.831117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.243 [2024-07-12 17:09:05.831133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.243 [2024-07-12 17:09:05.831147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.243 [2024-07-12 17:09:05.831163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.243 [2024-07-12 17:09:05.831177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.243 [2024-07-12 17:09:05.831193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.243 [2024-07-12 17:09:05.831207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.243 [2024-07-12 17:09:05.831223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.243 [2024-07-12 17:09:05.831236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.243 [2024-07-12 17:09:05.831252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.243 [2024-07-12 17:09:05.831266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.243 [2024-07-12 17:09:05.831283] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.243 [2024-07-12 17:09:05.831297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.243 [2024-07-12 17:09:05.831313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.243 [2024-07-12 17:09:05.831327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.243 [2024-07-12 17:09:05.831343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.243 [2024-07-12 17:09:05.831356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.243 [2024-07-12 17:09:05.831372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.243 [2024-07-12 17:09:05.831390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.243 [2024-07-12 17:09:05.831406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.243 [2024-07-12 17:09:05.831421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.243 [2024-07-12 17:09:05.831437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.243 [2024-07-12 17:09:05.831451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.243 [2024-07-12 17:09:05.831467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.243 [2024-07-12 17:09:05.831481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.243 [2024-07-12 17:09:05.831497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.243 [2024-07-12 17:09:05.831511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.243 [2024-07-12 17:09:05.831527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.243 [2024-07-12 17:09:05.831541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.243 [2024-07-12 17:09:05.831557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.243 [2024-07-12 17:09:05.831571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.243 [2024-07-12 17:09:05.831587] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.243 [2024-07-12 17:09:05.831601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.243 [2024-07-12 17:09:05.831617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.243 [2024-07-12 17:09:05.831631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.243 [2024-07-12 17:09:05.831647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.243 [2024-07-12 17:09:05.831662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.243 [2024-07-12 17:09:05.831678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.243 [2024-07-12 17:09:05.831692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.243 [2024-07-12 17:09:05.831708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.243 [2024-07-12 17:09:05.831722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.243 [2024-07-12 17:09:05.831745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.243 [2024-07-12 17:09:05.831760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.243 [2024-07-12 17:09:05.831781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.243 [2024-07-12 17:09:05.831795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.243 [2024-07-12 17:09:05.831811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.243 [2024-07-12 17:09:05.831825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.243 [2024-07-12 17:09:05.831842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.243 [2024-07-12 17:09:05.831856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.243 [2024-07-12 17:09:05.831872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.243 [2024-07-12 17:09:05.831885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.243 [2024-07-12 17:09:05.831901] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.243 [2024-07-12 17:09:05.831915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.243 [2024-07-12 17:09:05.831931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.243 [2024-07-12 17:09:05.831944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.243 [2024-07-12 17:09:05.831960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.243 [2024-07-12 17:09:05.831974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.243 [2024-07-12 17:09:05.831990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.243 [2024-07-12 17:09:05.832004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.243 [2024-07-12 17:09:05.832019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.243 [2024-07-12 17:09:05.832043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.243 [2024-07-12 17:09:05.832058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.243 [2024-07-12 17:09:05.832071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.243 [2024-07-12 17:09:05.832087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.243 [2024-07-12 17:09:05.832101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.243 [2024-07-12 17:09:05.832117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.243 [2024-07-12 17:09:05.832130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.243 [2024-07-12 17:09:05.832146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.244 [2024-07-12 17:09:05.832164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.244 [2024-07-12 17:09:05.832181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.244 [2024-07-12 17:09:05.832196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.244 [2024-07-12 17:09:05.832212] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.244 [2024-07-12 17:09:05.832226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.244 [2024-07-12 17:09:05.832243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.244 [2024-07-12 17:09:05.832257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.244 [2024-07-12 17:09:05.832272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.244 [2024-07-12 17:09:05.832286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.244 [2024-07-12 17:09:05.832301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.244 [2024-07-12 17:09:05.832316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.244 [2024-07-12 17:09:05.832331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.244 [2024-07-12 17:09:05.832345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.244 [2024-07-12 17:09:05.832361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.244 [2024-07-12 17:09:05.832375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.244 [2024-07-12 17:09:05.832400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.244 [2024-07-12 17:09:05.832414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.244 [2024-07-12 17:09:05.832429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.244 [2024-07-12 17:09:05.832442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.244 [2024-07-12 17:09:05.832464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.244 [2024-07-12 17:09:05.832478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.244 [2024-07-12 17:09:05.832494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.244 [2024-07-12 17:09:05.832508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.244 [2024-07-12 17:09:05.832524] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.244 [2024-07-12 17:09:05.832537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.244 [2024-07-12 17:09:05.832557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.244 [2024-07-12 17:09:05.832572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.244 [2024-07-12 17:09:05.832587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.244 [2024-07-12 17:09:05.832601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.244 [2024-07-12 17:09:05.832617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.244 [2024-07-12 17:09:05.832631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.244 [2024-07-12 17:09:05.832648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.244 [2024-07-12 17:09:05.832662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.244 [2024-07-12 17:09:05.832678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.244 [2024-07-12 17:09:05.832692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.244 [2024-07-12 17:09:05.832708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.244 [2024-07-12 17:09:05.832724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.244 [2024-07-12 17:09:05.832745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.244 [2024-07-12 17:09:05.832761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.244 [2024-07-12 17:09:05.832788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.244 [2024-07-12 17:09:05.832802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.244 [2024-07-12 17:09:05.832818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.244 [2024-07-12 17:09:05.832833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.244 [2024-07-12 17:09:05.832848] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.244 [2024-07-12 17:09:05.832863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.244 [2024-07-12 17:09:05.832879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.244 [2024-07-12 17:09:05.832893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.244 [2024-07-12 17:09:05.832907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153be40 is same with the state(5) to be set 00:20:06.244 [2024-07-12 17:09:05.834172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.244 [2024-07-12 17:09:05.834195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.244 [2024-07-12 17:09:05.834228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.244 [2024-07-12 17:09:05.834244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.244 [2024-07-12 17:09:05.834261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.244 [2024-07-12 17:09:05.834275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.244 [2024-07-12 17:09:05.834292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.244 [2024-07-12 17:09:05.834306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.244 [2024-07-12 17:09:05.834321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.244 [2024-07-12 17:09:05.834335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.244 [2024-07-12 17:09:05.834351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.244 [2024-07-12 17:09:05.834365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.244 [2024-07-12 17:09:05.834382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.244 [2024-07-12 17:09:05.834396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.244 [2024-07-12 17:09:05.834413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.244 [2024-07-12 17:09:05.834426] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.244 [2024-07-12 17:09:05.834443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.244 [2024-07-12 17:09:05.834457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.244 [2024-07-12 17:09:05.834473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.244 [2024-07-12 17:09:05.834487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.244 [2024-07-12 17:09:05.834504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.244 [2024-07-12 17:09:05.834518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.244 [2024-07-12 17:09:05.834534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.244 [2024-07-12 17:09:05.834548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.244 [2024-07-12 17:09:05.834565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.244 [2024-07-12 17:09:05.834579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.244 [2024-07-12 17:09:05.834595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.244 [2024-07-12 17:09:05.834610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.244 [2024-07-12 17:09:05.834630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.244 [2024-07-12 17:09:05.834645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.244 [2024-07-12 17:09:05.834661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.244 [2024-07-12 17:09:05.834675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.244 [2024-07-12 17:09:05.834691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.244 [2024-07-12 17:09:05.834705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.244 [2024-07-12 17:09:05.834733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.244 [2024-07-12 17:09:05.834755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.244 [2024-07-12 17:09:05.834772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.244 [2024-07-12 17:09:05.834786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.245 [2024-07-12 17:09:05.834802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.245 [2024-07-12 17:09:05.834816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.245 [2024-07-12 17:09:05.834832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.245 [2024-07-12 17:09:05.834847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.245 [2024-07-12 17:09:05.834863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.245 [2024-07-12 17:09:05.834878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.245 [2024-07-12 17:09:05.834893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.245 [2024-07-12 17:09:05.834908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.245 [2024-07-12 17:09:05.834924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.245 [2024-07-12 17:09:05.834938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.245 [2024-07-12 17:09:05.834956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.245 [2024-07-12 17:09:05.834970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.245 [2024-07-12 17:09:05.834987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.245 [2024-07-12 17:09:05.835001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.245 [2024-07-12 17:09:05.835017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.245 [2024-07-12 17:09:05.835045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.245 [2024-07-12 17:09:05.835062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.245 [2024-07-12 17:09:05.835076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.245 [2024-07-12 17:09:05.835104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.245 [2024-07-12 17:09:05.835118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.245 [2024-07-12 17:09:05.835134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.245 [2024-07-12 17:09:05.835149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.245 [2024-07-12 17:09:05.835164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.245 [2024-07-12 17:09:05.835179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.245 [2024-07-12 17:09:05.835195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.245 [2024-07-12 17:09:05.835209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.245 [2024-07-12 17:09:05.835225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.245 [2024-07-12 17:09:05.835239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.245 [2024-07-12 17:09:05.835255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.245 [2024-07-12 17:09:05.835269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.245 [2024-07-12 17:09:05.835284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.245 [2024-07-12 17:09:05.835298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.245 [2024-07-12 17:09:05.835314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.245 [2024-07-12 17:09:05.835328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.245 [2024-07-12 17:09:05.835344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.245 [2024-07-12 17:09:05.835358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.245 [2024-07-12 17:09:05.835374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.245 [2024-07-12 17:09:05.835389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:06.245 [2024-07-12 17:09:05.835406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.245 [2024-07-12 17:09:05.835420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.245 [2024-07-12 17:09:05.835440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.245 [2024-07-12 17:09:05.835456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.245 [2024-07-12 17:09:05.835472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.245 [2024-07-12 17:09:05.835498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.245 [2024-07-12 17:09:05.835514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.245 [2024-07-12 17:09:05.835528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.245 [2024-07-12 17:09:05.835544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.245 [2024-07-12 17:09:05.835559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.245 [2024-07-12 17:09:05.835575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.245 [2024-07-12 17:09:05.835589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.245 [2024-07-12 17:09:05.835606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.245 [2024-07-12 17:09:05.835619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.245 [2024-07-12 17:09:05.835636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.245 [2024-07-12 17:09:05.835650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.245 [2024-07-12 17:09:05.835665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.245 [2024-07-12 17:09:05.835679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.245 [2024-07-12 17:09:05.835695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.245 [2024-07-12 17:09:05.835709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:06.245 [2024-07-12 17:09:05.835734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.245 [2024-07-12 17:09:05.835756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.245 [2024-07-12 17:09:05.835773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.245 [2024-07-12 17:09:05.835787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.245 [2024-07-12 17:09:05.835804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.245 [2024-07-12 17:09:05.835818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.245 [2024-07-12 17:09:05.835835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.245 [2024-07-12 17:09:05.835854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.245 [2024-07-12 17:09:05.835871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.245 [2024-07-12 17:09:05.835885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.245 [2024-07-12 17:09:05.835902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.245 [2024-07-12 17:09:05.835916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.245 [2024-07-12 17:09:05.835932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.245 [2024-07-12 17:09:05.835946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.245 [2024-07-12 17:09:05.835962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.245 [2024-07-12 17:09:05.835977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.245 [2024-07-12 17:09:05.835993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.245 [2024-07-12 17:09:05.836008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.245 [2024-07-12 17:09:05.836034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.245 [2024-07-12 17:09:05.836048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.245 [2024-07-12 
17:09:05.836064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.245 [2024-07-12 17:09:05.836078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.245 [2024-07-12 17:09:05.836095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.245 [2024-07-12 17:09:05.836109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.245 [2024-07-12 17:09:05.836126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.245 [2024-07-12 17:09:05.836140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.245 [2024-07-12 17:09:05.836157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.246 [2024-07-12 17:09:05.836171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.246 [2024-07-12 17:09:05.836187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.246 [2024-07-12 17:09:05.836202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.246 [2024-07-12 17:09:05.836218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.246 [2024-07-12 17:09:05.836232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.246 [2024-07-12 17:09:05.836251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153d290 is same with the state(5) to be set 00:20:06.246 [2024-07-12 17:09:05.837512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.246 [2024-07-12 17:09:05.837535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.246 [2024-07-12 17:09:05.837558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.246 [2024-07-12 17:09:05.837574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.246 [2024-07-12 17:09:05.837591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.246 [2024-07-12 17:09:05.837606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.246 [2024-07-12 17:09:05.837622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.246 [2024-07-12 17:09:05.837637] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.246 [2024-07-12 17:09:05.837653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.246 [2024-07-12 17:09:05.837668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.246 [2024-07-12 17:09:05.837685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.246 [2024-07-12 17:09:05.837699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.246 [2024-07-12 17:09:05.837716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.246 [2024-07-12 17:09:05.837734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.246 [2024-07-12 17:09:05.837759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.246 [2024-07-12 17:09:05.837775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.246 [2024-07-12 17:09:05.837791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.246 [2024-07-12 17:09:05.837806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.246 [2024-07-12 17:09:05.837823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.246 [2024-07-12 17:09:05.837837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.246 [2024-07-12 17:09:05.837854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.246 [2024-07-12 17:09:05.837868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.246 [2024-07-12 17:09:05.837884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.246 [2024-07-12 17:09:05.837899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.246 [2024-07-12 17:09:05.837919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.246 [2024-07-12 17:09:05.837935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.246 [2024-07-12 17:09:05.837952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.246 [2024-07-12 17:09:05.837966] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.246 [2024-07-12 17:09:05.837982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.246 [2024-07-12 17:09:05.837996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.246 [2024-07-12 17:09:05.838012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.246 [2024-07-12 17:09:05.838036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.246 [2024-07-12 17:09:05.838052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.246 [2024-07-12 17:09:05.838067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.246 [2024-07-12 17:09:05.838082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.246 [2024-07-12 17:09:05.838097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.246 [2024-07-12 17:09:05.838113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.246 [2024-07-12 17:09:05.838127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.246 [2024-07-12 17:09:05.838143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.246 [2024-07-12 17:09:05.838157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.246 [2024-07-12 17:09:05.838173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.246 [2024-07-12 17:09:05.838187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.246 [2024-07-12 17:09:05.838203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.246 [2024-07-12 17:09:05.838218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.246 [2024-07-12 17:09:05.838233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.246 [2024-07-12 17:09:05.838247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.246 [2024-07-12 17:09:05.838264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.246 [2024-07-12 17:09:05.838279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.246 [2024-07-12 17:09:05.838296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.246 [2024-07-12 17:09:05.838314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.246 [2024-07-12 17:09:05.838331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.246 [2024-07-12 17:09:05.838345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.246 [2024-07-12 17:09:05.838361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.246 [2024-07-12 17:09:05.838375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.246 [2024-07-12 17:09:05.838392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.246 [2024-07-12 17:09:05.838406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.246 [2024-07-12 17:09:05.838422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.246 [2024-07-12 17:09:05.838436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.246 [2024-07-12 17:09:05.838452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.246 [2024-07-12 17:09:05.838467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.246 [2024-07-12 17:09:05.838483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.246 [2024-07-12 17:09:05.838498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.246 [2024-07-12 17:09:05.838514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.246 [2024-07-12 17:09:05.838528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.246 [2024-07-12 17:09:05.838544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.246 [2024-07-12 17:09:05.838559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.246 [2024-07-12 17:09:05.838576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.246 [2024-07-12 17:09:05.838590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.246 [2024-07-12 17:09:05.838607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.246 [2024-07-12 17:09:05.838621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.246 [2024-07-12 17:09:05.838637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.246 [2024-07-12 17:09:05.838651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.246 [2024-07-12 17:09:05.838668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.246 [2024-07-12 17:09:05.838682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.246 [2024-07-12 17:09:05.838702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.246 [2024-07-12 17:09:05.838717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.246 [2024-07-12 17:09:05.838747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.246 [2024-07-12 17:09:05.838763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.246 [2024-07-12 17:09:05.838780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.246 [2024-07-12 17:09:05.838795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.247 [2024-07-12 17:09:05.838811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.247 [2024-07-12 17:09:05.838826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.247 [2024-07-12 17:09:05.838842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.247 [2024-07-12 17:09:05.838856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.247 [2024-07-12 17:09:05.838872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.247 [2024-07-12 17:09:05.838887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.247 [2024-07-12 17:09:05.838903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.247 [2024-07-12 17:09:05.838917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:06.247 [2024-07-12 17:09:05.838934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.247 [2024-07-12 17:09:05.838948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.247 [2024-07-12 17:09:05.838964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.247 [2024-07-12 17:09:05.838979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.247 [2024-07-12 17:09:05.838995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.247 [2024-07-12 17:09:05.839009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.247 [2024-07-12 17:09:05.839037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.247 [2024-07-12 17:09:05.839052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.247 [2024-07-12 17:09:05.839070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.247 [2024-07-12 17:09:05.839084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.247 [2024-07-12 17:09:05.839103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.247 [2024-07-12 17:09:05.839122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.247 [2024-07-12 17:09:05.839139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.247 [2024-07-12 17:09:05.839153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.247 [2024-07-12 17:09:05.839170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.247 [2024-07-12 17:09:05.839184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.247 [2024-07-12 17:09:05.839201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.247 [2024-07-12 17:09:05.839215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.247 [2024-07-12 17:09:05.839231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.247 [2024-07-12 17:09:05.839246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:06.247 [2024-07-12 17:09:05.839262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.247 [2024-07-12 17:09:05.839276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.247 [2024-07-12 17:09:05.839292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.247 [2024-07-12 17:09:05.839307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.247 [2024-07-12 17:09:05.839323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.247 [2024-07-12 17:09:05.839338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.247 [2024-07-12 17:09:05.839355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.247 [2024-07-12 17:09:05.839369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.247 [2024-07-12 17:09:05.839386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.247 [2024-07-12 17:09:05.839400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.247 [2024-07-12 17:09:05.839416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.247 [2024-07-12 17:09:05.839431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.247 [2024-07-12 17:09:05.839448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.247 [2024-07-12 17:09:05.839463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.247 [2024-07-12 17:09:05.839479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.247 [2024-07-12 17:09:05.839495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.247 [2024-07-12 17:09:05.839516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.247 [2024-07-12 17:09:05.839531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.247 [2024-07-12 17:09:05.839547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.247 [2024-07-12 17:09:05.839561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.247 [2024-07-12 
17:09:05.839576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153e610 is same with the state(5) to be set 00:20:06.247 [2024-07-12 17:09:05.841790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.247 [2024-07-12 17:09:05.841817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.247 [2024-07-12 17:09:05.841842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.247 [2024-07-12 17:09:05.841859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.247 [2024-07-12 17:09:05.841876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.247 [2024-07-12 17:09:05.841890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.247 [2024-07-12 17:09:05.841907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.247 [2024-07-12 17:09:05.841921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.247 [2024-07-12 17:09:05.841938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.247 [2024-07-12 17:09:05.841953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.247 [2024-07-12 17:09:05.841969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.247 [2024-07-12 17:09:05.841984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.247 [2024-07-12 17:09:05.842001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.247 [2024-07-12 17:09:05.842015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.247 [2024-07-12 17:09:05.842037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.247 [2024-07-12 17:09:05.842052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.247 [2024-07-12 17:09:05.842070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.247 [2024-07-12 17:09:05.842085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.247 [2024-07-12 17:09:05.842101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.247 [2024-07-12 17:09:05.842116] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.247 [2024-07-12 17:09:05.842137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.247 [2024-07-12 17:09:05.842154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.247 [2024-07-12 17:09:05.842170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.248 [2024-07-12 17:09:05.842186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.248 [2024-07-12 17:09:05.842202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.248 [2024-07-12 17:09:05.842217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.248 [2024-07-12 17:09:05.842233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.248 [2024-07-12 17:09:05.842248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.248 [2024-07-12 17:09:05.842264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.248 [2024-07-12 17:09:05.842279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.248 [2024-07-12 17:09:05.842295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.248 [2024-07-12 17:09:05.842311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.248 [2024-07-12 17:09:05.842327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.248 [2024-07-12 17:09:05.842342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.248 [2024-07-12 17:09:05.842359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.248 [2024-07-12 17:09:05.842373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.248 [2024-07-12 17:09:05.842389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.248 [2024-07-12 17:09:05.842403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.248 [2024-07-12 17:09:05.842419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.248 [2024-07-12 17:09:05.842434] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.248 [2024-07-12 17:09:05.842450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.248 [2024-07-12 17:09:05.842464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.248 [2024-07-12 17:09:05.842480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.248 [2024-07-12 17:09:05.842494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.248 [2024-07-12 17:09:05.842510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.248 [2024-07-12 17:09:05.842529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.248 [2024-07-12 17:09:05.842545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.248 [2024-07-12 17:09:05.842560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.248 [2024-07-12 17:09:05.842575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.248 [2024-07-12 17:09:05.842590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.248 [2024-07-12 17:09:05.842606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.248 [2024-07-12 17:09:05.842629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.248 [2024-07-12 17:09:05.842645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.248 [2024-07-12 17:09:05.842660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.248 [2024-07-12 17:09:05.842676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.248 [2024-07-12 17:09:05.842690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.248 [2024-07-12 17:09:05.842706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.248 [2024-07-12 17:09:05.842719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.248 [2024-07-12 17:09:05.842745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.248 [2024-07-12 17:09:05.842761] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.248 [2024-07-12 17:09:05.842778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.248 [2024-07-12 17:09:05.842792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.248 [2024-07-12 17:09:05.842808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.248 [2024-07-12 17:09:05.842822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.248 [2024-07-12 17:09:05.842838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.248 [2024-07-12 17:09:05.842852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.248 [2024-07-12 17:09:05.842868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.248 [2024-07-12 17:09:05.842882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.248 [2024-07-12 17:09:05.842898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.248 [2024-07-12 17:09:05.842911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.248 [2024-07-12 17:09:05.842927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.248 [2024-07-12 17:09:05.842945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.248 [2024-07-12 17:09:05.842962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.248 [2024-07-12 17:09:05.842976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.248 [2024-07-12 17:09:05.842992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.248 [2024-07-12 17:09:05.843006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.248 [2024-07-12 17:09:05.843023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.248 [2024-07-12 17:09:05.843042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.248 [2024-07-12 17:09:05.843058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.248 [2024-07-12 17:09:05.843072] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.248 [2024-07-12 17:09:05.843089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.248 [2024-07-12 17:09:05.843109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.248 [2024-07-12 17:09:05.843125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.248 [2024-07-12 17:09:05.843139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.248 [2024-07-12 17:09:05.843155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.248 [2024-07-12 17:09:05.843169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.248 [2024-07-12 17:09:05.843185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.248 [2024-07-12 17:09:05.843199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.248 [2024-07-12 17:09:05.843214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.248 [2024-07-12 17:09:05.843229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.248 [2024-07-12 17:09:05.843244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.248 [2024-07-12 17:09:05.843258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.248 [2024-07-12 17:09:05.843274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.248 [2024-07-12 17:09:05.843288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.248 [2024-07-12 17:09:05.843304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.248 [2024-07-12 17:09:05.843319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.248 [2024-07-12 17:09:05.843338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.248 [2024-07-12 17:09:05.843353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.248 [2024-07-12 17:09:05.843369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.248 [2024-07-12 17:09:05.843383] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.248 [2024-07-12 17:09:05.843400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.248 [2024-07-12 17:09:05.843413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.248 [2024-07-12 17:09:05.843429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.248 [2024-07-12 17:09:05.843443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.248 [2024-07-12 17:09:05.843459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.248 [2024-07-12 17:09:05.843473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.248 [2024-07-12 17:09:05.843489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.248 [2024-07-12 17:09:05.843503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.248 [2024-07-12 17:09:05.843519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.249 [2024-07-12 17:09:05.843534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.249 [2024-07-12 17:09:05.843551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.249 [2024-07-12 17:09:05.843565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.249 [2024-07-12 17:09:05.843581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.249 [2024-07-12 17:09:05.843595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.249 [2024-07-12 17:09:05.843611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.249 [2024-07-12 17:09:05.843625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.249 [2024-07-12 17:09:05.843641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.249 [2024-07-12 17:09:05.843656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.249 [2024-07-12 17:09:05.843671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.249 [2024-07-12 17:09:05.843685] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.249 [2024-07-12 17:09:05.843701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.249 [2024-07-12 17:09:05.843730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.249 [2024-07-12 17:09:05.843754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.249 [2024-07-12 17:09:05.843769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.249 [2024-07-12 17:09:05.843785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.249 [2024-07-12 17:09:05.843799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.249 [2024-07-12 17:09:05.843815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.249 [2024-07-12 17:09:05.843830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.249 [2024-07-12 17:09:05.843844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1401f90 is same with the state(5) to be set 00:20:06.249 [2024-07-12 17:09:05.845942] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:06.249 [2024-07-12 17:09:05.845976] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:20:06.249 [2024-07-12 17:09:05.845995] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:20:06.249 [2024-07-12 17:09:05.846013] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:20:06.249 [2024-07-12 17:09:05.846136] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:06.249 [2024-07-12 17:09:05.846164] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:06.249 [2024-07-12 17:09:05.846190] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:20:06.249 [2024-07-12 17:09:05.846292] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:20:06.249 [2024-07-12 17:09:05.846316] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:20:06.249 task offset: 17280 on job bdev=Nvme3n1 fails
00:20:06.249
00:20:06.249 Latency(us)
00:20:06.249 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:06.249 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:06.249 Job: Nvme1n1 ended in about 0.67 seconds with error
00:20:06.249 Verification LBA range: start 0x0 length 0x400
00:20:06.249 Nvme1n1 : 0.67 191.77 11.99 95.89 0.00 219142.83 18835.53 264085.81
00:20:06.249 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:06.249 Job: Nvme2n1 ended in about 0.66 seconds with error
00:20:06.249 Verification LBA range: start 0x0 length 0x400
00:20:06.249 Nvme2n1 : 0.66 195.24 12.20 97.62 0.00 209136.70 19320.98 242337.56
00:20:06.249 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:06.249 Job: Nvme3n1 ended in about 0.65 seconds with error
00:20:06.249 Verification LBA range: start 0x0 length 0x400
00:20:06.249 Nvme3n1 : 0.65 196.24 12.26 98.12 0.00 201933.05 15728.64 251658.24
00:20:06.249 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:06.249 Job: Nvme4n1 ended in about 0.65 seconds with error
00:20:06.249 Verification LBA range: start 0x0 length 0x400
00:20:06.249 Nvme4n1 : 0.65 195.89 12.24 97.95 0.00 196176.66 14175.19 253211.69
00:20:06.249 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:06.249 Job: Nvme5n1 ended in about 0.67 seconds with error
00:20:06.249 Verification LBA range: start 0x0 length 0x400
00:20:06.249 Nvme5n1 : 0.67 102.90 6.43 87.98 0.00 293508.74 34952.53 222142.77
00:20:06.249 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:06.249 Job: Nvme6n1 ended in about 0.67 seconds with error
00:20:06.249 Verification LBA range: start 0x0 length 0x400
00:20:06.249 Nvme6n1 : 0.67 94.97 5.94 94.97 0.00 287016.58 23204.60 236123.78
00:20:06.249 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:06.249 Job: Nvme7n1 ended in about 0.68 seconds with error
00:20:06.249 Verification LBA range: start 0x0 length 0x400
00:20:06.249 Nvme7n1 : 0.68 94.51 5.91 94.51 0.00 279615.72 49710.27 262532.36
00:20:06.249 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:06.249 Job: Nvme8n1 ended in about 0.68 seconds with error
00:20:06.249 Verification LBA range: start 0x0 length 0x400
00:20:06.249 Nvme8n1 : 0.68 101.40 6.34 94.05 0.00 262278.32 54370.61 228356.55
00:20:06.249 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:06.249 Job: Nvme9n1 ended in about 0.68 seconds with error
00:20:06.249 Verification LBA range: start 0x0 length 0x400
00:20:06.249 Nvme9n1 : 0.68 93.59 5.85 93.59 0.00 265324.09 21748.24 279620.27
00:20:06.249 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:06.249 Job: Nvme10n1 ended in about 0.69 seconds with error
00:20:06.249 Verification LBA range: start 0x0 length 0x400
00:20:06.249 Nvme10n1 : 0.69 101.74 6.36 93.02 0.00 247056.59 18932.62 285834.05
00:20:06.249 ===================================================================================================================
00:20:06.249 Total : 1368.25 85.52 947.70 0.00 239634.69 14175.19 285834.05
00:20:06.249 [2024-07-12 17:09:05.874076] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:20:06.249 [2024-07-12 17:09:05.874162] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:20:06.249 [2024-07-12 17:09:05.874482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:06.249 [2024-07-12 17:09:05.874518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf87200 with addr=10.0.0.2, port=4420
00:20:06.249 [2024-07-12 17:09:05.874540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf87200 is same with the state(5) to be set
00:20:06.249 [2024-07-12 17:09:05.874674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:06.249 [2024-07-12 17:09:05.874701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13d0880 with addr=10.0.0.2, port=4420
00:20:06.249 [2024-07-12 17:09:05.874718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d0880 is same with the state(5) to be set
00:20:06.249 [2024-07-12 17:09:05.874845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:06.249 [2024-07-12 17:09:05.874872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13dd850 with addr=10.0.0.2, port=4420
00:20:06.249 [2024-07-12 17:09:05.874888] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dd850 is same with the state(5) to be set
00:20:06.249 [2024-07-12 17:09:05.875006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:06.249 [2024-07-12 17:09:05.875033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe9b610 with addr=10.0.0.2, port=4420
00:20:06.249 [2024-07-12 17:09:05.875049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe9b610 is same with the state(5) to be set
00:20:06.249 [2024-07-12 17:09:05.877029] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:20:06.249 [2024-07-12 17:09:05.877069] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:20:06.249 [2024-07-12 17:09:05.877273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:06.249 [2024-07-12 17:09:05.877316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1558b00 with addr=10.0.0.2, port=4420
00:20:06.249 [2024-07-12 17:09:05.877334] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1558b00 is same with the state(5) to be set
00:20:06.249 [2024-07-12 17:09:05.877455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:06.249 [2024-07-12 17:09:05.877481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1558920 with addr=10.0.0.2, port=4420
00:20:06.249 [2024-07-12 17:09:05.877498] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1558920 is same with the state(5) to be set
00:20:06.249 [2024-07-12 17:09:05.877711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:06.249 [2024-07-12 17:09:05.877768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1449690 with addr=10.0.0.2, port=4420
00:20:06.249 [2024-07-12 17:09:05.877787] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1449690 is same with the state(5) to be set 00:20:06.249 [2024-07-12 17:09:05.877813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf87200 (9): Bad file descriptor 00:20:06.249 [2024-07-12 17:09:05.877835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d0880 (9): Bad file descriptor 00:20:06.249 [2024-07-12 17:09:05.877854] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13dd850 (9): Bad file descriptor 00:20:06.249 [2024-07-12 17:09:05.877872] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe9b610 (9): Bad file descriptor 00:20:06.249 [2024-07-12 17:09:05.877922] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:06.249 [2024-07-12 17:09:05.877950] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:06.249 [2024-07-12 17:09:05.877977] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:06.249 [2024-07-12 17:09:05.877998] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:06.249 [2024-07-12 17:09:05.878017] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:06.249 [2024-07-12 17:09:05.878096] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:20:06.249 [2024-07-12 17:09:05.878259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.249 [2024-07-12 17:09:05.878288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1565980 with addr=10.0.0.2, port=4420 00:20:06.250 [2024-07-12 17:09:05.878304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1565980 is same with the state(5) to be set 00:20:06.250 [2024-07-12 17:09:05.878414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.250 [2024-07-12 17:09:05.878441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13bdc90 with addr=10.0.0.2, port=4420 00:20:06.250 [2024-07-12 17:09:05.878456] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13bdc90 is same with the state(5) to be set 00:20:06.250 [2024-07-12 17:09:05.878475] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1558b00 (9): Bad file descriptor 00:20:06.250 [2024-07-12 17:09:05.878494] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1558920 (9): Bad file descriptor 00:20:06.250 [2024-07-12 17:09:05.878512] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1449690 (9): Bad file descriptor 00:20:06.250 [2024-07-12 17:09:05.878528] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:06.250 [2024-07-12 17:09:05.878542] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:06.250 [2024-07-12 17:09:05.878563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:06.250 [2024-07-12 17:09:05.878584] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:20:06.250 [2024-07-12 17:09:05.878600] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:20:06.250 [2024-07-12 17:09:05.878613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:20:06.250 [2024-07-12 17:09:05.878629] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:20:06.250 [2024-07-12 17:09:05.878642] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:20:06.250 [2024-07-12 17:09:05.878656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:20:06.250 [2024-07-12 17:09:05.878681] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:20:06.250 [2024-07-12 17:09:05.878695] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:20:06.250 [2024-07-12 17:09:05.878709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:20:06.250 [2024-07-12 17:09:05.878814] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.250 [2024-07-12 17:09:05.878836] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.250 [2024-07-12 17:09:05.878849] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.250 [2024-07-12 17:09:05.878860] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.250 [2024-07-12 17:09:05.879008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.250 [2024-07-12 17:09:05.879033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c5eb0 with addr=10.0.0.2, port=4420 00:20:06.250 [2024-07-12 17:09:05.879049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c5eb0 is same with the state(5) to be set 00:20:06.250 [2024-07-12 17:09:05.879068] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1565980 (9): Bad file descriptor 00:20:06.250 [2024-07-12 17:09:05.879087] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13bdc90 (9): Bad file descriptor 00:20:06.250 [2024-07-12 17:09:05.879103] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:20:06.250 [2024-07-12 17:09:05.879116] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:20:06.250 [2024-07-12 17:09:05.879129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:20:06.250 [2024-07-12 17:09:05.879146] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:20:06.250 [2024-07-12 17:09:05.879161] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:20:06.250 [2024-07-12 17:09:05.879174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
00:20:06.250 [2024-07-12 17:09:05.879189] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:20:06.250 [2024-07-12 17:09:05.879203] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:20:06.250 [2024-07-12 17:09:05.879216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:20:06.250 [2024-07-12 17:09:05.879262] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.250 [2024-07-12 17:09:05.879280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.250 [2024-07-12 17:09:05.879300] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.250 [2024-07-12 17:09:05.879320] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c5eb0 (9): Bad file descriptor 00:20:06.250 [2024-07-12 17:09:05.879338] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:06.250 [2024-07-12 17:09:05.879351] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:20:06.250 [2024-07-12 17:09:05.879364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:20:06.250 [2024-07-12 17:09:05.879382] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:20:06.250 [2024-07-12 17:09:05.879395] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:20:06.250 [2024-07-12 17:09:05.879409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:20:06.250 [2024-07-12 17:09:05.879445] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.250 [2024-07-12 17:09:05.879463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.250 [2024-07-12 17:09:05.879476] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:20:06.250 [2024-07-12 17:09:05.879488] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:20:06.250 [2024-07-12 17:09:05.879502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:20:06.250 [2024-07-12 17:09:05.879536] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.818 17:09:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:20:06.818 17:09:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:20:07.755 17:09:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1167620 00:20:07.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1167620) - No such process 00:20:07.756 17:09:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:20:07.756 17:09:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:20:07.756 17:09:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:07.756 17:09:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:07.756 17:09:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:07.756 17:09:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:07.756 17:09:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:07.756 17:09:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:20:07.756 17:09:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:07.756 17:09:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:20:07.756 17:09:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:07.756 17:09:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:07.756 rmmod nvme_tcp 00:20:07.756 rmmod nvme_fabrics 00:20:07.756 rmmod nvme_keyring 00:20:07.756 17:09:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:07.756 17:09:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:20:07.756 17:09:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:20:07.756 17:09:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:20:07.756 17:09:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:07.756 17:09:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:07.756 17:09:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:07.756 17:09:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:07.756 17:09:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:07.756 17:09:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.756 17:09:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:07.756 17:09:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:10.295 17:09:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:10.295 00:20:10.295 real 0m7.207s 00:20:10.295 user 0m17.034s 00:20:10.295 sys 0m1.368s 00:20:10.295 
17:09:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:10.295 17:09:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:10.295 ************************************ 00:20:10.295 END TEST nvmf_shutdown_tc3 00:20:10.295 ************************************ 00:20:10.295 17:09:09 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:20:10.295 17:09:09 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:20:10.295 00:20:10.295 real 0m27.373s 00:20:10.295 user 1m16.311s 00:20:10.295 sys 0m6.345s 00:20:10.295 17:09:09 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:10.295 17:09:09 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:10.295 ************************************ 00:20:10.295 END TEST nvmf_shutdown 00:20:10.295 ************************************ 00:20:10.295 17:09:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:10.295 17:09:09 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:20:10.295 17:09:09 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:10.295 17:09:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:10.295 17:09:09 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:20:10.295 17:09:09 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:10.295 17:09:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:10.295 17:09:09 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:20:10.295 17:09:09 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:10.295 17:09:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:10.295 17:09:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:10.295 17:09:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:10.295 ************************************ 00:20:10.295 START TEST nvmf_multicontroller 00:20:10.295 ************************************ 00:20:10.295 17:09:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:10.295 * Looking for test storage... 
00:20:10.295 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:10.295 17:09:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:10.295 17:09:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:20:10.295 17:09:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:10.295 17:09:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:10.295 17:09:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:10.295 17:09:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:10.295 17:09:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:10.295 17:09:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:10.295 17:09:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:10.295 17:09:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:10.295 17:09:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:10.295 17:09:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:10.295 17:09:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:10.295 17:09:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:10.295 17:09:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:10.295 17:09:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:10.295 17:09:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:10.295 17:09:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:10.295 17:09:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:10.295 17:09:09 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:10.295 17:09:09 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:10.295 17:09:09 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:10.295 17:09:09 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.295 17:09:09 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.295 17:09:09 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.295 17:09:09 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:20:10.295 17:09:09 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.295 17:09:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:20:10.295 17:09:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:10.295 17:09:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:10.295 17:09:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:10.295 17:09:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:10.295 17:09:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:10.295 17:09:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:10.295 17:09:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:10.295 17:09:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:10.295 17:09:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:10.295 17:09:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:10.295 17:09:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:10.295 17:09:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:10.295 17:09:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:10.295 17:09:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:10.296 17:09:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:10.296 17:09:09 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:10.296 17:09:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:10.296 17:09:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:10.296 17:09:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:10.296 17:09:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:10.296 17:09:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:10.296 17:09:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:10.296 17:09:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:10.296 17:09:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:10.296 17:09:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:10.296 17:09:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:20:10.296 17:09:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.197 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:12.198 17:09:11 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:12.198 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:12.198 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:12.198 Found net devices under 0000:84:00.0: cvl_0_0 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:12.198 Found net devices under 0000:84:00.1: cvl_0_1 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:12.198 17:09:11 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:12.198 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:12.198 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:20:12.198 00:20:12.198 --- 10.0.0.2 ping statistics --- 00:20:12.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:12.198 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:12.198 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:12.198 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:20:12.198 00:20:12.198 --- 10.0.0.1 ping statistics --- 00:20:12.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:12.198 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1170657 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1170657 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1170657 ']' 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:20:12.198 17:09:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:12.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:12.199 17:09:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:12.199 17:09:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.457 [2024-07-12 17:09:11.897128] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:20:12.457 [2024-07-12 17:09:11.897226] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:12.457 EAL: No free 2048 kB hugepages reported on node 1 00:20:12.457 [2024-07-12 17:09:11.961657] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:12.457 [2024-07-12 17:09:12.073722] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:12.457 [2024-07-12 17:09:12.073806] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:12.457 [2024-07-12 17:09:12.073835] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:12.457 [2024-07-12 17:09:12.073847] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:12.457 [2024-07-12 17:09:12.073857] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:12.457 [2024-07-12 17:09:12.073944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:12.457 [2024-07-12 17:09:12.073977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:12.457 [2024-07-12 17:09:12.073980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.715 [2024-07-12 17:09:12.222987] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.715 Malloc0 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.715 [2024-07-12 17:09:12.292824] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.715 
17:09:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.715 [2024-07-12 17:09:12.300706] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.715 Malloc1 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1170678 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1170678 /var/tmp/bdevperf.sock 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1170678 ']' 00:20:12.715 17:09:12 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:12.715 17:09:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:12.716 17:09:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:12.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:12.716 17:09:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:12.716 17:09:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:13.281 17:09:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:13.281 17:09:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:20:13.281 17:09:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:13.281 17:09:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.281 17:09:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:13.539 NVMe0n1 00:20:13.539 17:09:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.539 17:09:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:13.539 17:09:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:13.539 17:09:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.539 17:09:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:13.539 17:09:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.539 1 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 
-t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:13.539 request: 00:20:13.539 { 00:20:13.539 "name": "NVMe0", 00:20:13.539 "trtype": "tcp", 00:20:13.539 "traddr": "10.0.0.2", 00:20:13.539 "adrfam": "ipv4", 00:20:13.539 "trsvcid": "4420", 00:20:13.539 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.539 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:13.539 "hostaddr": "10.0.0.2", 00:20:13.539 "hostsvcid": "60000", 00:20:13.539 "prchk_reftag": false, 00:20:13.539 "prchk_guard": false, 00:20:13.539 "hdgst": false, 00:20:13.539 "ddgst": false, 00:20:13.539 "method": "bdev_nvme_attach_controller", 00:20:13.539 "req_id": 1 00:20:13.539 } 00:20:13.539 Got JSON-RPC error response 00:20:13.539 response: 00:20:13.539 { 00:20:13.539 "code": -114, 00:20:13.539 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:13.539 } 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:13.539 request: 00:20:13.539 { 00:20:13.539 "name": "NVMe0", 00:20:13.539 "trtype": "tcp", 00:20:13.539 "traddr": "10.0.0.2", 00:20:13.539 "adrfam": "ipv4", 00:20:13.539 "trsvcid": "4420", 00:20:13.539 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:13.539 "hostaddr": "10.0.0.2", 00:20:13.539 "hostsvcid": "60000", 00:20:13.539 "prchk_reftag": false, 00:20:13.539 "prchk_guard": false, 
00:20:13.539 "hdgst": false, 00:20:13.539 "ddgst": false, 00:20:13.539 "method": "bdev_nvme_attach_controller", 00:20:13.539 "req_id": 1 00:20:13.539 } 00:20:13.539 Got JSON-RPC error response 00:20:13.539 response: 00:20:13.539 { 00:20:13.539 "code": -114, 00:20:13.539 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:13.539 } 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:13.539 request: 00:20:13.539 { 00:20:13.539 "name": "NVMe0", 00:20:13.539 "trtype": "tcp", 00:20:13.539 "traddr": "10.0.0.2", 00:20:13.539 "adrfam": "ipv4", 00:20:13.539 "trsvcid": "4420", 00:20:13.539 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.539 "hostaddr": "10.0.0.2", 00:20:13.539 "hostsvcid": "60000", 00:20:13.539 "prchk_reftag": false, 00:20:13.539 "prchk_guard": false, 00:20:13.539 "hdgst": false, 00:20:13.539 "ddgst": false, 00:20:13.539 "multipath": "disable", 00:20:13.539 "method": "bdev_nvme_attach_controller", 00:20:13.539 "req_id": 1 00:20:13.539 } 00:20:13.539 Got JSON-RPC error response 00:20:13.539 response: 00:20:13.539 { 00:20:13.539 "code": -114, 00:20:13.539 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:20:13.539 } 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:13.539 17:09:13 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.539 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:13.539 request: 00:20:13.539 { 00:20:13.539 "name": "NVMe0", 00:20:13.539 "trtype": "tcp", 00:20:13.539 "traddr": "10.0.0.2", 00:20:13.539 "adrfam": "ipv4", 00:20:13.539 "trsvcid": "4420", 00:20:13.539 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.539 "hostaddr": "10.0.0.2", 00:20:13.539 "hostsvcid": "60000", 00:20:13.539 "prchk_reftag": false, 00:20:13.539 "prchk_guard": false, 00:20:13.540 "hdgst": false, 00:20:13.540 "ddgst": false, 00:20:13.540 "multipath": "failover", 00:20:13.540 "method": "bdev_nvme_attach_controller", 00:20:13.540 "req_id": 1 00:20:13.540 } 00:20:13.540 Got JSON-RPC error response 00:20:13.540 response: 00:20:13.540 { 00:20:13.540 "code": -114, 00:20:13.540 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:13.540 } 00:20:13.540 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:13.540 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:13.540 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:13.540 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:13.540 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:13.540 17:09:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:13.540 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.540 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:13.797 00:20:13.797 17:09:13 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.797 17:09:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:13.797 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.797 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:13.797 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.797 17:09:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:13.797 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.797 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:13.797 00:20:13.797 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.797 17:09:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:13.797 17:09:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:13.797 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.797 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:13.797 17:09:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.797 17:09:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:13.797 17:09:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:15.166 0 00:20:15.166 17:09:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:15.166 17:09:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.166 17:09:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:15.166 17:09:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.166 17:09:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1170678 00:20:15.166 17:09:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1170678 ']' 00:20:15.166 17:09:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1170678 00:20:15.166 17:09:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:20:15.166 17:09:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:15.166 17:09:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1170678 00:20:15.166 17:09:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:15.166 17:09:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:15.166 17:09:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1170678' 00:20:15.166 killing process with pid 1170678 00:20:15.166 17:09:14 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1170678 00:20:15.166 17:09:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1170678 00:20:15.423 17:09:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:15.423 17:09:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.423 17:09:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:15.423 17:09:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.423 17:09:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:15.423 17:09:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.423 17:09:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:15.423 17:09:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.423 17:09:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:20:15.423 17:09:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:15.423 17:09:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:20:15.423 17:09:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:20:15.423 17:09:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:20:15.423 17:09:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:20:15.423 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:15.423 [2024-07-12 17:09:12.406888] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:20:15.423 [2024-07-12 17:09:12.406983] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1170678 ] 00:20:15.423 EAL: No free 2048 kB hugepages reported on node 1 00:20:15.423 [2024-07-12 17:09:12.468275] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.423 [2024-07-12 17:09:12.584136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:15.423 [2024-07-12 17:09:13.458157] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 31c68cc4-915b-434d-ad69-fa4d946c8cc4 already exists 00:20:15.423 [2024-07-12 17:09:13.458202] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:31c68cc4-915b-434d-ad69-fa4d946c8cc4 alias for bdev NVMe1n1 00:20:15.423 [2024-07-12 17:09:13.458218] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:15.423 Running I/O for 1 seconds... 
00:20:15.423 00:20:15.423 Latency(us) 00:20:15.423 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.423 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:15.424 NVMe0n1 : 1.01 19164.20 74.86 0.00 0.00 6668.78 2560.76 11747.93 00:20:15.424 =================================================================================================================== 00:20:15.424 Total : 19164.20 74.86 0.00 0.00 6668.78 2560.76 11747.93 00:20:15.424 Received shutdown signal, test time was about 1.000000 seconds 00:20:15.424 00:20:15.424 Latency(us) 00:20:15.424 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.424 =================================================================================================================== 00:20:15.424 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:15.424 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:15.424 17:09:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:15.424 17:09:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:20:15.424 17:09:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:20:15.424 17:09:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:15.424 17:09:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:20:15.424 17:09:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:15.424 17:09:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:20:15.424 17:09:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:15.424 17:09:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:15.424 rmmod nvme_tcp 00:20:15.424 rmmod nvme_fabrics 00:20:15.424 rmmod nvme_keyring 00:20:15.424 17:09:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:15.424 17:09:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:20:15.424 17:09:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:20:15.424 17:09:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1170657 ']' 00:20:15.424 17:09:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1170657 00:20:15.424 17:09:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1170657 ']' 00:20:15.424 17:09:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1170657 00:20:15.424 17:09:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:20:15.424 17:09:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:15.424 17:09:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1170657 00:20:15.424 17:09:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:15.424 17:09:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:15.424 17:09:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1170657' 00:20:15.424 killing process with pid 1170657 00:20:15.424 17:09:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1170657 00:20:15.424 17:09:15 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1170657 00:20:15.681 17:09:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:15.681 17:09:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:15.681 17:09:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:15.681 17:09:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:15.681 17:09:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:15.681 17:09:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.681 17:09:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:15.681 17:09:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:18.212 17:09:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:18.212 00:20:18.212 real 0m7.846s 00:20:18.212 user 0m12.933s 00:20:18.212 sys 0m2.351s 00:20:18.212 17:09:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:18.212 17:09:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:18.212 ************************************ 00:20:18.212 END TEST nvmf_multicontroller 00:20:18.212 ************************************ 00:20:18.212 17:09:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:18.212 17:09:17 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:18.212 17:09:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:18.212 17:09:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:18.212 17:09:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:18.212 ************************************ 00:20:18.212 START TEST nvmf_aer 00:20:18.212 ************************************ 00:20:18.212 17:09:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:18.212 * Looking for test storage... 
00:20:18.212 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:18.212 17:09:17 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:18.212 17:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:20:18.212 17:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:18.212 17:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:18.212 17:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:18.212 17:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:18.212 17:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:18.212 17:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:18.212 17:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:18.212 17:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:18.212 17:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:18.212 17:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:18.212 17:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:18.212 17:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:18.212 17:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:18.212 17:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:18.212 17:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:18.212 17:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:18.212 17:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:18.212 17:09:17 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:18.212 17:09:17 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:18.212 17:09:17 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:18.212 17:09:17 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.212 17:09:17 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.212 17:09:17 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.212 17:09:17 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:20:18.212 17:09:17 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.212 17:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:20:18.212 17:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:18.212 17:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:18.212 17:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:18.212 17:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:18.212 17:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:18.212 17:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:18.212 17:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:18.212 17:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:18.212 17:09:17 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:20:18.212 17:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:18.212 17:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:18.212 17:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:18.212 17:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:18.212 17:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:18.212 17:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:18.212 17:09:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:18.212 17:09:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:18.212 17:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:18.212 17:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:18.212 17:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:20:18.212 17:09:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:20.111 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:20.111 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:20:20.111 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:20.111 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:20:20.111 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:20.111 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:20.111 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:20.111 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:20:20.111 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:20.111 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:20.112 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 
0x159b)' 00:20:20.112 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:20.112 Found net devices under 0000:84:00.0: cvl_0_0 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:20.112 Found net devices under 0000:84:00.1: cvl_0_1 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:20.112 
17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:20.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:20.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:20:20.112 00:20:20.112 --- 10.0.0.2 ping statistics --- 00:20:20.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:20.112 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:20.112 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:20.112 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:20:20.112 00:20:20.112 --- 10.0.0.1 ping statistics --- 00:20:20.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:20.112 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1173030 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1173030 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 1173030 ']' 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:20.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:20.112 17:09:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:20.112 [2024-07-12 17:09:19.792137] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:20:20.112 [2024-07-12 17:09:19.792216] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:20.371 EAL: No free 2048 kB hugepages reported on node 1 00:20:20.371 [2024-07-12 17:09:19.856978] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:20.371 [2024-07-12 17:09:19.967995] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:20.371 [2024-07-12 17:09:19.968060] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:20.371 [2024-07-12 17:09:19.968074] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:20.371 [2024-07-12 17:09:19.968085] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:20.371 [2024-07-12 17:09:19.968109] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:20.371 [2024-07-12 17:09:19.968224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:20.371 [2024-07-12 17:09:19.968286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:20.371 [2024-07-12 17:09:19.968315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:20.371 [2024-07-12 17:09:19.968317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:20.628 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:20.628 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:20:20.628 17:09:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:20.628 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:20.628 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:20.628 17:09:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:20.628 17:09:20 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:20.628 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.628 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:20.628 [2024-07-12 17:09:20.138573] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:20.629 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.629 17:09:20 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:20.629 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.629 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:20.629 Malloc0 00:20:20.629 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.629 17:09:20 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:20.629 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.629 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:20.629 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.629 17:09:20 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:20.629 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.629 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:20.629 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.629 17:09:20 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:20.629 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.629 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:20.629 [2024-07-12 17:09:20.192471] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:20:20.629 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.629 17:09:20 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:20.629 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.629 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:20.629 [ 00:20:20.629 { 00:20:20.629 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:20.629 "subtype": "Discovery", 00:20:20.629 "listen_addresses": [], 00:20:20.629 "allow_any_host": true, 00:20:20.629 "hosts": [] 00:20:20.629 }, 00:20:20.629 { 00:20:20.629 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.629 "subtype": "NVMe", 00:20:20.629 "listen_addresses": [ 00:20:20.629 { 00:20:20.629 "trtype": "TCP", 00:20:20.629 "adrfam": "IPv4", 00:20:20.629 "traddr": "10.0.0.2", 00:20:20.629 "trsvcid": "4420" 00:20:20.629 } 00:20:20.629 ], 00:20:20.629 "allow_any_host": true, 00:20:20.629 "hosts": [], 00:20:20.629 "serial_number": "SPDK00000000000001", 00:20:20.629 "model_number": "SPDK bdev Controller", 00:20:20.629 "max_namespaces": 2, 00:20:20.629 "min_cntlid": 1, 00:20:20.629 "max_cntlid": 65519, 00:20:20.629 "namespaces": [ 00:20:20.629 { 00:20:20.629 "nsid": 1, 00:20:20.629 "bdev_name": "Malloc0", 00:20:20.629 "name": "Malloc0", 00:20:20.629 "nguid": "31D924179D364296909A68FF6852203F", 00:20:20.629 "uuid": "31d92417-9d36-4296-909a-68ff6852203f" 00:20:20.629 } 00:20:20.629 ] 00:20:20.629 } 00:20:20.629 ] 00:20:20.629 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.629 17:09:20 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:20.629 17:09:20 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:20.629 17:09:20 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=1173060 00:20:20.629 17:09:20 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:20.629 17:09:20 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:20.629 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:20:20.629 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:20.629 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:20:20.629 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:20:20.629 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:20.629 EAL: No free 2048 kB hugepages reported on node 1 00:20:20.629 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:20.629 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:20:20.629 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:20:20.629 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:20.887 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:20.887 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:20:20.887 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:20:20.887 17:09:20 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:20.887 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.887 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:20.887 Malloc1 00:20:20.887 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.887 17:09:20 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:20.887 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.887 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:20.887 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.887 17:09:20 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:20.887 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.887 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:20.887 [ 00:20:20.887 { 00:20:20.887 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:20.887 "subtype": "Discovery", 00:20:20.887 "listen_addresses": [], 00:20:20.887 "allow_any_host": true, 00:20:20.887 "hosts": [] 00:20:20.887 }, 00:20:20.887 { 00:20:20.887 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.887 "subtype": "NVMe", 00:20:20.887 "listen_addresses": [ 00:20:20.887 { 00:20:20.887 "trtype": "TCP", 00:20:20.887 "adrfam": "IPv4", 00:20:20.887 "traddr": "10.0.0.2", 00:20:20.887 "trsvcid": "4420" 00:20:20.887 } 00:20:20.887 ], 00:20:20.887 "allow_any_host": true, 00:20:20.887 "hosts": [], 00:20:20.887 "serial_number": "SPDK00000000000001", 00:20:20.887 "model_number": "SPDK bdev Controller", 00:20:20.887 "max_namespaces": 2, 00:20:20.887 "min_cntlid": 1, 00:20:20.887 "max_cntlid": 65519, 00:20:20.887 "namespaces": [ 00:20:20.887 { 00:20:20.887 "nsid": 1, 00:20:20.887 "bdev_name": "Malloc0", 00:20:20.887 "name": "Malloc0", 00:20:20.887 "nguid": "31D924179D364296909A68FF6852203F", 00:20:20.887 "uuid": "31d92417-9d36-4296-909a-68ff6852203f" 00:20:20.887 }, 00:20:20.887 { 00:20:20.887 "nsid": 2, 00:20:20.887 "bdev_name": "Malloc1", 00:20:20.887 "name": "Malloc1", 00:20:20.887 "nguid": "5781BFA557D54D0ABCA9D9586A16D99B", 00:20:20.887 "uuid": "5781bfa5-57d5-4d0a-bca9-d9586a16d99b" 00:20:20.887 } 00:20:20.887 ] 00:20:20.887 } 00:20:20.887 ] 00:20:20.887 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.888 17:09:20 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 1173060 00:20:20.888 Asynchronous Event Request test 00:20:20.888 Attaching to 10.0.0.2 00:20:20.888 Attached to 10.0.0.2 00:20:20.888 Registering asynchronous event callbacks... 00:20:20.888 Starting namespace attribute notice tests for all controllers... 00:20:20.888 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:20.888 aer_cb - Changed Namespace 00:20:20.888 Cleaning up... 
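For reference, the AER flow traced above can be reproduced by hand against a running nvmf_tgt; the rpc_cmd calls in the trace are the test harness wrapper around scripts/rpc.py. The lines below are a minimal sketch only, assuming a target already running on the default RPC socket (/var/tmp/spdk.sock) and reusing the same names and flags the test used above.
# Sketch (assumptions: nvmf_tgt already running, default RPC socket).
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# With the AER tool (test/nvme/aer/aer) attached to the subsystem, adding a
# second namespace triggers the namespace-attribute-changed notice seen as
# "aer_cb - Changed Namespace" in the output above.
./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2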
00:20:20.888 17:09:20 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:20.888 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.888 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:20.888 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.888 17:09:20 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:20.888 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.888 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:20.888 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.888 17:09:20 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:20.888 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.888 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:20.888 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.888 17:09:20 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:20.888 17:09:20 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:20:20.888 17:09:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:20.888 17:09:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:20:20.888 17:09:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:20.888 17:09:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:20:20.888 17:09:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:20.888 17:09:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:20.888 rmmod nvme_tcp 00:20:21.145 rmmod nvme_fabrics 00:20:21.145 rmmod nvme_keyring 00:20:21.145 17:09:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:21.145 17:09:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:20:21.145 17:09:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:20:21.145 17:09:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1173030 ']' 00:20:21.145 17:09:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1173030 00:20:21.145 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 1173030 ']' 00:20:21.145 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 1173030 00:20:21.145 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:20:21.145 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:21.145 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1173030 00:20:21.145 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:21.145 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:21.145 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1173030' 00:20:21.145 killing process with pid 1173030 00:20:21.145 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 1173030 00:20:21.145 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 1173030 00:20:21.404 17:09:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:21.404 17:09:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:21.404 17:09:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- 
# nvmf_tcp_fini 00:20:21.404 17:09:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:21.404 17:09:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:21.404 17:09:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.404 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:21.404 17:09:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.313 17:09:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:23.313 00:20:23.313 real 0m5.558s 00:20:23.313 user 0m4.368s 00:20:23.313 sys 0m2.046s 00:20:23.313 17:09:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:23.313 17:09:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:23.313 ************************************ 00:20:23.313 END TEST nvmf_aer 00:20:23.313 ************************************ 00:20:23.313 17:09:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:23.313 17:09:23 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:23.314 17:09:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:23.314 17:09:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:23.314 17:09:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:23.573 ************************************ 00:20:23.573 START TEST nvmf_async_init 00:20:23.573 ************************************ 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:23.573 * Looking for test storage... 
00:20:23.573 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=f46c0846d29a4a30acafc5d56d3fcb05 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:23.573 17:09:23 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:20:23.573 17:09:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:26.101 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:26.101 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:26.101 Found net devices under 0000:84:00.0: cvl_0_0 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:26.101 Found net devices under 0000:84:00.1: cvl_0_1 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:26.101 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:26.101 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:20:26.101 00:20:26.101 --- 10.0.0.2 ping statistics --- 00:20:26.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.101 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:26.101 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:26.101 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:20:26.101 00:20:26.101 --- 10.0.0.1 ping statistics --- 00:20:26.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.101 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1175130 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1175130 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 1175130 ']' 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:26.101 [2024-07-12 17:09:25.438216] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
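For readability, the nvmf_tcp_init sequence traced above condenses to roughly the following steps; this is a sketch reconstructed from the trace, with the interface names cvl_0_0/cvl_0_1 and the workspace path taken from this run rather than being general defaults:

  ip netns add cvl_0_0_ns_spdk                       # the target gets its own network namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target-side e810 port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # target reachable from the initiator side
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # and the other way around
  modprobe nvme-tcp
  # nvmf_tgt is then started inside the namespace:
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1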
00:20:26.101 [2024-07-12 17:09:25.438284] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:26.101 EAL: No free 2048 kB hugepages reported on node 1 00:20:26.101 [2024-07-12 17:09:25.501548] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.101 [2024-07-12 17:09:25.603898] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:26.101 [2024-07-12 17:09:25.603959] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:26.101 [2024-07-12 17:09:25.603987] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:26.101 [2024-07-12 17:09:25.603999] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:26.101 [2024-07-12 17:09:25.604009] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:26.101 [2024-07-12 17:09:25.604039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:26.101 [2024-07-12 17:09:25.732974] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:26.101 null0 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:26.101 17:09:25 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g f46c0846d29a4a30acafc5d56d3fcb05 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:26.101 [2024-07-12 17:09:25.773226] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.101 17:09:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:26.387 nvme0n1 00:20:26.387 17:09:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.387 17:09:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:26.387 17:09:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.387 17:09:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:26.387 [ 00:20:26.387 { 00:20:26.387 "name": "nvme0n1", 00:20:26.387 "aliases": [ 00:20:26.387 "f46c0846-d29a-4a30-acaf-c5d56d3fcb05" 00:20:26.387 ], 00:20:26.387 "product_name": "NVMe disk", 00:20:26.387 "block_size": 512, 00:20:26.387 "num_blocks": 2097152, 00:20:26.387 "uuid": "f46c0846-d29a-4a30-acaf-c5d56d3fcb05", 00:20:26.387 "assigned_rate_limits": { 00:20:26.387 "rw_ios_per_sec": 0, 00:20:26.387 "rw_mbytes_per_sec": 0, 00:20:26.387 "r_mbytes_per_sec": 0, 00:20:26.387 "w_mbytes_per_sec": 0 00:20:26.387 }, 00:20:26.387 "claimed": false, 00:20:26.387 "zoned": false, 00:20:26.387 "supported_io_types": { 00:20:26.387 "read": true, 00:20:26.387 "write": true, 00:20:26.387 "unmap": false, 00:20:26.387 "flush": true, 00:20:26.387 "reset": true, 00:20:26.387 "nvme_admin": true, 00:20:26.387 "nvme_io": true, 00:20:26.387 "nvme_io_md": false, 00:20:26.387 "write_zeroes": true, 00:20:26.387 "zcopy": false, 00:20:26.387 "get_zone_info": false, 00:20:26.387 "zone_management": false, 00:20:26.387 "zone_append": false, 00:20:26.387 "compare": true, 00:20:26.387 "compare_and_write": true, 00:20:26.387 "abort": true, 00:20:26.387 "seek_hole": false, 00:20:26.387 "seek_data": false, 00:20:26.387 "copy": true, 00:20:26.387 "nvme_iov_md": false 00:20:26.387 }, 00:20:26.387 "memory_domains": [ 00:20:26.387 { 00:20:26.387 "dma_device_id": "system", 00:20:26.387 "dma_device_type": 1 00:20:26.387 } 00:20:26.387 ], 00:20:26.387 "driver_specific": { 00:20:26.387 "nvme": [ 00:20:26.387 { 00:20:26.387 "trid": { 00:20:26.387 "trtype": "TCP", 00:20:26.387 "adrfam": "IPv4", 00:20:26.387 "traddr": "10.0.0.2", 
00:20:26.387 "trsvcid": "4420", 00:20:26.387 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:26.387 }, 00:20:26.387 "ctrlr_data": { 00:20:26.387 "cntlid": 1, 00:20:26.387 "vendor_id": "0x8086", 00:20:26.387 "model_number": "SPDK bdev Controller", 00:20:26.387 "serial_number": "00000000000000000000", 00:20:26.387 "firmware_revision": "24.09", 00:20:26.387 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:26.387 "oacs": { 00:20:26.387 "security": 0, 00:20:26.387 "format": 0, 00:20:26.387 "firmware": 0, 00:20:26.387 "ns_manage": 0 00:20:26.387 }, 00:20:26.387 "multi_ctrlr": true, 00:20:26.387 "ana_reporting": false 00:20:26.387 }, 00:20:26.387 "vs": { 00:20:26.387 "nvme_version": "1.3" 00:20:26.387 }, 00:20:26.387 "ns_data": { 00:20:26.387 "id": 1, 00:20:26.387 "can_share": true 00:20:26.387 } 00:20:26.387 } 00:20:26.387 ], 00:20:26.387 "mp_policy": "active_passive" 00:20:26.387 } 00:20:26.387 } 00:20:26.387 ] 00:20:26.387 17:09:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.387 17:09:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:26.387 17:09:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.387 17:09:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:26.387 [2024-07-12 17:09:26.021989] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:26.387 [2024-07-12 17:09:26.022090] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e6f5c0 (9): Bad file descriptor 00:20:26.645 [2024-07-12 17:09:26.153864] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:26.645 17:09:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.645 17:09:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:26.645 17:09:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.645 17:09:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:26.645 [ 00:20:26.645 { 00:20:26.645 "name": "nvme0n1", 00:20:26.645 "aliases": [ 00:20:26.645 "f46c0846-d29a-4a30-acaf-c5d56d3fcb05" 00:20:26.645 ], 00:20:26.645 "product_name": "NVMe disk", 00:20:26.645 "block_size": 512, 00:20:26.645 "num_blocks": 2097152, 00:20:26.645 "uuid": "f46c0846-d29a-4a30-acaf-c5d56d3fcb05", 00:20:26.645 "assigned_rate_limits": { 00:20:26.645 "rw_ios_per_sec": 0, 00:20:26.645 "rw_mbytes_per_sec": 0, 00:20:26.645 "r_mbytes_per_sec": 0, 00:20:26.645 "w_mbytes_per_sec": 0 00:20:26.645 }, 00:20:26.645 "claimed": false, 00:20:26.645 "zoned": false, 00:20:26.645 "supported_io_types": { 00:20:26.645 "read": true, 00:20:26.645 "write": true, 00:20:26.645 "unmap": false, 00:20:26.645 "flush": true, 00:20:26.645 "reset": true, 00:20:26.645 "nvme_admin": true, 00:20:26.645 "nvme_io": true, 00:20:26.645 "nvme_io_md": false, 00:20:26.645 "write_zeroes": true, 00:20:26.645 "zcopy": false, 00:20:26.645 "get_zone_info": false, 00:20:26.645 "zone_management": false, 00:20:26.645 "zone_append": false, 00:20:26.645 "compare": true, 00:20:26.645 "compare_and_write": true, 00:20:26.645 "abort": true, 00:20:26.645 "seek_hole": false, 00:20:26.645 "seek_data": false, 00:20:26.645 "copy": true, 00:20:26.645 "nvme_iov_md": false 00:20:26.645 }, 00:20:26.645 "memory_domains": [ 00:20:26.645 { 00:20:26.645 "dma_device_id": "system", 00:20:26.645 "dma_device_type": 
1 00:20:26.645 } 00:20:26.645 ], 00:20:26.645 "driver_specific": { 00:20:26.645 "nvme": [ 00:20:26.645 { 00:20:26.645 "trid": { 00:20:26.645 "trtype": "TCP", 00:20:26.645 "adrfam": "IPv4", 00:20:26.645 "traddr": "10.0.0.2", 00:20:26.645 "trsvcid": "4420", 00:20:26.645 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:26.645 }, 00:20:26.645 "ctrlr_data": { 00:20:26.645 "cntlid": 2, 00:20:26.645 "vendor_id": "0x8086", 00:20:26.645 "model_number": "SPDK bdev Controller", 00:20:26.645 "serial_number": "00000000000000000000", 00:20:26.645 "firmware_revision": "24.09", 00:20:26.645 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:26.645 "oacs": { 00:20:26.645 "security": 0, 00:20:26.645 "format": 0, 00:20:26.645 "firmware": 0, 00:20:26.645 "ns_manage": 0 00:20:26.645 }, 00:20:26.645 "multi_ctrlr": true, 00:20:26.645 "ana_reporting": false 00:20:26.645 }, 00:20:26.645 "vs": { 00:20:26.645 "nvme_version": "1.3" 00:20:26.645 }, 00:20:26.645 "ns_data": { 00:20:26.645 "id": 1, 00:20:26.645 "can_share": true 00:20:26.645 } 00:20:26.645 } 00:20:26.645 ], 00:20:26.645 "mp_policy": "active_passive" 00:20:26.645 } 00:20:26.645 } 00:20:26.645 ] 00:20:26.645 17:09:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.645 17:09:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:26.645 17:09:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.645 17:09:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:26.645 17:09:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.645 17:09:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:20:26.645 17:09:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.MmpNFf9waD 00:20:26.645 17:09:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:26.645 17:09:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.MmpNFf9waD 00:20:26.645 17:09:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:20:26.645 17:09:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.645 17:09:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:26.645 17:09:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.645 17:09:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:20:26.645 17:09:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.645 17:09:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:26.645 [2024-07-12 17:09:26.198667] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:26.645 [2024-07-12 17:09:26.198835] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:26.645 17:09:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.645 17:09:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.MmpNFf9waD 00:20:26.645 17:09:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
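Read together, the rpc_cmd calls traced above amount to the following setup for the first (non-TLS) part of async_init; a sketch only, with rpc_cmd assumed to be shorthand for scripts/rpc.py talking to the nvmf_tgt launched earlier:

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py bdev_null_create null0 1024 512             # 1024 MiB null bdev, 512-byte blocks (num_blocks 2097152)
  rpc.py bdev_wait_for_examine
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g f46c0846d29a4a30acafc5d56d3fcb05
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
  rpc.py bdev_get_bdevs -b nvme0n1                   # nvme0n1 reports the nguid above as its uuid, cntlid 1
  rpc.py bdev_nvme_reset_controller nvme0            # after the reset the same bdev reports cntlid 2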
00:20:26.645 17:09:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:26.645 [2024-07-12 17:09:26.206680] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:26.645 17:09:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.645 17:09:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.MmpNFf9waD 00:20:26.645 17:09:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.645 17:09:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:26.645 [2024-07-12 17:09:26.214706] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:26.645 [2024-07-12 17:09:26.214806] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:26.645 nvme0n1 00:20:26.645 17:09:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.645 17:09:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:26.645 17:09:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.645 17:09:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:26.645 [ 00:20:26.645 { 00:20:26.645 "name": "nvme0n1", 00:20:26.645 "aliases": [ 00:20:26.645 "f46c0846-d29a-4a30-acaf-c5d56d3fcb05" 00:20:26.645 ], 00:20:26.645 "product_name": "NVMe disk", 00:20:26.645 "block_size": 512, 00:20:26.645 "num_blocks": 2097152, 00:20:26.645 "uuid": "f46c0846-d29a-4a30-acaf-c5d56d3fcb05", 00:20:26.645 "assigned_rate_limits": { 00:20:26.645 "rw_ios_per_sec": 0, 00:20:26.645 "rw_mbytes_per_sec": 0, 00:20:26.645 "r_mbytes_per_sec": 0, 00:20:26.645 "w_mbytes_per_sec": 0 00:20:26.645 }, 00:20:26.645 "claimed": false, 00:20:26.645 "zoned": false, 00:20:26.645 "supported_io_types": { 00:20:26.645 "read": true, 00:20:26.645 "write": true, 00:20:26.645 "unmap": false, 00:20:26.645 "flush": true, 00:20:26.645 "reset": true, 00:20:26.645 "nvme_admin": true, 00:20:26.645 "nvme_io": true, 00:20:26.645 "nvme_io_md": false, 00:20:26.645 "write_zeroes": true, 00:20:26.645 "zcopy": false, 00:20:26.645 "get_zone_info": false, 00:20:26.645 "zone_management": false, 00:20:26.645 "zone_append": false, 00:20:26.645 "compare": true, 00:20:26.645 "compare_and_write": true, 00:20:26.645 "abort": true, 00:20:26.645 "seek_hole": false, 00:20:26.645 "seek_data": false, 00:20:26.645 "copy": true, 00:20:26.645 "nvme_iov_md": false 00:20:26.645 }, 00:20:26.645 "memory_domains": [ 00:20:26.645 { 00:20:26.645 "dma_device_id": "system", 00:20:26.645 "dma_device_type": 1 00:20:26.645 } 00:20:26.645 ], 00:20:26.645 "driver_specific": { 00:20:26.645 "nvme": [ 00:20:26.645 { 00:20:26.645 "trid": { 00:20:26.645 "trtype": "TCP", 00:20:26.645 "adrfam": "IPv4", 00:20:26.645 "traddr": "10.0.0.2", 00:20:26.645 "trsvcid": "4421", 00:20:26.645 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:26.645 }, 00:20:26.645 "ctrlr_data": { 00:20:26.645 "cntlid": 3, 00:20:26.645 "vendor_id": "0x8086", 00:20:26.645 "model_number": "SPDK bdev Controller", 00:20:26.645 "serial_number": "00000000000000000000", 00:20:26.645 "firmware_revision": "24.09", 00:20:26.645 "subnqn": "nqn.2016-06.io.spdk:cnode0", 
00:20:26.645 "oacs": { 00:20:26.645 "security": 0, 00:20:26.645 "format": 0, 00:20:26.645 "firmware": 0, 00:20:26.645 "ns_manage": 0 00:20:26.645 }, 00:20:26.645 "multi_ctrlr": true, 00:20:26.645 "ana_reporting": false 00:20:26.645 }, 00:20:26.645 "vs": { 00:20:26.645 "nvme_version": "1.3" 00:20:26.645 }, 00:20:26.645 "ns_data": { 00:20:26.645 "id": 1, 00:20:26.645 "can_share": true 00:20:26.645 } 00:20:26.645 } 00:20:26.645 ], 00:20:26.645 "mp_policy": "active_passive" 00:20:26.645 } 00:20:26.645 } 00:20:26.645 ] 00:20:26.645 17:09:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.645 17:09:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:26.645 17:09:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.645 17:09:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:26.645 17:09:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.645 17:09:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.MmpNFf9waD 00:20:26.645 17:09:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:20:26.645 17:09:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:20:26.645 17:09:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:26.645 17:09:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:20:26.645 17:09:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:26.645 17:09:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:20:26.645 17:09:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:26.645 17:09:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:26.645 rmmod nvme_tcp 00:20:26.903 rmmod nvme_fabrics 00:20:26.903 rmmod nvme_keyring 00:20:26.903 17:09:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:26.903 17:09:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:20:26.903 17:09:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:20:26.903 17:09:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1175130 ']' 00:20:26.903 17:09:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1175130 00:20:26.903 17:09:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 1175130 ']' 00:20:26.903 17:09:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 1175130 00:20:26.903 17:09:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:20:26.903 17:09:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:26.903 17:09:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1175130 00:20:26.903 17:09:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:26.903 17:09:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:26.903 17:09:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1175130' 00:20:26.903 killing process with pid 1175130 00:20:26.903 17:09:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 1175130 00:20:26.903 [2024-07-12 17:09:26.407228] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled 
for removal in v24.09 hit 1 times 00:20:26.903 [2024-07-12 17:09:26.407260] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:26.903 17:09:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 1175130 00:20:27.160 17:09:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:27.160 17:09:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:27.160 17:09:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:27.160 17:09:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:27.160 17:09:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:27.160 17:09:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.160 17:09:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:27.160 17:09:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:29.059 17:09:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:29.059 00:20:29.059 real 0m5.661s 00:20:29.059 user 0m2.137s 00:20:29.059 sys 0m1.904s 00:20:29.059 17:09:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:29.059 17:09:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:29.059 ************************************ 00:20:29.059 END TEST nvmf_async_init 00:20:29.059 ************************************ 00:20:29.059 17:09:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:29.059 17:09:28 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:29.059 17:09:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:29.059 17:09:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:29.059 17:09:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:29.059 ************************************ 00:20:29.059 START TEST dma 00:20:29.059 ************************************ 00:20:29.059 17:09:28 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:29.316 * Looking for test storage... 
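The secure-channel variant exercised at the end of nvmf_async_init (just above) follows the same pattern; in this sketch the PSK value and temp path are the ones from this run, and writing the key into the file is implied by the trace rather than shown explicitly:

  key_path=$(mktemp)                                 # /tmp/tmp.MmpNFf9waD in this run
  echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
  chmod 0600 "$key_path"
  rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key_path"
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
         -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"   # cntlid 3 on port 4421
  rm -f "$key_path"                                  # key file removed during teardown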
00:20:29.316 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:29.316 17:09:28 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:29.316 17:09:28 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:20:29.316 17:09:28 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:29.316 17:09:28 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:29.316 17:09:28 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:29.316 17:09:28 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:29.316 17:09:28 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:29.316 17:09:28 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:29.316 17:09:28 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:29.316 17:09:28 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:29.316 17:09:28 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:29.316 17:09:28 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:29.316 17:09:28 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:29.316 17:09:28 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:29.316 17:09:28 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:29.316 17:09:28 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:29.316 17:09:28 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:29.316 17:09:28 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:29.316 17:09:28 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:29.316 17:09:28 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:29.316 17:09:28 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:29.316 17:09:28 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:29.316 17:09:28 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.316 17:09:28 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.316 17:09:28 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.316 17:09:28 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:20:29.317 17:09:28 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.317 17:09:28 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:20:29.317 17:09:28 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:29.317 17:09:28 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:29.317 17:09:28 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:29.317 17:09:28 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:29.317 17:09:28 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:29.317 17:09:28 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:29.317 17:09:28 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:29.317 17:09:28 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:29.317 17:09:28 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:20:29.317 17:09:28 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:20:29.317 00:20:29.317 real 0m0.068s 00:20:29.317 user 0m0.032s 00:20:29.317 sys 0m0.041s 00:20:29.317 17:09:28 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:29.317 17:09:28 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:20:29.317 ************************************ 00:20:29.317 END TEST dma 00:20:29.317 ************************************ 00:20:29.317 17:09:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:29.317 17:09:28 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:29.317 17:09:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:29.317 17:09:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:29.317 17:09:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:29.317 ************************************ 00:20:29.317 START TEST nvmf_identify 00:20:29.317 ************************************ 00:20:29.317 17:09:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:29.317 * Looking for test storage... 
00:20:29.317 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:29.317 17:09:28 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:29.317 17:09:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:20:29.317 17:09:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:29.317 17:09:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:29.317 17:09:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:29.317 17:09:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:29.317 17:09:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:29.317 17:09:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:29.317 17:09:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:29.317 17:09:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:29.317 17:09:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:29.317 17:09:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:29.317 17:09:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:29.317 17:09:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:29.317 17:09:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:29.317 17:09:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:29.317 17:09:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:29.317 17:09:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:29.317 17:09:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:29.317 17:09:28 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:29.317 17:09:28 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:29.317 17:09:28 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:29.317 17:09:28 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.317 17:09:28 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.317 17:09:28 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.317 17:09:28 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:20:29.317 17:09:28 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.317 17:09:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:20:29.317 17:09:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:29.317 17:09:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:29.317 17:09:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:29.317 17:09:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:29.317 17:09:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:29.317 17:09:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:29.317 17:09:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:29.317 17:09:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:29.317 17:09:28 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:29.317 17:09:28 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:29.317 17:09:28 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:20:29.317 17:09:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:29.317 17:09:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:29.317 17:09:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:29.317 17:09:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:29.317 17:09:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:29.317 17:09:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:29.317 17:09:28 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:29.317 17:09:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:29.317 17:09:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:29.317 17:09:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:29.317 17:09:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:20:29.317 17:09:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:31.842 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:31.842 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:20:31.842 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:31.842 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:31.842 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:31.842 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:31.842 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:31.842 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:20:31.842 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:31.842 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:20:31.842 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:20:31.842 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:20:31.842 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:20:31.842 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:20:31.842 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:20:31.842 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:31.842 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:31.842 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:31.842 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:31.842 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:31.842 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:31.842 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:31.842 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:31.842 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:31.842 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:31.842 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:31.842 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:31.842 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:31.842 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:31.842 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:31.842 17:09:31 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:31.842 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:31.842 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:31.842 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:31.842 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:31.842 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:31.842 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:31.842 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:31.842 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:31.842 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:31.842 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:31.842 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:31.842 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:31.843 Found net devices under 0000:84:00.0: cvl_0_0 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:31.843 Found net devices under 0000:84:00.1: cvl_0_1 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:31.843 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:31.843 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:20:31.843 00:20:31.843 --- 10.0.0.2 ping statistics --- 00:20:31.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:31.843 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:31.843 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:31.843 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:20:31.843 00:20:31.843 --- 10.0.0.1 ping statistics --- 00:20:31.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:31.843 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1177277 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1177277 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 1177277 ']' 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:31.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:31.843 17:09:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:31.843 [2024-07-12 17:09:31.270872] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:20:31.843 [2024-07-12 17:09:31.270953] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:31.843 EAL: No free 2048 kB hugepages reported on node 1 00:20:31.843 [2024-07-12 17:09:31.334213] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:31.843 [2024-07-12 17:09:31.438061] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
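For readers reconstructing the topology the two pings above just verified: the nvmf_tcp_init step keeps the initiator port (cvl_0_1, 10.0.0.1) in the default network namespace and moves the target port (cvl_0_0, 10.0.0.2) into the cvl_0_0_ns_spdk namespace, so with NET_TYPE=phy the traffic between the two addresses crosses the physical link between the two E810 ports rather than loopback. A minimal sketch of the equivalent commands, mirroring what the trace above executed (interface names and addresses as used in this run):

  ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator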
00:20:31.843 [2024-07-12 17:09:31.438119] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:31.843 [2024-07-12 17:09:31.438146] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:31.843 [2024-07-12 17:09:31.438157] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:31.843 [2024-07-12 17:09:31.438167] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:31.843 [2024-07-12 17:09:31.438249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:31.843 [2024-07-12 17:09:31.438355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:31.843 [2024-07-12 17:09:31.438415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:31.843 [2024-07-12 17:09:31.438417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:32.102 17:09:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:32.102 17:09:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:20:32.102 17:09:31 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:32.102 17:09:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.102 17:09:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:32.102 [2024-07-12 17:09:31.581592] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:32.102 17:09:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.102 17:09:31 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:32.102 17:09:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:32.102 17:09:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:32.102 17:09:31 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:32.102 17:09:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.102 17:09:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:32.102 Malloc0 00:20:32.102 17:09:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.102 17:09:31 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:32.102 17:09:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.102 17:09:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:32.102 17:09:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.102 17:09:31 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:32.102 17:09:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.102 17:09:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:32.102 17:09:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.102 17:09:31 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:32.102 17:09:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:20:32.102 17:09:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:32.102 [2024-07-12 17:09:31.663163] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:32.102 17:09:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.102 17:09:31 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:32.102 17:09:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.102 17:09:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:32.102 17:09:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.102 17:09:31 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:32.102 17:09:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.102 17:09:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:32.102 [ 00:20:32.102 { 00:20:32.102 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:32.102 "subtype": "Discovery", 00:20:32.102 "listen_addresses": [ 00:20:32.102 { 00:20:32.102 "trtype": "TCP", 00:20:32.102 "adrfam": "IPv4", 00:20:32.102 "traddr": "10.0.0.2", 00:20:32.102 "trsvcid": "4420" 00:20:32.102 } 00:20:32.102 ], 00:20:32.102 "allow_any_host": true, 00:20:32.102 "hosts": [] 00:20:32.102 }, 00:20:32.102 { 00:20:32.102 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:32.102 "subtype": "NVMe", 00:20:32.102 "listen_addresses": [ 00:20:32.102 { 00:20:32.102 "trtype": "TCP", 00:20:32.102 "adrfam": "IPv4", 00:20:32.102 "traddr": "10.0.0.2", 00:20:32.102 "trsvcid": "4420" 00:20:32.102 } 00:20:32.102 ], 00:20:32.102 "allow_any_host": true, 00:20:32.102 "hosts": [], 00:20:32.102 "serial_number": "SPDK00000000000001", 00:20:32.102 "model_number": "SPDK bdev Controller", 00:20:32.102 "max_namespaces": 32, 00:20:32.102 "min_cntlid": 1, 00:20:32.102 "max_cntlid": 65519, 00:20:32.102 "namespaces": [ 00:20:32.102 { 00:20:32.102 "nsid": 1, 00:20:32.102 "bdev_name": "Malloc0", 00:20:32.102 "name": "Malloc0", 00:20:32.102 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:32.102 "eui64": "ABCDEF0123456789", 00:20:32.102 "uuid": "36be0a36-b32f-4700-9fc1-5a029f4ab3cf" 00:20:32.102 } 00:20:32.102 ] 00:20:32.102 } 00:20:32.102 ] 00:20:32.102 17:09:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.102 17:09:31 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:32.102 [2024-07-12 17:09:31.705578] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
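The nvmf_get_subsystems listing printed above is the result of the short RPC sequence host/identify.sh ran a moment earlier against the nvmf_tgt instance started inside the target namespace. A minimal sketch of that configuration, using the same rpc_cmd wrapper the test scripts use and the exact values from this run:

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192                     # TCP transport, flags as used by host/identify.sh
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0                        # 64 MiB RAM-backed bdev, 512-byte blocks
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_get_subsystems                                         # produces the JSON shown above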
00:20:32.102 [2024-07-12 17:09:31.705623] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1177300 ] 00:20:32.102 EAL: No free 2048 kB hugepages reported on node 1 00:20:32.102 [2024-07-12 17:09:31.741172] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:32.102 [2024-07-12 17:09:31.741239] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:32.102 [2024-07-12 17:09:31.741249] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:32.102 [2024-07-12 17:09:31.741265] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:32.102 [2024-07-12 17:09:31.741276] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:32.102 [2024-07-12 17:09:31.741629] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:32.102 [2024-07-12 17:09:31.741685] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1361540 0 00:20:32.102 [2024-07-12 17:09:31.751757] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:32.102 [2024-07-12 17:09:31.751780] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:32.102 [2024-07-12 17:09:31.751789] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:32.102 [2024-07-12 17:09:31.751795] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:32.102 [2024-07-12 17:09:31.751853] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.102 [2024-07-12 17:09:31.751867] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.102 [2024-07-12 17:09:31.751879] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1361540) 00:20:32.102 [2024-07-12 17:09:31.751909] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:32.102 [2024-07-12 17:09:31.751936] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c13c0, cid 0, qid 0 00:20:32.102 [2024-07-12 17:09:31.759752] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.102 [2024-07-12 17:09:31.759770] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.102 [2024-07-12 17:09:31.759778] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.102 [2024-07-12 17:09:31.759786] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c13c0) on tqpair=0x1361540 00:20:32.102 [2024-07-12 17:09:31.759803] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:32.102 [2024-07-12 17:09:31.759815] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:20:32.102 [2024-07-12 17:09:31.759825] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:20:32.102 [2024-07-12 17:09:31.759849] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.102 [2024-07-12 17:09:31.759858] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.102 [2024-07-12 17:09:31.759864] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1361540) 00:20:32.102 [2024-07-12 17:09:31.759875] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.102 [2024-07-12 17:09:31.759899] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c13c0, cid 0, qid 0 00:20:32.102 [2024-07-12 17:09:31.760078] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.102 [2024-07-12 17:09:31.760092] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.102 [2024-07-12 17:09:31.760099] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.102 [2024-07-12 17:09:31.760114] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c13c0) on tqpair=0x1361540 00:20:32.102 [2024-07-12 17:09:31.760123] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:32.102 [2024-07-12 17:09:31.760136] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:32.102 [2024-07-12 17:09:31.760148] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.102 [2024-07-12 17:09:31.760155] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.102 [2024-07-12 17:09:31.760161] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1361540) 00:20:32.103 [2024-07-12 17:09:31.760171] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.103 [2024-07-12 17:09:31.760192] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c13c0, cid 0, qid 0 00:20:32.103 [2024-07-12 17:09:31.760276] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.103 [2024-07-12 17:09:31.760304] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.103 [2024-07-12 17:09:31.760311] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.103 [2024-07-12 17:09:31.760317] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c13c0) on tqpair=0x1361540 00:20:32.103 [2024-07-12 17:09:31.760325] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:32.103 [2024-07-12 17:09:31.760339] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:32.103 [2024-07-12 17:09:31.760350] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.103 [2024-07-12 17:09:31.760357] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.103 [2024-07-12 17:09:31.760367] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1361540) 00:20:32.103 [2024-07-12 17:09:31.760377] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.103 [2024-07-12 17:09:31.760397] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c13c0, cid 0, qid 0 00:20:32.103 [2024-07-12 17:09:31.760478] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.103 
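The FABRIC CONNECT / PROPERTY GET exchange being traced here is spdk_nvme_identify bringing up an admin queue to the discovery controller at 10.0.0.2:4420. Independent of the SPDK userspace initiator, the same listener could also be queried with the kernel initiator (the nvme-tcp module was loaded earlier in this run); a hypothetical cross-check from the initiator side, not part of this test and assuming nvme-cli is installed:

  nvme discover -t tcp -a 10.0.0.2 -s 4420                            # should list the discovery subsystem and cnode1
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme list                                                           # the Malloc0 namespace should appear as a block device
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1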
[2024-07-12 17:09:31.760491] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.103 [2024-07-12 17:09:31.760497] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.103 [2024-07-12 17:09:31.760504] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c13c0) on tqpair=0x1361540 00:20:32.103 [2024-07-12 17:09:31.760512] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:32.103 [2024-07-12 17:09:31.760528] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.103 [2024-07-12 17:09:31.760536] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.103 [2024-07-12 17:09:31.760542] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1361540) 00:20:32.103 [2024-07-12 17:09:31.760552] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.103 [2024-07-12 17:09:31.760572] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c13c0, cid 0, qid 0 00:20:32.103 [2024-07-12 17:09:31.760649] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.103 [2024-07-12 17:09:31.760662] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.103 [2024-07-12 17:09:31.760668] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.103 [2024-07-12 17:09:31.760674] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c13c0) on tqpair=0x1361540 00:20:32.103 [2024-07-12 17:09:31.760682] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:32.103 [2024-07-12 17:09:31.760690] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:32.103 [2024-07-12 17:09:31.760703] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:32.103 [2024-07-12 17:09:31.760813] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:20:32.103 [2024-07-12 17:09:31.760823] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:32.103 [2024-07-12 17:09:31.760837] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.103 [2024-07-12 17:09:31.760845] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.103 [2024-07-12 17:09:31.760851] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1361540) 00:20:32.103 [2024-07-12 17:09:31.760861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.103 [2024-07-12 17:09:31.760882] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c13c0, cid 0, qid 0 00:20:32.103 [2024-07-12 17:09:31.760983] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.103 [2024-07-12 17:09:31.760996] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.103 [2024-07-12 17:09:31.761002] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:20:32.103 [2024-07-12 17:09:31.761009] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c13c0) on tqpair=0x1361540 00:20:32.103 [2024-07-12 17:09:31.761021] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:32.103 [2024-07-12 17:09:31.761041] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.103 [2024-07-12 17:09:31.761050] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.103 [2024-07-12 17:09:31.761070] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1361540) 00:20:32.103 [2024-07-12 17:09:31.761081] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.103 [2024-07-12 17:09:31.761101] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c13c0, cid 0, qid 0 00:20:32.103 [2024-07-12 17:09:31.761181] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.103 [2024-07-12 17:09:31.761194] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.103 [2024-07-12 17:09:31.761200] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.103 [2024-07-12 17:09:31.761207] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c13c0) on tqpair=0x1361540 00:20:32.103 [2024-07-12 17:09:31.761214] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:32.103 [2024-07-12 17:09:31.761222] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:32.103 [2024-07-12 17:09:31.761236] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:32.103 [2024-07-12 17:09:31.761249] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:32.103 [2024-07-12 17:09:31.761265] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.103 [2024-07-12 17:09:31.761272] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1361540) 00:20:32.103 [2024-07-12 17:09:31.761282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.103 [2024-07-12 17:09:31.761302] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c13c0, cid 0, qid 0 00:20:32.103 [2024-07-12 17:09:31.761442] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:32.103 [2024-07-12 17:09:31.761456] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:32.103 [2024-07-12 17:09:31.761462] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:32.103 [2024-07-12 17:09:31.761477] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1361540): datao=0, datal=4096, cccid=0 00:20:32.103 [2024-07-12 17:09:31.761484] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13c13c0) on tqpair(0x1361540): expected_datao=0, payload_size=4096 00:20:32.103 [2024-07-12 17:09:31.761491] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:20:32.103 [2024-07-12 17:09:31.761508] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:32.103 [2024-07-12 17:09:31.761518] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:32.363 [2024-07-12 17:09:31.801861] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.363 [2024-07-12 17:09:31.801882] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.363 [2024-07-12 17:09:31.801891] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.363 [2024-07-12 17:09:31.801898] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c13c0) on tqpair=0x1361540 00:20:32.363 [2024-07-12 17:09:31.801911] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:32.363 [2024-07-12 17:09:31.801925] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:32.363 [2024-07-12 17:09:31.801934] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:32.363 [2024-07-12 17:09:31.801943] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:32.363 [2024-07-12 17:09:31.801955] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:20:32.363 [2024-07-12 17:09:31.801964] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:32.363 [2024-07-12 17:09:31.801979] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:32.363 [2024-07-12 17:09:31.801992] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.363 [2024-07-12 17:09:31.802000] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.363 [2024-07-12 17:09:31.802006] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1361540) 00:20:32.363 [2024-07-12 17:09:31.802018] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:32.363 [2024-07-12 17:09:31.802041] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c13c0, cid 0, qid 0 00:20:32.363 [2024-07-12 17:09:31.802144] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.363 [2024-07-12 17:09:31.802155] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.363 [2024-07-12 17:09:31.802161] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.364 [2024-07-12 17:09:31.802168] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c13c0) on tqpair=0x1361540 00:20:32.364 [2024-07-12 17:09:31.802189] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.364 [2024-07-12 17:09:31.802197] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.364 [2024-07-12 17:09:31.802203] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1361540) 00:20:32.364 [2024-07-12 17:09:31.802212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:32.364 [2024-07-12 17:09:31.802222] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.364 [2024-07-12 17:09:31.802229] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.364 [2024-07-12 17:09:31.802235] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1361540) 00:20:32.364 [2024-07-12 17:09:31.802243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:32.364 [2024-07-12 17:09:31.802252] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.364 [2024-07-12 17:09:31.802259] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.364 [2024-07-12 17:09:31.802264] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1361540) 00:20:32.364 [2024-07-12 17:09:31.802273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:32.364 [2024-07-12 17:09:31.802282] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.364 [2024-07-12 17:09:31.802288] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.364 [2024-07-12 17:09:31.802294] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1361540) 00:20:32.364 [2024-07-12 17:09:31.802302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:32.364 [2024-07-12 17:09:31.802311] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:20:32.364 [2024-07-12 17:09:31.802330] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:32.364 [2024-07-12 17:09:31.802342] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.364 [2024-07-12 17:09:31.802349] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1361540) 00:20:32.364 [2024-07-12 17:09:31.802359] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.364 [2024-07-12 17:09:31.802394] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c13c0, cid 0, qid 0 00:20:32.364 [2024-07-12 17:09:31.802405] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c1540, cid 1, qid 0 00:20:32.364 [2024-07-12 17:09:31.802412] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c16c0, cid 2, qid 0 00:20:32.364 [2024-07-12 17:09:31.802419] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c1840, cid 3, qid 0 00:20:32.364 [2024-07-12 17:09:31.802426] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c19c0, cid 4, qid 0 00:20:32.364 [2024-07-12 17:09:31.802542] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.364 [2024-07-12 17:09:31.802555] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.364 [2024-07-12 17:09:31.802562] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.364 [2024-07-12 17:09:31.802568] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c19c0) on tqpair=0x1361540 00:20:32.364 [2024-07-12 17:09:31.802578] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:32.364 [2024-07-12 17:09:31.802586] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:32.364 [2024-07-12 17:09:31.802608] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.364 [2024-07-12 17:09:31.802617] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1361540) 00:20:32.364 [2024-07-12 17:09:31.802627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.364 [2024-07-12 17:09:31.802648] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c19c0, cid 4, qid 0 00:20:32.364 [2024-07-12 17:09:31.802791] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:32.364 [2024-07-12 17:09:31.802806] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:32.364 [2024-07-12 17:09:31.802812] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:32.364 [2024-07-12 17:09:31.802818] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1361540): datao=0, datal=4096, cccid=4 00:20:32.364 [2024-07-12 17:09:31.802825] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13c19c0) on tqpair(0x1361540): expected_datao=0, payload_size=4096 00:20:32.364 [2024-07-12 17:09:31.802833] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.364 [2024-07-12 17:09:31.802842] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:32.364 [2024-07-12 17:09:31.802850] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:32.364 [2024-07-12 17:09:31.802861] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.364 [2024-07-12 17:09:31.802870] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.364 [2024-07-12 17:09:31.802876] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.364 [2024-07-12 17:09:31.802883] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c19c0) on tqpair=0x1361540 00:20:32.364 [2024-07-12 17:09:31.802902] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:32.364 [2024-07-12 17:09:31.802940] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.364 [2024-07-12 17:09:31.802951] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1361540) 00:20:32.364 [2024-07-12 17:09:31.802961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.364 [2024-07-12 17:09:31.802973] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.364 [2024-07-12 17:09:31.802980] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.364 [2024-07-12 17:09:31.802986] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1361540) 00:20:32.364 [2024-07-12 17:09:31.802995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:32.364 [2024-07-12 17:09:31.803024] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x13c19c0, cid 4, qid 0 00:20:32.364 [2024-07-12 17:09:31.803036] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c1b40, cid 5, qid 0 00:20:32.364 [2024-07-12 17:09:31.803182] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:32.364 [2024-07-12 17:09:31.803194] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:32.364 [2024-07-12 17:09:31.803200] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:32.364 [2024-07-12 17:09:31.803207] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1361540): datao=0, datal=1024, cccid=4 00:20:32.364 [2024-07-12 17:09:31.803214] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13c19c0) on tqpair(0x1361540): expected_datao=0, payload_size=1024 00:20:32.364 [2024-07-12 17:09:31.803221] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.364 [2024-07-12 17:09:31.803229] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:32.364 [2024-07-12 17:09:31.803236] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:32.364 [2024-07-12 17:09:31.803244] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.364 [2024-07-12 17:09:31.803252] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.364 [2024-07-12 17:09:31.803259] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.364 [2024-07-12 17:09:31.803265] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c1b40) on tqpair=0x1361540 00:20:32.364 [2024-07-12 17:09:31.846765] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.364 [2024-07-12 17:09:31.846784] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.364 [2024-07-12 17:09:31.846791] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.364 [2024-07-12 17:09:31.846798] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c19c0) on tqpair=0x1361540 00:20:32.364 [2024-07-12 17:09:31.846818] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.364 [2024-07-12 17:09:31.846827] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1361540) 00:20:32.364 [2024-07-12 17:09:31.846838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.364 [2024-07-12 17:09:31.846869] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c19c0, cid 4, qid 0 00:20:32.364 [2024-07-12 17:09:31.846992] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:32.364 [2024-07-12 17:09:31.847006] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:32.364 [2024-07-12 17:09:31.847013] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:32.364 [2024-07-12 17:09:31.847019] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1361540): datao=0, datal=3072, cccid=4 00:20:32.364 [2024-07-12 17:09:31.847026] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13c19c0) on tqpair(0x1361540): expected_datao=0, payload_size=3072 00:20:32.364 [2024-07-12 17:09:31.847034] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.364 [2024-07-12 17:09:31.847068] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:32.364 [2024-07-12 17:09:31.847078] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:32.364 [2024-07-12 17:09:31.887827] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.364 [2024-07-12 17:09:31.887857] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.364 [2024-07-12 17:09:31.887864] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.364 [2024-07-12 17:09:31.887871] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c19c0) on tqpair=0x1361540 00:20:32.364 [2024-07-12 17:09:31.887886] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.364 [2024-07-12 17:09:31.887895] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1361540) 00:20:32.364 [2024-07-12 17:09:31.887906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.364 [2024-07-12 17:09:31.887940] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c19c0, cid 4, qid 0 00:20:32.364 [2024-07-12 17:09:31.888049] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:32.364 [2024-07-12 17:09:31.888060] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:32.364 [2024-07-12 17:09:31.888066] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:32.364 [2024-07-12 17:09:31.888072] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1361540): datao=0, datal=8, cccid=4 00:20:32.364 [2024-07-12 17:09:31.888079] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13c19c0) on tqpair(0x1361540): expected_datao=0, payload_size=8 00:20:32.364 [2024-07-12 17:09:31.888086] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.364 [2024-07-12 17:09:31.888095] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:32.364 [2024-07-12 17:09:31.888102] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:32.364 [2024-07-12 17:09:31.931752] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.364 [2024-07-12 17:09:31.931771] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.364 [2024-07-12 17:09:31.931777] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.364 [2024-07-12 17:09:31.931784] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c19c0) on tqpair=0x1361540 00:20:32.364 ===================================================== 00:20:32.364 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:32.364 ===================================================== 00:20:32.364 Controller Capabilities/Features 00:20:32.364 ================================ 00:20:32.364 Vendor ID: 0000 00:20:32.364 Subsystem Vendor ID: 0000 00:20:32.365 Serial Number: .................... 00:20:32.365 Model Number: ........................................ 
00:20:32.365 Firmware Version: 24.09 00:20:32.365 Recommended Arb Burst: 0 00:20:32.365 IEEE OUI Identifier: 00 00 00 00:20:32.365 Multi-path I/O 00:20:32.365 May have multiple subsystem ports: No 00:20:32.365 May have multiple controllers: No 00:20:32.365 Associated with SR-IOV VF: No 00:20:32.365 Max Data Transfer Size: 131072 00:20:32.365 Max Number of Namespaces: 0 00:20:32.365 Max Number of I/O Queues: 1024 00:20:32.365 NVMe Specification Version (VS): 1.3 00:20:32.365 NVMe Specification Version (Identify): 1.3 00:20:32.365 Maximum Queue Entries: 128 00:20:32.365 Contiguous Queues Required: Yes 00:20:32.365 Arbitration Mechanisms Supported 00:20:32.365 Weighted Round Robin: Not Supported 00:20:32.365 Vendor Specific: Not Supported 00:20:32.365 Reset Timeout: 15000 ms 00:20:32.365 Doorbell Stride: 4 bytes 00:20:32.365 NVM Subsystem Reset: Not Supported 00:20:32.365 Command Sets Supported 00:20:32.365 NVM Command Set: Supported 00:20:32.365 Boot Partition: Not Supported 00:20:32.365 Memory Page Size Minimum: 4096 bytes 00:20:32.365 Memory Page Size Maximum: 4096 bytes 00:20:32.365 Persistent Memory Region: Not Supported 00:20:32.365 Optional Asynchronous Events Supported 00:20:32.365 Namespace Attribute Notices: Not Supported 00:20:32.365 Firmware Activation Notices: Not Supported 00:20:32.365 ANA Change Notices: Not Supported 00:20:32.365 PLE Aggregate Log Change Notices: Not Supported 00:20:32.365 LBA Status Info Alert Notices: Not Supported 00:20:32.365 EGE Aggregate Log Change Notices: Not Supported 00:20:32.365 Normal NVM Subsystem Shutdown event: Not Supported 00:20:32.365 Zone Descriptor Change Notices: Not Supported 00:20:32.365 Discovery Log Change Notices: Supported 00:20:32.365 Controller Attributes 00:20:32.365 128-bit Host Identifier: Not Supported 00:20:32.365 Non-Operational Permissive Mode: Not Supported 00:20:32.365 NVM Sets: Not Supported 00:20:32.365 Read Recovery Levels: Not Supported 00:20:32.365 Endurance Groups: Not Supported 00:20:32.365 Predictable Latency Mode: Not Supported 00:20:32.365 Traffic Based Keep ALive: Not Supported 00:20:32.365 Namespace Granularity: Not Supported 00:20:32.365 SQ Associations: Not Supported 00:20:32.365 UUID List: Not Supported 00:20:32.365 Multi-Domain Subsystem: Not Supported 00:20:32.365 Fixed Capacity Management: Not Supported 00:20:32.365 Variable Capacity Management: Not Supported 00:20:32.365 Delete Endurance Group: Not Supported 00:20:32.365 Delete NVM Set: Not Supported 00:20:32.365 Extended LBA Formats Supported: Not Supported 00:20:32.365 Flexible Data Placement Supported: Not Supported 00:20:32.365 00:20:32.365 Controller Memory Buffer Support 00:20:32.365 ================================ 00:20:32.365 Supported: No 00:20:32.365 00:20:32.365 Persistent Memory Region Support 00:20:32.365 ================================ 00:20:32.365 Supported: No 00:20:32.365 00:20:32.365 Admin Command Set Attributes 00:20:32.365 ============================ 00:20:32.365 Security Send/Receive: Not Supported 00:20:32.365 Format NVM: Not Supported 00:20:32.365 Firmware Activate/Download: Not Supported 00:20:32.365 Namespace Management: Not Supported 00:20:32.365 Device Self-Test: Not Supported 00:20:32.365 Directives: Not Supported 00:20:32.365 NVMe-MI: Not Supported 00:20:32.365 Virtualization Management: Not Supported 00:20:32.365 Doorbell Buffer Config: Not Supported 00:20:32.365 Get LBA Status Capability: Not Supported 00:20:32.365 Command & Feature Lockdown Capability: Not Supported 00:20:32.365 Abort Command Limit: 1 00:20:32.365 Async 
Event Request Limit: 4 00:20:32.365 Number of Firmware Slots: N/A 00:20:32.365 Firmware Slot 1 Read-Only: N/A 00:20:32.365 Firmware Activation Without Reset: N/A 00:20:32.365 Multiple Update Detection Support: N/A 00:20:32.365 Firmware Update Granularity: No Information Provided 00:20:32.365 Per-Namespace SMART Log: No 00:20:32.365 Asymmetric Namespace Access Log Page: Not Supported 00:20:32.365 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:32.365 Command Effects Log Page: Not Supported 00:20:32.365 Get Log Page Extended Data: Supported 00:20:32.365 Telemetry Log Pages: Not Supported 00:20:32.365 Persistent Event Log Pages: Not Supported 00:20:32.365 Supported Log Pages Log Page: May Support 00:20:32.365 Commands Supported & Effects Log Page: Not Supported 00:20:32.365 Feature Identifiers & Effects Log Page:May Support 00:20:32.365 NVMe-MI Commands & Effects Log Page: May Support 00:20:32.365 Data Area 4 for Telemetry Log: Not Supported 00:20:32.365 Error Log Page Entries Supported: 128 00:20:32.365 Keep Alive: Not Supported 00:20:32.365 00:20:32.365 NVM Command Set Attributes 00:20:32.365 ========================== 00:20:32.365 Submission Queue Entry Size 00:20:32.365 Max: 1 00:20:32.365 Min: 1 00:20:32.365 Completion Queue Entry Size 00:20:32.365 Max: 1 00:20:32.365 Min: 1 00:20:32.365 Number of Namespaces: 0 00:20:32.365 Compare Command: Not Supported 00:20:32.365 Write Uncorrectable Command: Not Supported 00:20:32.365 Dataset Management Command: Not Supported 00:20:32.365 Write Zeroes Command: Not Supported 00:20:32.365 Set Features Save Field: Not Supported 00:20:32.365 Reservations: Not Supported 00:20:32.365 Timestamp: Not Supported 00:20:32.365 Copy: Not Supported 00:20:32.365 Volatile Write Cache: Not Present 00:20:32.365 Atomic Write Unit (Normal): 1 00:20:32.365 Atomic Write Unit (PFail): 1 00:20:32.365 Atomic Compare & Write Unit: 1 00:20:32.365 Fused Compare & Write: Supported 00:20:32.365 Scatter-Gather List 00:20:32.365 SGL Command Set: Supported 00:20:32.365 SGL Keyed: Supported 00:20:32.365 SGL Bit Bucket Descriptor: Not Supported 00:20:32.365 SGL Metadata Pointer: Not Supported 00:20:32.365 Oversized SGL: Not Supported 00:20:32.365 SGL Metadata Address: Not Supported 00:20:32.365 SGL Offset: Supported 00:20:32.365 Transport SGL Data Block: Not Supported 00:20:32.365 Replay Protected Memory Block: Not Supported 00:20:32.365 00:20:32.365 Firmware Slot Information 00:20:32.365 ========================= 00:20:32.365 Active slot: 0 00:20:32.365 00:20:32.365 00:20:32.365 Error Log 00:20:32.365 ========= 00:20:32.365 00:20:32.365 Active Namespaces 00:20:32.365 ================= 00:20:32.365 Discovery Log Page 00:20:32.365 ================== 00:20:32.365 Generation Counter: 2 00:20:32.365 Number of Records: 2 00:20:32.365 Record Format: 0 00:20:32.365 00:20:32.365 Discovery Log Entry 0 00:20:32.365 ---------------------- 00:20:32.365 Transport Type: 3 (TCP) 00:20:32.365 Address Family: 1 (IPv4) 00:20:32.365 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:32.365 Entry Flags: 00:20:32.365 Duplicate Returned Information: 1 00:20:32.365 Explicit Persistent Connection Support for Discovery: 1 00:20:32.365 Transport Requirements: 00:20:32.365 Secure Channel: Not Required 00:20:32.365 Port ID: 0 (0x0000) 00:20:32.365 Controller ID: 65535 (0xffff) 00:20:32.365 Admin Max SQ Size: 128 00:20:32.365 Transport Service Identifier: 4420 00:20:32.365 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:32.365 Transport Address: 10.0.0.2 00:20:32.365 
Discovery Log Entry 1 00:20:32.365 ---------------------- 00:20:32.365 Transport Type: 3 (TCP) 00:20:32.365 Address Family: 1 (IPv4) 00:20:32.365 Subsystem Type: 2 (NVM Subsystem) 00:20:32.365 Entry Flags: 00:20:32.365 Duplicate Returned Information: 0 00:20:32.365 Explicit Persistent Connection Support for Discovery: 0 00:20:32.365 Transport Requirements: 00:20:32.365 Secure Channel: Not Required 00:20:32.365 Port ID: 0 (0x0000) 00:20:32.365 Controller ID: 65535 (0xffff) 00:20:32.365 Admin Max SQ Size: 128 00:20:32.365 Transport Service Identifier: 4420 00:20:32.365 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:32.365 Transport Address: 10.0.0.2 [2024-07-12 17:09:31.931898] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:20:32.365 [2024-07-12 17:09:31.931920] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c13c0) on tqpair=0x1361540 00:20:32.365 [2024-07-12 17:09:31.931933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.365 [2024-07-12 17:09:31.931942] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c1540) on tqpair=0x1361540 00:20:32.365 [2024-07-12 17:09:31.931949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.365 [2024-07-12 17:09:31.931957] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c16c0) on tqpair=0x1361540 00:20:32.365 [2024-07-12 17:09:31.931965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.365 [2024-07-12 17:09:31.931973] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c1840) on tqpair=0x1361540 00:20:32.365 [2024-07-12 17:09:31.931981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.365 [2024-07-12 17:09:31.931998] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.365 [2024-07-12 17:09:31.932007] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.365 [2024-07-12 17:09:31.932028] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1361540) 00:20:32.365 [2024-07-12 17:09:31.932039] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.366 [2024-07-12 17:09:31.932063] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c1840, cid 3, qid 0 00:20:32.366 [2024-07-12 17:09:31.932145] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.366 [2024-07-12 17:09:31.932157] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.366 [2024-07-12 17:09:31.932163] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.366 [2024-07-12 17:09:31.932169] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c1840) on tqpair=0x1361540 00:20:32.366 [2024-07-12 17:09:31.932182] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.366 [2024-07-12 17:09:31.932189] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.366 [2024-07-12 17:09:31.932195] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1361540) 00:20:32.366 [2024-07-12 
17:09:31.932209] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.366 [2024-07-12 17:09:31.932235] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c1840, cid 3, qid 0 00:20:32.366 [2024-07-12 17:09:31.932327] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.366 [2024-07-12 17:09:31.932337] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.366 [2024-07-12 17:09:31.932343] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.366 [2024-07-12 17:09:31.932350] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c1840) on tqpair=0x1361540 00:20:32.366 [2024-07-12 17:09:31.932358] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:20:32.366 [2024-07-12 17:09:31.932367] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:20:32.366 [2024-07-12 17:09:31.932382] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.366 [2024-07-12 17:09:31.932391] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.366 [2024-07-12 17:09:31.932397] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1361540) 00:20:32.366 [2024-07-12 17:09:31.932406] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.366 [2024-07-12 17:09:31.932427] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c1840, cid 3, qid 0 00:20:32.366 [2024-07-12 17:09:31.932521] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.366 [2024-07-12 17:09:31.932531] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.366 [2024-07-12 17:09:31.932537] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.366 [2024-07-12 17:09:31.932544] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c1840) on tqpair=0x1361540 00:20:32.366 [2024-07-12 17:09:31.932559] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.366 [2024-07-12 17:09:31.932568] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.366 [2024-07-12 17:09:31.932574] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1361540) 00:20:32.366 [2024-07-12 17:09:31.932583] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.366 [2024-07-12 17:09:31.932603] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c1840, cid 3, qid 0 00:20:32.366 [2024-07-12 17:09:31.932693] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.366 [2024-07-12 17:09:31.932706] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.366 [2024-07-12 17:09:31.932727] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.366 [2024-07-12 17:09:31.932734] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c1840) on tqpair=0x1361540 00:20:32.366 [2024-07-12 17:09:31.932760] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.366 [2024-07-12 17:09:31.932770] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.366 [2024-07-12 17:09:31.932776] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1361540) 00:20:32.366 [2024-07-12 17:09:31.932786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.366 [2024-07-12 17:09:31.932807] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c1840, cid 3, qid 0 00:20:32.366 [2024-07-12 17:09:31.932893] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.366 [2024-07-12 17:09:31.932905] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.366 [2024-07-12 17:09:31.932911] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.366 [2024-07-12 17:09:31.932918] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c1840) on tqpair=0x1361540 00:20:32.366 [2024-07-12 17:09:31.932933] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.366 [2024-07-12 17:09:31.932946] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.366 [2024-07-12 17:09:31.932952] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1361540) 00:20:32.366 [2024-07-12 17:09:31.932962] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.366 [2024-07-12 17:09:31.932983] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c1840, cid 3, qid 0 00:20:32.366 [2024-07-12 17:09:31.933077] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.366 [2024-07-12 17:09:31.933089] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.366 [2024-07-12 17:09:31.933095] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.366 [2024-07-12 17:09:31.933102] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c1840) on tqpair=0x1361540 00:20:32.366 [2024-07-12 17:09:31.933117] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.366 [2024-07-12 17:09:31.933125] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.366 [2024-07-12 17:09:31.933131] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1361540) 00:20:32.366 [2024-07-12 17:09:31.933141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.366 [2024-07-12 17:09:31.933161] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c1840, cid 3, qid 0 00:20:32.366 [2024-07-12 17:09:31.933258] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.366 [2024-07-12 17:09:31.933271] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.366 [2024-07-12 17:09:31.933277] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.366 [2024-07-12 17:09:31.933284] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c1840) on tqpair=0x1361540 00:20:32.366 [2024-07-12 17:09:31.933298] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.366 [2024-07-12 17:09:31.933307] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.366 [2024-07-12 17:09:31.933313] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1361540) 00:20:32.366 [2024-07-12 17:09:31.933323] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.366 [2024-07-12 17:09:31.933342] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c1840, cid 3, qid 0 00:20:32.366 [2024-07-12 17:09:31.933425] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.366 [2024-07-12 17:09:31.933437] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.366 [2024-07-12 17:09:31.933444] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.366 [2024-07-12 17:09:31.933450] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c1840) on tqpair=0x1361540 00:20:32.366 [2024-07-12 17:09:31.933465] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.366 [2024-07-12 17:09:31.933474] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.366 [2024-07-12 17:09:31.933479] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1361540) 00:20:32.366 [2024-07-12 17:09:31.933489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.366 [2024-07-12 17:09:31.933508] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c1840, cid 3, qid 0 00:20:32.366 [2024-07-12 17:09:31.933592] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.366 [2024-07-12 17:09:31.933602] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.366 [2024-07-12 17:09:31.933609] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.366 [2024-07-12 17:09:31.933615] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c1840) on tqpair=0x1361540 00:20:32.366 [2024-07-12 17:09:31.933630] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.366 [2024-07-12 17:09:31.933638] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.366 [2024-07-12 17:09:31.933648] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1361540) 00:20:32.366 [2024-07-12 17:09:31.933659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.366 [2024-07-12 17:09:31.933678] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c1840, cid 3, qid 0 00:20:32.366 [2024-07-12 17:09:31.933768] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.366 [2024-07-12 17:09:31.933781] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.366 [2024-07-12 17:09:31.933787] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.366 [2024-07-12 17:09:31.933794] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c1840) on tqpair=0x1361540 00:20:32.366 [2024-07-12 17:09:31.933810] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.366 [2024-07-12 17:09:31.933819] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.366 [2024-07-12 17:09:31.933825] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1361540) 00:20:32.366 [2024-07-12 17:09:31.933835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.366 [2024-07-12 17:09:31.933856] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c1840, cid 3, qid 0 00:20:32.366 
[2024-07-12 17:09:31.933955] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.366 [2024-07-12 17:09:31.933967] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.366 [2024-07-12 17:09:31.933973] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.366 [2024-07-12 17:09:31.933979] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c1840) on tqpair=0x1361540 00:20:32.366 [2024-07-12 17:09:31.933995] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.366 [2024-07-12 17:09:31.934004] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.366 [2024-07-12 17:09:31.934010] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1361540) 00:20:32.366 [2024-07-12 17:09:31.934020] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.366 [2024-07-12 17:09:31.934039] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c1840, cid 3, qid 0 00:20:32.366 [2024-07-12 17:09:31.934130] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.366 [2024-07-12 17:09:31.934143] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.366 [2024-07-12 17:09:31.934149] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.366 [2024-07-12 17:09:31.934155] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c1840) on tqpair=0x1361540 00:20:32.366 [2024-07-12 17:09:31.934171] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.366 [2024-07-12 17:09:31.934179] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.366 [2024-07-12 17:09:31.934185] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1361540) 00:20:32.366 [2024-07-12 17:09:31.934195] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.366 [2024-07-12 17:09:31.934215] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c1840, cid 3, qid 0 00:20:32.366 [2024-07-12 17:09:31.934333] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.367 [2024-07-12 17:09:31.934344] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.367 [2024-07-12 17:09:31.934350] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.367 [2024-07-12 17:09:31.934356] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c1840) on tqpair=0x1361540 00:20:32.367 [2024-07-12 17:09:31.934371] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.367 [2024-07-12 17:09:31.934380] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.367 [2024-07-12 17:09:31.934386] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1361540) 00:20:32.367 [2024-07-12 17:09:31.934399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.367 [2024-07-12 17:09:31.934420] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c1840, cid 3, qid 0 00:20:32.367 [2024-07-12 17:09:31.934501] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.367 [2024-07-12 17:09:31.934514] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:20:32.367 [2024-07-12 17:09:31.934520] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.367 [2024-07-12 17:09:31.934526] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c1840) on tqpair=0x1361540 00:20:32.367 [2024-07-12 17:09:31.934541] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.367 [2024-07-12 17:09:31.934550] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.367 [2024-07-12 17:09:31.934556] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1361540) 00:20:32.367 [2024-07-12 17:09:31.934565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.367 [2024-07-12 17:09:31.934585] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c1840, cid 3, qid 0 00:20:32.367 [2024-07-12 17:09:31.934704] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.367 [2024-07-12 17:09:31.934729] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.367 [2024-07-12 17:09:31.934742] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.367 [2024-07-12 17:09:31.934750] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c1840) on tqpair=0x1361540 00:20:32.367 [2024-07-12 17:09:31.934766] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.367 [2024-07-12 17:09:31.934775] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.367 [2024-07-12 17:09:31.934782] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1361540) 00:20:32.367 [2024-07-12 17:09:31.934792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.367 [2024-07-12 17:09:31.934812] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c1840, cid 3, qid 0 00:20:32.367 [2024-07-12 17:09:31.934905] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.367 [2024-07-12 17:09:31.934916] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.367 [2024-07-12 17:09:31.934923] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.367 [2024-07-12 17:09:31.934929] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c1840) on tqpair=0x1361540 00:20:32.367 [2024-07-12 17:09:31.934945] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.367 [2024-07-12 17:09:31.934954] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.367 [2024-07-12 17:09:31.934960] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1361540) 00:20:32.367 [2024-07-12 17:09:31.934970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.367 [2024-07-12 17:09:31.934989] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c1840, cid 3, qid 0 00:20:32.367 [2024-07-12 17:09:31.935155] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.367 [2024-07-12 17:09:31.935169] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.367 [2024-07-12 17:09:31.935175] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.367 [2024-07-12 17:09:31.935181] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x13c1840) on tqpair=0x1361540 00:20:32.367 [2024-07-12 17:09:31.938751] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.367 [2024-07-12 17:09:31.938767] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.367 [2024-07-12 17:09:31.938773] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1361540) 00:20:32.367 [2024-07-12 17:09:31.938784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.367 [2024-07-12 17:09:31.938810] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c1840, cid 3, qid 0 00:20:32.367 [2024-07-12 17:09:31.938950] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.367 [2024-07-12 17:09:31.938972] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.367 [2024-07-12 17:09:31.938978] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.367 [2024-07-12 17:09:31.938985] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c1840) on tqpair=0x1361540 00:20:32.367 [2024-07-12 17:09:31.938998] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:20:32.367 00:20:32.367 17:09:31 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:32.367 [2024-07-12 17:09:31.971484] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:20:32.367 [2024-07-12 17:09:31.971530] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1177417 ] 00:20:32.367 EAL: No free 2048 kB hugepages reported on node 1 00:20:32.367 [2024-07-12 17:09:32.005540] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:20:32.367 [2024-07-12 17:09:32.005592] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:32.367 [2024-07-12 17:09:32.005602] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:32.367 [2024-07-12 17:09:32.005615] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:32.367 [2024-07-12 17:09:32.005624] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:32.367 [2024-07-12 17:09:32.008786] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:20:32.367 [2024-07-12 17:09:32.008829] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x22e7540 0 00:20:32.367 [2024-07-12 17:09:32.016754] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:32.367 [2024-07-12 17:09:32.016774] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:32.367 [2024-07-12 17:09:32.016782] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:32.367 [2024-07-12 17:09:32.016788] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:32.367 [2024-07-12 17:09:32.016834] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.367 [2024-07-12 17:09:32.016845] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.367 [2024-07-12 17:09:32.016852] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22e7540) 00:20:32.367 [2024-07-12 17:09:32.016877] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:32.367 [2024-07-12 17:09:32.016903] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23473c0, cid 0, qid 0 00:20:32.367 [2024-07-12 17:09:32.024750] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.367 [2024-07-12 17:09:32.024768] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.367 [2024-07-12 17:09:32.024782] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.367 [2024-07-12 17:09:32.024789] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23473c0) on tqpair=0x22e7540 00:20:32.367 [2024-07-12 17:09:32.024803] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:32.367 [2024-07-12 17:09:32.024817] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:20:32.367 [2024-07-12 17:09:32.024827] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:20:32.367 [2024-07-12 17:09:32.024844] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.367 [2024-07-12 17:09:32.024853] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.367 [2024-07-12 17:09:32.024859] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22e7540) 00:20:32.367 [2024-07-12 17:09:32.024870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.367 [2024-07-12 17:09:32.024894] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23473c0, cid 0, qid 0 00:20:32.367 [2024-07-12 17:09:32.025049] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.367 [2024-07-12 17:09:32.025061] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.367 [2024-07-12 17:09:32.025067] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.367 [2024-07-12 17:09:32.025074] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23473c0) on tqpair=0x22e7540 00:20:32.367 [2024-07-12 17:09:32.025096] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:20:32.367 [2024-07-12 17:09:32.025110] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:20:32.367 [2024-07-12 17:09:32.025121] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.367 [2024-07-12 17:09:32.025128] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.367 [2024-07-12 17:09:32.025134] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22e7540) 00:20:32.367 [2024-07-12 17:09:32.025144] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.367 [2024-07-12 17:09:32.025164] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23473c0, cid 0, qid 0 00:20:32.367 [2024-07-12 17:09:32.025285] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.367 [2024-07-12 17:09:32.025298] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.367 [2024-07-12 17:09:32.025304] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.367 [2024-07-12 17:09:32.025311] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23473c0) on tqpair=0x22e7540 00:20:32.367 [2024-07-12 17:09:32.025319] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:20:32.367 [2024-07-12 17:09:32.025332] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:20:32.367 [2024-07-12 17:09:32.025343] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.367 [2024-07-12 17:09:32.025350] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.367 [2024-07-12 17:09:32.025355] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22e7540) 00:20:32.367 [2024-07-12 17:09:32.025365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.367 [2024-07-12 17:09:32.025385] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23473c0, cid 0, qid 0 00:20:32.367 [2024-07-12 17:09:32.025470] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.367 [2024-07-12 17:09:32.025481] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.367 [2024-07-12 17:09:32.025487] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.367 [2024-07-12 17:09:32.025493] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23473c0) on tqpair=0x22e7540 00:20:32.367 [2024-07-12 17:09:32.025501] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:32.367 [2024-07-12 17:09:32.025520] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.368 [2024-07-12 17:09:32.025529] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.368 [2024-07-12 17:09:32.025535] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22e7540) 00:20:32.368 [2024-07-12 17:09:32.025545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.368 [2024-07-12 17:09:32.025564] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23473c0, cid 0, qid 0 00:20:32.368 [2024-07-12 17:09:32.025660] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.368 [2024-07-12 17:09:32.025671] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.368 [2024-07-12 17:09:32.025677] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.368 [2024-07-12 17:09:32.025683] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23473c0) on tqpair=0x22e7540 00:20:32.368 [2024-07-12 17:09:32.025690] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:20:32.368 [2024-07-12 17:09:32.025698] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:20:32.368 [2024-07-12 17:09:32.025710] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:32.368 [2024-07-12 17:09:32.025835] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:20:32.368 [2024-07-12 17:09:32.025843] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:32.368 [2024-07-12 17:09:32.025855] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.368 [2024-07-12 17:09:32.025863] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.368 [2024-07-12 17:09:32.025869] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22e7540) 00:20:32.368 [2024-07-12 17:09:32.025880] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.368 [2024-07-12 17:09:32.025901] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23473c0, cid 0, qid 0 00:20:32.368 [2024-07-12 17:09:32.026043] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.368 [2024-07-12 17:09:32.026055] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.368 [2024-07-12 17:09:32.026061] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.368 [2024-07-12 17:09:32.026068] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23473c0) on tqpair=0x22e7540 00:20:32.368 [2024-07-12 17:09:32.026075] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:32.368 [2024-07-12 17:09:32.026106] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.368 [2024-07-12 17:09:32.026114] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.368 [2024-07-12 17:09:32.026120] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22e7540) 00:20:32.368 [2024-07-12 17:09:32.026130] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.368 [2024-07-12 17:09:32.026149] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23473c0, cid 0, qid 0 00:20:32.368 [2024-07-12 17:09:32.026247] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.368 [2024-07-12 17:09:32.026258] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.368 [2024-07-12 17:09:32.026264] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.368 [2024-07-12 17:09:32.026270] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23473c0) on tqpair=0x22e7540 00:20:32.368 [2024-07-12 17:09:32.026277] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:32.368 [2024-07-12 17:09:32.026288] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:20:32.368 [2024-07-12 17:09:32.026301] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to 
identify controller (no timeout) 00:20:32.368 [2024-07-12 17:09:32.026317] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:20:32.368 [2024-07-12 17:09:32.026330] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.368 [2024-07-12 17:09:32.026337] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22e7540) 00:20:32.368 [2024-07-12 17:09:32.026347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.368 [2024-07-12 17:09:32.026366] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23473c0, cid 0, qid 0 00:20:32.368 [2024-07-12 17:09:32.026503] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:32.368 [2024-07-12 17:09:32.026518] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:32.368 [2024-07-12 17:09:32.026524] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:32.368 [2024-07-12 17:09:32.026530] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22e7540): datao=0, datal=4096, cccid=0 00:20:32.368 [2024-07-12 17:09:32.026537] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23473c0) on tqpair(0x22e7540): expected_datao=0, payload_size=4096 00:20:32.368 [2024-07-12 17:09:32.026544] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.368 [2024-07-12 17:09:32.026554] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:32.368 [2024-07-12 17:09:32.026560] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:32.629 [2024-07-12 17:09:32.070750] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.629 [2024-07-12 17:09:32.070770] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.629 [2024-07-12 17:09:32.070777] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.629 [2024-07-12 17:09:32.070784] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23473c0) on tqpair=0x22e7540 00:20:32.629 [2024-07-12 17:09:32.070795] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:20:32.629 [2024-07-12 17:09:32.070808] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:20:32.629 [2024-07-12 17:09:32.070815] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:20:32.629 [2024-07-12 17:09:32.070822] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:20:32.629 [2024-07-12 17:09:32.070829] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:20:32.629 [2024-07-12 17:09:32.070836] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:20:32.629 [2024-07-12 17:09:32.070850] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:20:32.629 [2024-07-12 17:09:32.070862] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.630 [2024-07-12 17:09:32.070869] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.630 
[2024-07-12 17:09:32.070875] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22e7540) 00:20:32.630 [2024-07-12 17:09:32.070886] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:32.630 [2024-07-12 17:09:32.070909] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23473c0, cid 0, qid 0 00:20:32.630 [2024-07-12 17:09:32.071002] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.630 [2024-07-12 17:09:32.071015] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.630 [2024-07-12 17:09:32.071025] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.630 [2024-07-12 17:09:32.071032] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23473c0) on tqpair=0x22e7540 00:20:32.630 [2024-07-12 17:09:32.071042] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.630 [2024-07-12 17:09:32.071048] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.630 [2024-07-12 17:09:32.071054] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22e7540) 00:20:32.630 [2024-07-12 17:09:32.071064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:32.630 [2024-07-12 17:09:32.071073] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.630 [2024-07-12 17:09:32.071079] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.630 [2024-07-12 17:09:32.071085] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x22e7540) 00:20:32.630 [2024-07-12 17:09:32.071093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:32.630 [2024-07-12 17:09:32.071102] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.630 [2024-07-12 17:09:32.071108] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.630 [2024-07-12 17:09:32.071114] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x22e7540) 00:20:32.630 [2024-07-12 17:09:32.071122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:32.630 [2024-07-12 17:09:32.071131] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.630 [2024-07-12 17:09:32.071137] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.630 [2024-07-12 17:09:32.071143] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22e7540) 00:20:32.630 [2024-07-12 17:09:32.071151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:32.630 [2024-07-12 17:09:32.071159] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:32.630 [2024-07-12 17:09:32.071177] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:32.630 [2024-07-12 17:09:32.071188] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.630 [2024-07-12 17:09:32.071195] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22e7540) 00:20:32.630 [2024-07-12 17:09:32.071204] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.630 [2024-07-12 17:09:32.071226] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23473c0, cid 0, qid 0 00:20:32.630 [2024-07-12 17:09:32.071236] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2347540, cid 1, qid 0 00:20:32.630 [2024-07-12 17:09:32.071244] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23476c0, cid 2, qid 0 00:20:32.630 [2024-07-12 17:09:32.071251] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2347840, cid 3, qid 0 00:20:32.630 [2024-07-12 17:09:32.071258] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23479c0, cid 4, qid 0 00:20:32.630 [2024-07-12 17:09:32.071437] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.630 [2024-07-12 17:09:32.071450] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.630 [2024-07-12 17:09:32.071457] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.630 [2024-07-12 17:09:32.071463] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23479c0) on tqpair=0x22e7540 00:20:32.630 [2024-07-12 17:09:32.071470] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:20:32.630 [2024-07-12 17:09:32.071482] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:32.630 [2024-07-12 17:09:32.071496] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:20:32.630 [2024-07-12 17:09:32.071508] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:32.630 [2024-07-12 17:09:32.071518] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.630 [2024-07-12 17:09:32.071524] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.630 [2024-07-12 17:09:32.071530] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22e7540) 00:20:32.630 [2024-07-12 17:09:32.071540] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:32.630 [2024-07-12 17:09:32.071560] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23479c0, cid 4, qid 0 00:20:32.630 [2024-07-12 17:09:32.071699] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.630 [2024-07-12 17:09:32.071710] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.630 [2024-07-12 17:09:32.071716] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.630 [2024-07-12 17:09:32.071723] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23479c0) on tqpair=0x22e7540 00:20:32.630 [2024-07-12 17:09:32.071808] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:20:32.630 [2024-07-12 17:09:32.071830] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:32.630 [2024-07-12 17:09:32.071845] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.630 [2024-07-12 17:09:32.071852] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22e7540) 00:20:32.630 [2024-07-12 17:09:32.071862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.630 [2024-07-12 17:09:32.071884] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23479c0, cid 4, qid 0 00:20:32.630 [2024-07-12 17:09:32.071992] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:32.630 [2024-07-12 17:09:32.072007] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:32.630 [2024-07-12 17:09:32.072014] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:32.630 [2024-07-12 17:09:32.072020] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22e7540): datao=0, datal=4096, cccid=4 00:20:32.630 [2024-07-12 17:09:32.072027] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23479c0) on tqpair(0x22e7540): expected_datao=0, payload_size=4096 00:20:32.630 [2024-07-12 17:09:32.072034] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.630 [2024-07-12 17:09:32.072067] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:32.630 [2024-07-12 17:09:32.072076] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:32.630 [2024-07-12 17:09:32.112893] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.630 [2024-07-12 17:09:32.112911] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.630 [2024-07-12 17:09:32.112918] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.630 [2024-07-12 17:09:32.112925] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23479c0) on tqpair=0x22e7540 00:20:32.630 [2024-07-12 17:09:32.112942] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:20:32.630 [2024-07-12 17:09:32.112960] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:20:32.630 [2024-07-12 17:09:32.112977] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:20:32.630 [2024-07-12 17:09:32.112994] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.630 [2024-07-12 17:09:32.113002] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22e7540) 00:20:32.630 [2024-07-12 17:09:32.113013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.630 [2024-07-12 17:09:32.113037] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23479c0, cid 4, qid 0 00:20:32.630 [2024-07-12 17:09:32.113165] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:32.630 [2024-07-12 17:09:32.113180] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:32.630 [2024-07-12 17:09:32.113186] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:32.630 [2024-07-12 17:09:32.113192] 
nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22e7540): datao=0, datal=4096, cccid=4 00:20:32.630 [2024-07-12 17:09:32.113199] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23479c0) on tqpair(0x22e7540): expected_datao=0, payload_size=4096 00:20:32.630 [2024-07-12 17:09:32.113206] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.630 [2024-07-12 17:09:32.113223] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:32.630 [2024-07-12 17:09:32.113231] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:32.630 [2024-07-12 17:09:32.156747] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.630 [2024-07-12 17:09:32.156765] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.630 [2024-07-12 17:09:32.156772] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.630 [2024-07-12 17:09:32.156778] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23479c0) on tqpair=0x22e7540 00:20:32.630 [2024-07-12 17:09:32.156802] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:32.630 [2024-07-12 17:09:32.156822] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:32.630 [2024-07-12 17:09:32.156836] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.630 [2024-07-12 17:09:32.156844] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22e7540) 00:20:32.630 [2024-07-12 17:09:32.156855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.630 [2024-07-12 17:09:32.156877] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23479c0, cid 4, qid 0 00:20:32.630 [2024-07-12 17:09:32.157008] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:32.630 [2024-07-12 17:09:32.157037] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:32.630 [2024-07-12 17:09:32.157044] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:32.630 [2024-07-12 17:09:32.157049] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22e7540): datao=0, datal=4096, cccid=4 00:20:32.630 [2024-07-12 17:09:32.157057] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23479c0) on tqpair(0x22e7540): expected_datao=0, payload_size=4096 00:20:32.630 [2024-07-12 17:09:32.157064] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.630 [2024-07-12 17:09:32.157081] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:32.630 [2024-07-12 17:09:32.157089] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:32.630 [2024-07-12 17:09:32.197881] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.630 [2024-07-12 17:09:32.197899] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.630 [2024-07-12 17:09:32.197906] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.630 [2024-07-12 17:09:32.197913] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23479c0) on tqpair=0x22e7540 00:20:32.631 [2024-07-12 17:09:32.197927] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:32.631 [2024-07-12 17:09:32.197947] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:32.631 [2024-07-12 17:09:32.197965] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:32.631 [2024-07-12 17:09:32.197977] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:20:32.631 [2024-07-12 17:09:32.197986] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:32.631 [2024-07-12 17:09:32.197994] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:32.631 [2024-07-12 17:09:32.198003] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:32.631 [2024-07-12 17:09:32.198011] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:32.631 [2024-07-12 17:09:32.198019] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:20:32.631 [2024-07-12 17:09:32.198038] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.631 [2024-07-12 17:09:32.198047] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22e7540) 00:20:32.631 [2024-07-12 17:09:32.198058] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.631 [2024-07-12 17:09:32.198069] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.631 [2024-07-12 17:09:32.198076] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.631 [2024-07-12 17:09:32.198098] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x22e7540) 00:20:32.631 [2024-07-12 17:09:32.198107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:32.631 [2024-07-12 17:09:32.198133] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23479c0, cid 4, qid 0 00:20:32.631 [2024-07-12 17:09:32.198145] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2347b40, cid 5, qid 0 00:20:32.631 [2024-07-12 17:09:32.198279] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.631 [2024-07-12 17:09:32.198291] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.631 [2024-07-12 17:09:32.198297] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.631 [2024-07-12 17:09:32.198304] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23479c0) on tqpair=0x22e7540 00:20:32.631 [2024-07-12 17:09:32.198313] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.631 [2024-07-12 17:09:32.198322] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.631 [2024-07-12 17:09:32.198328] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.631 [2024-07-12 17:09:32.198335] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2347b40) on tqpair=0x22e7540 00:20:32.631 [2024-07-12 17:09:32.198350] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.631 [2024-07-12 17:09:32.198358] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x22e7540) 00:20:32.631 [2024-07-12 17:09:32.198368] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.631 [2024-07-12 17:09:32.198388] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2347b40, cid 5, qid 0 00:20:32.631 [2024-07-12 17:09:32.198519] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.631 [2024-07-12 17:09:32.198533] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.631 [2024-07-12 17:09:32.198539] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.631 [2024-07-12 17:09:32.198545] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2347b40) on tqpair=0x22e7540 00:20:32.631 [2024-07-12 17:09:32.198565] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.631 [2024-07-12 17:09:32.198574] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x22e7540) 00:20:32.631 [2024-07-12 17:09:32.198584] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.631 [2024-07-12 17:09:32.198604] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2347b40, cid 5, qid 0 00:20:32.631 [2024-07-12 17:09:32.198734] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.631 [2024-07-12 17:09:32.198755] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.631 [2024-07-12 17:09:32.198762] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.631 [2024-07-12 17:09:32.198769] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2347b40) on tqpair=0x22e7540 00:20:32.631 [2024-07-12 17:09:32.198786] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.631 [2024-07-12 17:09:32.198795] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x22e7540) 00:20:32.631 [2024-07-12 17:09:32.198805] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.631 [2024-07-12 17:09:32.198826] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2347b40, cid 5, qid 0 00:20:32.631 [2024-07-12 17:09:32.198920] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.631 [2024-07-12 17:09:32.198933] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.631 [2024-07-12 17:09:32.198939] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.631 [2024-07-12 17:09:32.198946] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2347b40) on tqpair=0x22e7540 00:20:32.631 [2024-07-12 17:09:32.198970] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.631 [2024-07-12 17:09:32.198981] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x22e7540) 00:20:32.631 [2024-07-12 17:09:32.198991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE 
(02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.631 [2024-07-12 17:09:32.199003] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.631 [2024-07-12 17:09:32.199011] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22e7540) 00:20:32.631 [2024-07-12 17:09:32.199035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.631 [2024-07-12 17:09:32.199047] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.631 [2024-07-12 17:09:32.199055] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x22e7540) 00:20:32.631 [2024-07-12 17:09:32.199064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.631 [2024-07-12 17:09:32.199075] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.631 [2024-07-12 17:09:32.199082] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x22e7540) 00:20:32.631 [2024-07-12 17:09:32.199091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.631 [2024-07-12 17:09:32.199112] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2347b40, cid 5, qid 0 00:20:32.631 [2024-07-12 17:09:32.199122] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23479c0, cid 4, qid 0 00:20:32.631 [2024-07-12 17:09:32.199130] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2347cc0, cid 6, qid 0 00:20:32.631 [2024-07-12 17:09:32.199137] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2347e40, cid 7, qid 0 00:20:32.631 [2024-07-12 17:09:32.199342] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:32.631 [2024-07-12 17:09:32.199357] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:32.631 [2024-07-12 17:09:32.199363] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:32.631 [2024-07-12 17:09:32.199370] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22e7540): datao=0, datal=8192, cccid=5 00:20:32.631 [2024-07-12 17:09:32.199377] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2347b40) on tqpair(0x22e7540): expected_datao=0, payload_size=8192 00:20:32.631 [2024-07-12 17:09:32.199384] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.631 [2024-07-12 17:09:32.199404] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:32.631 [2024-07-12 17:09:32.199412] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:32.631 [2024-07-12 17:09:32.199425] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:32.631 [2024-07-12 17:09:32.199435] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:32.631 [2024-07-12 17:09:32.199441] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:32.631 [2024-07-12 17:09:32.199447] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22e7540): datao=0, datal=512, cccid=4 00:20:32.631 [2024-07-12 17:09:32.199454] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x23479c0) on tqpair(0x22e7540): expected_datao=0, payload_size=512 00:20:32.631 [2024-07-12 17:09:32.199461] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.631 [2024-07-12 17:09:32.199470] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:32.631 [2024-07-12 17:09:32.199477] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:32.631 [2024-07-12 17:09:32.199485] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:32.631 [2024-07-12 17:09:32.199493] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:32.631 [2024-07-12 17:09:32.199499] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:32.631 [2024-07-12 17:09:32.199505] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22e7540): datao=0, datal=512, cccid=6 00:20:32.631 [2024-07-12 17:09:32.199512] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2347cc0) on tqpair(0x22e7540): expected_datao=0, payload_size=512 00:20:32.631 [2024-07-12 17:09:32.199519] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.631 [2024-07-12 17:09:32.199528] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:32.631 [2024-07-12 17:09:32.199534] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:32.631 [2024-07-12 17:09:32.199542] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:32.631 [2024-07-12 17:09:32.199550] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:32.631 [2024-07-12 17:09:32.199557] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:32.631 [2024-07-12 17:09:32.199563] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22e7540): datao=0, datal=4096, cccid=7 00:20:32.631 [2024-07-12 17:09:32.199570] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2347e40) on tqpair(0x22e7540): expected_datao=0, payload_size=4096 00:20:32.631 [2024-07-12 17:09:32.199577] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.631 [2024-07-12 17:09:32.199586] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:32.631 [2024-07-12 17:09:32.199592] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:32.631 [2024-07-12 17:09:32.199604] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.631 [2024-07-12 17:09:32.199613] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.631 [2024-07-12 17:09:32.199619] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.631 [2024-07-12 17:09:32.199625] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2347b40) on tqpair=0x22e7540 00:20:32.631 [2024-07-12 17:09:32.199643] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.631 [2024-07-12 17:09:32.199653] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.631 [2024-07-12 17:09:32.199659] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.631 [2024-07-12 17:09:32.199668] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23479c0) on tqpair=0x22e7540 00:20:32.631 [2024-07-12 17:09:32.199683] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.631 [2024-07-12 17:09:32.199693] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.631 [2024-07-12 17:09:32.199699] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:20:32.632 [2024-07-12 17:09:32.199706] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2347cc0) on tqpair=0x22e7540 00:20:32.632 [2024-07-12 17:09:32.199715] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.632 [2024-07-12 17:09:32.199749] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.632 [2024-07-12 17:09:32.199757] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.632 [2024-07-12 17:09:32.199764] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2347e40) on tqpair=0x22e7540 00:20:32.632 ===================================================== 00:20:32.632 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:32.632 ===================================================== 00:20:32.632 Controller Capabilities/Features 00:20:32.632 ================================ 00:20:32.632 Vendor ID: 8086 00:20:32.632 Subsystem Vendor ID: 8086 00:20:32.632 Serial Number: SPDK00000000000001 00:20:32.632 Model Number: SPDK bdev Controller 00:20:32.632 Firmware Version: 24.09 00:20:32.632 Recommended Arb Burst: 6 00:20:32.632 IEEE OUI Identifier: e4 d2 5c 00:20:32.632 Multi-path I/O 00:20:32.632 May have multiple subsystem ports: Yes 00:20:32.632 May have multiple controllers: Yes 00:20:32.632 Associated with SR-IOV VF: No 00:20:32.632 Max Data Transfer Size: 131072 00:20:32.632 Max Number of Namespaces: 32 00:20:32.632 Max Number of I/O Queues: 127 00:20:32.632 NVMe Specification Version (VS): 1.3 00:20:32.632 NVMe Specification Version (Identify): 1.3 00:20:32.632 Maximum Queue Entries: 128 00:20:32.632 Contiguous Queues Required: Yes 00:20:32.632 Arbitration Mechanisms Supported 00:20:32.632 Weighted Round Robin: Not Supported 00:20:32.632 Vendor Specific: Not Supported 00:20:32.632 Reset Timeout: 15000 ms 00:20:32.632 Doorbell Stride: 4 bytes 00:20:32.632 NVM Subsystem Reset: Not Supported 00:20:32.632 Command Sets Supported 00:20:32.632 NVM Command Set: Supported 00:20:32.632 Boot Partition: Not Supported 00:20:32.632 Memory Page Size Minimum: 4096 bytes 00:20:32.632 Memory Page Size Maximum: 4096 bytes 00:20:32.632 Persistent Memory Region: Not Supported 00:20:32.632 Optional Asynchronous Events Supported 00:20:32.632 Namespace Attribute Notices: Supported 00:20:32.632 Firmware Activation Notices: Not Supported 00:20:32.632 ANA Change Notices: Not Supported 00:20:32.632 PLE Aggregate Log Change Notices: Not Supported 00:20:32.632 LBA Status Info Alert Notices: Not Supported 00:20:32.632 EGE Aggregate Log Change Notices: Not Supported 00:20:32.632 Normal NVM Subsystem Shutdown event: Not Supported 00:20:32.632 Zone Descriptor Change Notices: Not Supported 00:20:32.632 Discovery Log Change Notices: Not Supported 00:20:32.632 Controller Attributes 00:20:32.632 128-bit Host Identifier: Supported 00:20:32.632 Non-Operational Permissive Mode: Not Supported 00:20:32.632 NVM Sets: Not Supported 00:20:32.632 Read Recovery Levels: Not Supported 00:20:32.632 Endurance Groups: Not Supported 00:20:32.632 Predictable Latency Mode: Not Supported 00:20:32.632 Traffic Based Keep ALive: Not Supported 00:20:32.632 Namespace Granularity: Not Supported 00:20:32.632 SQ Associations: Not Supported 00:20:32.632 UUID List: Not Supported 00:20:32.632 Multi-Domain Subsystem: Not Supported 00:20:32.632 Fixed Capacity Management: Not Supported 00:20:32.632 Variable Capacity Management: Not Supported 00:20:32.632 Delete Endurance Group: Not Supported 00:20:32.632 Delete NVM Set: 
Not Supported 00:20:32.632 Extended LBA Formats Supported: Not Supported 00:20:32.632 Flexible Data Placement Supported: Not Supported 00:20:32.632 00:20:32.632 Controller Memory Buffer Support 00:20:32.632 ================================ 00:20:32.632 Supported: No 00:20:32.632 00:20:32.632 Persistent Memory Region Support 00:20:32.632 ================================ 00:20:32.632 Supported: No 00:20:32.632 00:20:32.632 Admin Command Set Attributes 00:20:32.632 ============================ 00:20:32.632 Security Send/Receive: Not Supported 00:20:32.632 Format NVM: Not Supported 00:20:32.632 Firmware Activate/Download: Not Supported 00:20:32.632 Namespace Management: Not Supported 00:20:32.632 Device Self-Test: Not Supported 00:20:32.632 Directives: Not Supported 00:20:32.632 NVMe-MI: Not Supported 00:20:32.632 Virtualization Management: Not Supported 00:20:32.632 Doorbell Buffer Config: Not Supported 00:20:32.632 Get LBA Status Capability: Not Supported 00:20:32.632 Command & Feature Lockdown Capability: Not Supported 00:20:32.632 Abort Command Limit: 4 00:20:32.632 Async Event Request Limit: 4 00:20:32.632 Number of Firmware Slots: N/A 00:20:32.632 Firmware Slot 1 Read-Only: N/A 00:20:32.632 Firmware Activation Without Reset: N/A 00:20:32.632 Multiple Update Detection Support: N/A 00:20:32.632 Firmware Update Granularity: No Information Provided 00:20:32.632 Per-Namespace SMART Log: No 00:20:32.632 Asymmetric Namespace Access Log Page: Not Supported 00:20:32.632 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:32.632 Command Effects Log Page: Supported 00:20:32.632 Get Log Page Extended Data: Supported 00:20:32.632 Telemetry Log Pages: Not Supported 00:20:32.632 Persistent Event Log Pages: Not Supported 00:20:32.632 Supported Log Pages Log Page: May Support 00:20:32.632 Commands Supported & Effects Log Page: Not Supported 00:20:32.632 Feature Identifiers & Effects Log Page:May Support 00:20:32.632 NVMe-MI Commands & Effects Log Page: May Support 00:20:32.632 Data Area 4 for Telemetry Log: Not Supported 00:20:32.632 Error Log Page Entries Supported: 128 00:20:32.632 Keep Alive: Supported 00:20:32.632 Keep Alive Granularity: 10000 ms 00:20:32.632 00:20:32.632 NVM Command Set Attributes 00:20:32.632 ========================== 00:20:32.632 Submission Queue Entry Size 00:20:32.632 Max: 64 00:20:32.632 Min: 64 00:20:32.632 Completion Queue Entry Size 00:20:32.632 Max: 16 00:20:32.632 Min: 16 00:20:32.632 Number of Namespaces: 32 00:20:32.632 Compare Command: Supported 00:20:32.632 Write Uncorrectable Command: Not Supported 00:20:32.632 Dataset Management Command: Supported 00:20:32.632 Write Zeroes Command: Supported 00:20:32.632 Set Features Save Field: Not Supported 00:20:32.632 Reservations: Supported 00:20:32.632 Timestamp: Not Supported 00:20:32.632 Copy: Supported 00:20:32.632 Volatile Write Cache: Present 00:20:32.632 Atomic Write Unit (Normal): 1 00:20:32.632 Atomic Write Unit (PFail): 1 00:20:32.632 Atomic Compare & Write Unit: 1 00:20:32.632 Fused Compare & Write: Supported 00:20:32.632 Scatter-Gather List 00:20:32.632 SGL Command Set: Supported 00:20:32.632 SGL Keyed: Supported 00:20:32.632 SGL Bit Bucket Descriptor: Not Supported 00:20:32.632 SGL Metadata Pointer: Not Supported 00:20:32.632 Oversized SGL: Not Supported 00:20:32.632 SGL Metadata Address: Not Supported 00:20:32.632 SGL Offset: Supported 00:20:32.632 Transport SGL Data Block: Not Supported 00:20:32.632 Replay Protected Memory Block: Not Supported 00:20:32.632 00:20:32.632 Firmware Slot Information 00:20:32.632 
========================= 00:20:32.632 Active slot: 1 00:20:32.632 Slot 1 Firmware Revision: 24.09 00:20:32.632 00:20:32.632 00:20:32.632 Commands Supported and Effects 00:20:32.632 ============================== 00:20:32.632 Admin Commands 00:20:32.632 -------------- 00:20:32.632 Get Log Page (02h): Supported 00:20:32.632 Identify (06h): Supported 00:20:32.632 Abort (08h): Supported 00:20:32.632 Set Features (09h): Supported 00:20:32.632 Get Features (0Ah): Supported 00:20:32.632 Asynchronous Event Request (0Ch): Supported 00:20:32.632 Keep Alive (18h): Supported 00:20:32.632 I/O Commands 00:20:32.632 ------------ 00:20:32.632 Flush (00h): Supported LBA-Change 00:20:32.632 Write (01h): Supported LBA-Change 00:20:32.632 Read (02h): Supported 00:20:32.632 Compare (05h): Supported 00:20:32.632 Write Zeroes (08h): Supported LBA-Change 00:20:32.632 Dataset Management (09h): Supported LBA-Change 00:20:32.632 Copy (19h): Supported LBA-Change 00:20:32.632 00:20:32.632 Error Log 00:20:32.632 ========= 00:20:32.632 00:20:32.632 Arbitration 00:20:32.632 =========== 00:20:32.632 Arbitration Burst: 1 00:20:32.632 00:20:32.632 Power Management 00:20:32.632 ================ 00:20:32.632 Number of Power States: 1 00:20:32.632 Current Power State: Power State #0 00:20:32.632 Power State #0: 00:20:32.632 Max Power: 0.00 W 00:20:32.632 Non-Operational State: Operational 00:20:32.632 Entry Latency: Not Reported 00:20:32.632 Exit Latency: Not Reported 00:20:32.632 Relative Read Throughput: 0 00:20:32.632 Relative Read Latency: 0 00:20:32.632 Relative Write Throughput: 0 00:20:32.632 Relative Write Latency: 0 00:20:32.632 Idle Power: Not Reported 00:20:32.632 Active Power: Not Reported 00:20:32.632 Non-Operational Permissive Mode: Not Supported 00:20:32.632 00:20:32.632 Health Information 00:20:32.632 ================== 00:20:32.632 Critical Warnings: 00:20:32.632 Available Spare Space: OK 00:20:32.632 Temperature: OK 00:20:32.632 Device Reliability: OK 00:20:32.632 Read Only: No 00:20:32.632 Volatile Memory Backup: OK 00:20:32.632 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:32.632 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:20:32.632 Available Spare: 0% 00:20:32.632 Available Spare Threshold: 0% 00:20:32.632 Life Percentage Used:[2024-07-12 17:09:32.199880] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.633 [2024-07-12 17:09:32.199891] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x22e7540) 00:20:32.633 [2024-07-12 17:09:32.199902] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.633 [2024-07-12 17:09:32.199925] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2347e40, cid 7, qid 0 00:20:32.633 [2024-07-12 17:09:32.200065] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.633 [2024-07-12 17:09:32.200078] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.633 [2024-07-12 17:09:32.200084] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.633 [2024-07-12 17:09:32.200090] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2347e40) on tqpair=0x22e7540 00:20:32.633 [2024-07-12 17:09:32.200135] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:20:32.633 [2024-07-12 17:09:32.200154] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23473c0) on 
tqpair=0x22e7540 00:20:32.633 [2024-07-12 17:09:32.200164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.633 [2024-07-12 17:09:32.200172] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2347540) on tqpair=0x22e7540 00:20:32.633 [2024-07-12 17:09:32.200180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.633 [2024-07-12 17:09:32.200188] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23476c0) on tqpair=0x22e7540 00:20:32.633 [2024-07-12 17:09:32.200195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.633 [2024-07-12 17:09:32.200203] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2347840) on tqpair=0x22e7540 00:20:32.633 [2024-07-12 17:09:32.200210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.633 [2024-07-12 17:09:32.200222] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.633 [2024-07-12 17:09:32.200229] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.633 [2024-07-12 17:09:32.200236] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22e7540) 00:20:32.633 [2024-07-12 17:09:32.200246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.633 [2024-07-12 17:09:32.200267] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2347840, cid 3, qid 0 00:20:32.633 [2024-07-12 17:09:32.200402] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.633 [2024-07-12 17:09:32.200416] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.633 [2024-07-12 17:09:32.200422] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.633 [2024-07-12 17:09:32.200429] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2347840) on tqpair=0x22e7540 00:20:32.633 [2024-07-12 17:09:32.200443] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.633 [2024-07-12 17:09:32.200451] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.633 [2024-07-12 17:09:32.200457] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22e7540) 00:20:32.633 [2024-07-12 17:09:32.200467] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.633 [2024-07-12 17:09:32.200493] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2347840, cid 3, qid 0 00:20:32.633 [2024-07-12 17:09:32.200590] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.633 [2024-07-12 17:09:32.200604] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.633 [2024-07-12 17:09:32.200610] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.633 [2024-07-12 17:09:32.200616] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2347840) on tqpair=0x22e7540 00:20:32.633 [2024-07-12 17:09:32.200623] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:32.633 [2024-07-12 17:09:32.200631] 
nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:32.633 [2024-07-12 17:09:32.200647] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.633 [2024-07-12 17:09:32.200655] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.633 [2024-07-12 17:09:32.200661] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22e7540) 00:20:32.633 [2024-07-12 17:09:32.200671] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.633 [2024-07-12 17:09:32.200691] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2347840, cid 3, qid 0 00:20:32.633 [2024-07-12 17:09:32.200805] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.633 [2024-07-12 17:09:32.200820] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.633 [2024-07-12 17:09:32.200827] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.633 [2024-07-12 17:09:32.200834] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2347840) on tqpair=0x22e7540 00:20:32.633 [2024-07-12 17:09:32.200850] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.633 [2024-07-12 17:09:32.200860] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.633 [2024-07-12 17:09:32.200866] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22e7540) 00:20:32.633 [2024-07-12 17:09:32.200876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.633 [2024-07-12 17:09:32.200897] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2347840, cid 3, qid 0 00:20:32.633 [2024-07-12 17:09:32.201003] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.633 [2024-07-12 17:09:32.201015] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.633 [2024-07-12 17:09:32.201036] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.633 [2024-07-12 17:09:32.201043] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2347840) on tqpair=0x22e7540 00:20:32.633 [2024-07-12 17:09:32.201060] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.633 [2024-07-12 17:09:32.201069] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.633 [2024-07-12 17:09:32.201075] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22e7540) 00:20:32.633 [2024-07-12 17:09:32.201085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.633 [2024-07-12 17:09:32.201105] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2347840, cid 3, qid 0 00:20:32.633 [2024-07-12 17:09:32.201206] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.633 [2024-07-12 17:09:32.201220] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.633 [2024-07-12 17:09:32.201229] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.633 [2024-07-12 17:09:32.201236] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2347840) on tqpair=0x22e7540 00:20:32.633 [2024-07-12 17:09:32.201252] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:20:32.633 [2024-07-12 17:09:32.201261] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.633 [2024-07-12 17:09:32.201267] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22e7540) 00:20:32.633 [2024-07-12 17:09:32.201277] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.633 [2024-07-12 17:09:32.201297] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2347840, cid 3, qid 0 00:20:32.633 [2024-07-12 17:09:32.201379] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.633 [2024-07-12 17:09:32.201392] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.633 [2024-07-12 17:09:32.201398] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.633 [2024-07-12 17:09:32.201404] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2347840) on tqpair=0x22e7540 00:20:32.633 [2024-07-12 17:09:32.201420] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.633 [2024-07-12 17:09:32.201429] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.633 [2024-07-12 17:09:32.201435] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22e7540) 00:20:32.633 [2024-07-12 17:09:32.201445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.633 [2024-07-12 17:09:32.201465] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2347840, cid 3, qid 0 00:20:32.633 [2024-07-12 17:09:32.201557] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.633 [2024-07-12 17:09:32.201568] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.633 [2024-07-12 17:09:32.201574] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.633 [2024-07-12 17:09:32.201581] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2347840) on tqpair=0x22e7540 00:20:32.633 [2024-07-12 17:09:32.201596] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.633 [2024-07-12 17:09:32.201605] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.633 [2024-07-12 17:09:32.201611] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22e7540) 00:20:32.633 [2024-07-12 17:09:32.201621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.633 [2024-07-12 17:09:32.201640] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2347840, cid 3, qid 0 00:20:32.633 [2024-07-12 17:09:32.205747] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.633 [2024-07-12 17:09:32.205764] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.633 [2024-07-12 17:09:32.205770] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.633 [2024-07-12 17:09:32.205777] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2347840) on tqpair=0x22e7540 00:20:32.633 [2024-07-12 17:09:32.205794] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.633 [2024-07-12 17:09:32.205804] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.633 [2024-07-12 17:09:32.205810] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22e7540) 00:20:32.633 [2024-07-12 17:09:32.205820] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.633 [2024-07-12 17:09:32.205841] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2347840, cid 3, qid 0 00:20:32.633 [2024-07-12 17:09:32.205975] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.633 [2024-07-12 17:09:32.205989] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.633 [2024-07-12 17:09:32.205995] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.633 [2024-07-12 17:09:32.206005] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2347840) on tqpair=0x22e7540 00:20:32.633 [2024-07-12 17:09:32.206019] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:20:32.633 0% 00:20:32.633 Data Units Read: 0 00:20:32.633 Data Units Written: 0 00:20:32.633 Host Read Commands: 0 00:20:32.633 Host Write Commands: 0 00:20:32.633 Controller Busy Time: 0 minutes 00:20:32.633 Power Cycles: 0 00:20:32.633 Power On Hours: 0 hours 00:20:32.633 Unsafe Shutdowns: 0 00:20:32.633 Unrecoverable Media Errors: 0 00:20:32.633 Lifetime Error Log Entries: 0 00:20:32.633 Warning Temperature Time: 0 minutes 00:20:32.633 Critical Temperature Time: 0 minutes 00:20:32.633 00:20:32.633 Number of Queues 00:20:32.633 ================ 00:20:32.633 Number of I/O Submission Queues: 127 00:20:32.633 Number of I/O Completion Queues: 127 00:20:32.633 00:20:32.633 Active Namespaces 00:20:32.633 ================= 00:20:32.633 Namespace ID:1 00:20:32.633 Error Recovery Timeout: Unlimited 00:20:32.634 Command Set Identifier: NVM (00h) 00:20:32.634 Deallocate: Supported 00:20:32.634 Deallocated/Unwritten Error: Not Supported 00:20:32.634 Deallocated Read Value: Unknown 00:20:32.634 Deallocate in Write Zeroes: Not Supported 00:20:32.634 Deallocated Guard Field: 0xFFFF 00:20:32.634 Flush: Supported 00:20:32.634 Reservation: Supported 00:20:32.634 Namespace Sharing Capabilities: Multiple Controllers 00:20:32.634 Size (in LBAs): 131072 (0GiB) 00:20:32.634 Capacity (in LBAs): 131072 (0GiB) 00:20:32.634 Utilization (in LBAs): 131072 (0GiB) 00:20:32.634 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:32.634 EUI64: ABCDEF0123456789 00:20:32.634 UUID: 36be0a36-b32f-4700-9fc1-5a029f4ab3cf 00:20:32.634 Thin Provisioning: Not Supported 00:20:32.634 Per-NS Atomic Units: Yes 00:20:32.634 Atomic Boundary Size (Normal): 0 00:20:32.634 Atomic Boundary Size (PFail): 0 00:20:32.634 Atomic Boundary Offset: 0 00:20:32.634 Maximum Single Source Range Length: 65535 00:20:32.634 Maximum Copy Length: 65535 00:20:32.634 Maximum Source Range Count: 1 00:20:32.634 NGUID/EUI64 Never Reused: No 00:20:32.634 Namespace Write Protected: No 00:20:32.634 Number of LBA Formats: 1 00:20:32.634 Current LBA Format: LBA Format #00 00:20:32.634 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:32.634 00:20:32.634 17:09:32 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:20:32.634 17:09:32 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:32.634 17:09:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.634 17:09:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:32.634 17:09:32 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.634 17:09:32 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:32.634 17:09:32 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:20:32.634 17:09:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:32.634 17:09:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:20:32.634 17:09:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:32.634 17:09:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:20:32.634 17:09:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:32.634 17:09:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:32.634 rmmod nvme_tcp 00:20:32.634 rmmod nvme_fabrics 00:20:32.634 rmmod nvme_keyring 00:20:32.634 17:09:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:32.634 17:09:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:20:32.634 17:09:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:20:32.634 17:09:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1177277 ']' 00:20:32.634 17:09:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1177277 00:20:32.634 17:09:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 1177277 ']' 00:20:32.634 17:09:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 1177277 00:20:32.634 17:09:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:20:32.634 17:09:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:32.634 17:09:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1177277 00:20:32.634 17:09:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:32.634 17:09:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:32.634 17:09:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1177277' 00:20:32.634 killing process with pid 1177277 00:20:32.892 17:09:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 1177277 00:20:32.892 17:09:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 1177277 00:20:33.151 17:09:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:33.151 17:09:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:33.151 17:09:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:33.151 17:09:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:33.151 17:09:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:33.151 17:09:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:33.151 17:09:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:33.151 17:09:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.051 17:09:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:35.051 00:20:35.051 real 0m5.772s 00:20:35.051 user 0m5.186s 00:20:35.051 sys 0m1.984s 00:20:35.051 17:09:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:35.051 17:09:34 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@10 -- # set +x 00:20:35.051 ************************************ 00:20:35.051 END TEST nvmf_identify 00:20:35.051 ************************************ 00:20:35.051 17:09:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:35.051 17:09:34 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:35.051 17:09:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:35.051 17:09:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:35.051 17:09:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:35.051 ************************************ 00:20:35.051 START TEST nvmf_perf 00:20:35.051 ************************************ 00:20:35.051 17:09:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:35.051 * Looking for test storage... 00:20:35.051 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:35.051 17:09:34 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:35.051 17:09:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:20:35.051 17:09:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:35.051 17:09:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:35.051 17:09:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:35.051 17:09:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:35.051 17:09:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:35.051 17:09:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:35.051 17:09:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:35.051 17:09:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:35.051 17:09:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:35.051 17:09:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:35.309 17:09:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:35.309 17:09:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:35.309 17:09:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:35.309 17:09:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:35.309 17:09:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:35.309 17:09:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:35.309 17:09:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:35.309 17:09:34 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:35.309 17:09:34 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:35.309 17:09:34 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:35.309 17:09:34 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.309 17:09:34 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.309 17:09:34 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.309 17:09:34 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:20:35.309 17:09:34 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.309 17:09:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:20:35.309 17:09:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:35.309 17:09:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:35.309 17:09:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:35.309 17:09:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:35.309 17:09:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:35.309 17:09:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:35.309 17:09:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:35.309 17:09:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:35.309 17:09:34 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:35.309 17:09:34 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:35.309 17:09:34 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:35.309 17:09:34 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # 
nvmftestinit 00:20:35.309 17:09:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:35.309 17:09:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:35.309 17:09:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:35.309 17:09:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:35.309 17:09:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:35.309 17:09:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:35.309 17:09:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:35.309 17:09:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.309 17:09:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:35.309 17:09:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:35.309 17:09:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:20:35.309 17:09:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:37.209 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:37.209 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:37.209 Found net devices under 0000:84:00.0: cvl_0_0 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:37.209 17:09:36 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:37.209 Found net devices under 0000:84:00.1: cvl_0_1 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:37.209 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:37.466 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:37.466 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:37.466 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:37.466 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:37.466 17:09:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:37.466 17:09:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:37.466 17:09:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:37.466 17:09:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:37.466 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:37.466 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:20:37.466 00:20:37.466 --- 10.0.0.2 ping statistics --- 00:20:37.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.466 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:20:37.466 17:09:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:37.466 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:37.466 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:20:37.466 00:20:37.466 --- 10.0.0.1 ping statistics --- 00:20:37.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.466 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:20:37.466 17:09:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:37.466 17:09:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:20:37.466 17:09:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:37.466 17:09:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:37.466 17:09:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:37.466 17:09:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:37.466 17:09:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:37.466 17:09:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:37.466 17:09:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:37.466 17:09:37 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:37.466 17:09:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:37.466 17:09:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:37.466 17:09:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:37.466 17:09:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1179369 00:20:37.466 17:09:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:37.466 17:09:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1179369 00:20:37.466 17:09:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 1179369 ']' 00:20:37.466 17:09:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:37.466 17:09:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:37.466 17:09:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:37.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:37.466 17:09:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:37.466 17:09:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:37.466 [2024-07-12 17:09:37.099515] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
00:20:37.466 [2024-07-12 17:09:37.099600] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:37.466 EAL: No free 2048 kB hugepages reported on node 1 00:20:37.723 [2024-07-12 17:09:37.163610] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:37.723 [2024-07-12 17:09:37.269403] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:37.723 [2024-07-12 17:09:37.269457] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:37.723 [2024-07-12 17:09:37.269484] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:37.723 [2024-07-12 17:09:37.269500] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:37.723 [2024-07-12 17:09:37.269510] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:37.723 [2024-07-12 17:09:37.269596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:37.723 [2024-07-12 17:09:37.269697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:37.723 [2024-07-12 17:09:37.269785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:37.723 [2024-07-12 17:09:37.269789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:37.723 17:09:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:37.723 17:09:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:20:37.723 17:09:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:37.723 17:09:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:37.723 17:09:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:37.723 17:09:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:37.723 17:09:37 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:20:37.723 17:09:37 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:20:40.997 17:09:40 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:20:40.997 17:09:40 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:41.254 17:09:40 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:82:00.0 00:20:41.254 17:09:40 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:41.513 17:09:41 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:41.513 17:09:41 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:82:00.0 ']' 00:20:41.513 17:09:41 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:41.513 17:09:41 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:41.513 17:09:41 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:41.770 [2024-07-12 17:09:41.260014] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP 
Transport Init *** 00:20:41.770 17:09:41 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:42.027 17:09:41 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:42.027 17:09:41 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:42.284 17:09:41 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:42.284 17:09:41 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:42.541 17:09:42 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:42.797 [2024-07-12 17:09:42.243650] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:42.797 17:09:42 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:43.055 17:09:42 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:82:00.0 ']' 00:20:43.055 17:09:42 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0' 00:20:43.055 17:09:42 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:43.055 17:09:42 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0' 00:20:44.425 Initializing NVMe Controllers 00:20:44.425 Attached to NVMe Controller at 0000:82:00.0 [8086:0a54] 00:20:44.425 Associating PCIE (0000:82:00.0) NSID 1 with lcore 0 00:20:44.425 Initialization complete. Launching workers. 00:20:44.425 ======================================================== 00:20:44.425 Latency(us) 00:20:44.425 Device Information : IOPS MiB/s Average min max 00:20:44.425 PCIE (0000:82:00.0) NSID 1 from core 0: 85421.30 333.68 374.14 43.10 5268.18 00:20:44.425 ======================================================== 00:20:44.425 Total : 85421.30 333.68 374.14 43.10 5268.18 00:20:44.425 00:20:44.425 17:09:43 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:44.425 EAL: No free 2048 kB hugepages reported on node 1 00:20:45.355 Initializing NVMe Controllers 00:20:45.355 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:45.355 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:45.355 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:45.355 Initialization complete. Launching workers. 
00:20:45.355 ======================================================== 00:20:45.355 Latency(us) 00:20:45.355 Device Information : IOPS MiB/s Average min max 00:20:45.355 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 117.00 0.46 8858.16 137.99 45826.35 00:20:45.356 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 56.00 0.22 17955.44 7946.01 47906.80 00:20:45.356 ======================================================== 00:20:45.356 Total : 173.00 0.68 11802.95 137.99 47906.80 00:20:45.356 00:20:45.356 17:09:44 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:45.356 EAL: No free 2048 kB hugepages reported on node 1 00:20:46.725 Initializing NVMe Controllers 00:20:46.725 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:46.725 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:46.725 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:46.725 Initialization complete. Launching workers. 00:20:46.725 ======================================================== 00:20:46.725 Latency(us) 00:20:46.725 Device Information : IOPS MiB/s Average min max 00:20:46.725 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8565.41 33.46 3736.94 573.95 9549.53 00:20:46.725 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3899.73 15.23 8232.66 4287.08 16202.16 00:20:46.725 ======================================================== 00:20:46.725 Total : 12465.14 48.69 5143.43 573.95 16202.16 00:20:46.725 00:20:46.725 17:09:46 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:20:46.725 17:09:46 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:20:46.725 17:09:46 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:46.725 EAL: No free 2048 kB hugepages reported on node 1 00:20:49.253 Initializing NVMe Controllers 00:20:49.253 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:49.253 Controller IO queue size 128, less than required. 00:20:49.253 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:49.253 Controller IO queue size 128, less than required. 00:20:49.253 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:49.253 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:49.253 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:49.253 Initialization complete. Launching workers. 
00:20:49.253 ======================================================== 00:20:49.253 Latency(us) 00:20:49.253 Device Information : IOPS MiB/s Average min max 00:20:49.253 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1508.50 377.12 86993.65 61450.02 142374.98 00:20:49.253 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 584.11 146.03 226802.71 127329.55 302030.43 00:20:49.253 ======================================================== 00:20:49.253 Total : 2092.61 523.15 126018.67 61450.02 302030.43 00:20:49.253 00:20:49.253 17:09:48 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:20:49.253 EAL: No free 2048 kB hugepages reported on node 1 00:20:49.511 No valid NVMe controllers or AIO or URING devices found 00:20:49.511 Initializing NVMe Controllers 00:20:49.511 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:49.511 Controller IO queue size 128, less than required. 00:20:49.511 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:49.511 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:49.511 Controller IO queue size 128, less than required. 00:20:49.511 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:49.511 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:20:49.511 WARNING: Some requested NVMe devices were skipped 00:20:49.511 17:09:49 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:20:49.511 EAL: No free 2048 kB hugepages reported on node 1 00:20:52.040 Initializing NVMe Controllers 00:20:52.040 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:52.040 Controller IO queue size 128, less than required. 00:20:52.040 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:52.040 Controller IO queue size 128, less than required. 00:20:52.040 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:52.040 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:52.040 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:52.040 Initialization complete. Launching workers. 
00:20:52.040 00:20:52.040 ==================== 00:20:52.040 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:52.040 TCP transport: 00:20:52.040 polls: 8692 00:20:52.040 idle_polls: 5800 00:20:52.040 sock_completions: 2892 00:20:52.040 nvme_completions: 5301 00:20:52.040 submitted_requests: 7978 00:20:52.040 queued_requests: 1 00:20:52.040 00:20:52.040 ==================== 00:20:52.040 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:52.040 TCP transport: 00:20:52.040 polls: 8520 00:20:52.040 idle_polls: 5460 00:20:52.040 sock_completions: 3060 00:20:52.040 nvme_completions: 5559 00:20:52.040 submitted_requests: 8294 00:20:52.040 queued_requests: 1 00:20:52.040 ======================================================== 00:20:52.040 Latency(us) 00:20:52.040 Device Information : IOPS MiB/s Average min max 00:20:52.040 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1323.32 330.83 100271.74 61766.14 164287.10 00:20:52.040 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1387.73 346.93 93243.37 53166.31 129367.14 00:20:52.040 ======================================================== 00:20:52.040 Total : 2711.05 677.76 96674.05 53166.31 164287.10 00:20:52.040 00:20:52.040 17:09:51 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:20:52.040 17:09:51 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:52.298 17:09:51 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:20:52.298 17:09:51 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:20:52.298 17:09:51 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:20:52.298 17:09:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:52.298 17:09:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:20:52.298 17:09:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:52.298 17:09:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:20:52.298 17:09:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:52.298 17:09:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:52.298 rmmod nvme_tcp 00:20:52.298 rmmod nvme_fabrics 00:20:52.298 rmmod nvme_keyring 00:20:52.298 17:09:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:52.298 17:09:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:20:52.298 17:09:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:20:52.298 17:09:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1179369 ']' 00:20:52.298 17:09:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1179369 00:20:52.298 17:09:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 1179369 ']' 00:20:52.298 17:09:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 1179369 00:20:52.298 17:09:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:20:52.298 17:09:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:52.298 17:09:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1179369 00:20:52.298 17:09:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:52.298 17:09:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:52.298 17:09:51 nvmf_tcp.nvmf_perf -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 1179369' 00:20:52.298 killing process with pid 1179369 00:20:52.298 17:09:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 1179369 00:20:52.298 17:09:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 1179369 00:20:54.196 17:09:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:54.196 17:09:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:54.196 17:09:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:54.196 17:09:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:54.196 17:09:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:54.196 17:09:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:54.196 17:09:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:54.196 17:09:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:56.177 17:09:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:56.177 00:20:56.177 real 0m20.876s 00:20:56.177 user 1m3.589s 00:20:56.177 sys 0m5.635s 00:20:56.177 17:09:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:56.177 17:09:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:56.177 ************************************ 00:20:56.177 END TEST nvmf_perf 00:20:56.177 ************************************ 00:20:56.177 17:09:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:56.177 17:09:55 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:56.177 17:09:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:56.177 17:09:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:56.177 17:09:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:56.177 ************************************ 00:20:56.177 START TEST nvmf_fio_host 00:20:56.177 ************************************ 00:20:56.177 17:09:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:56.177 * Looking for test storage... 
00:20:56.177 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:56.177 17:09:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:56.177 17:09:55 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:56.177 17:09:55 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:56.177 17:09:55 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:56.177 17:09:55 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.177 17:09:55 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.177 17:09:55 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.177 17:09:55 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:56.177 17:09:55 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.177 17:09:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:56.177 17:09:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:20:56.177 17:09:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:56.177 17:09:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:56.177 17:09:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:20:56.177 17:09:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:56.177 17:09:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:56.177 17:09:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:56.177 17:09:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:56.177 17:09:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:56.177 17:09:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:56.177 17:09:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:56.178 17:09:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:56.178 17:09:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:56.178 17:09:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:56.178 17:09:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:56.178 17:09:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:56.178 17:09:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:56.178 17:09:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:56.178 17:09:55 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:56.178 17:09:55 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:56.178 17:09:55 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:56.178 17:09:55 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.178 17:09:55 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.178 17:09:55 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.178 17:09:55 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:56.178 17:09:55 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.178 17:09:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:20:56.178 17:09:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:56.178 17:09:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:56.178 17:09:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:56.178 17:09:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:56.178 17:09:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:56.178 17:09:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:56.178 17:09:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:56.178 17:09:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:56.178 17:09:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:56.178 17:09:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:20:56.178 17:09:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:56.178 17:09:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:56.178 17:09:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:56.178 17:09:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:56.178 17:09:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:56.178 17:09:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:56.178 17:09:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:56.178 17:09:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:56.178 17:09:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:56.178 17:09:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:56.178 17:09:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:20:56.178 17:09:55 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:58.081 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
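The trace above shows nvmf/common.sh building its E810 allow-list from Intel PCI device IDs (0x1592, 0x159b) and then checking each discovered port. A minimal standalone sketch of that discovery step, assuming only lspci and sysfs are available (the device IDs are taken from the trace; the rest is illustrative and is not the common.sh code itself):

    #!/usr/bin/env bash
    # List Intel E810 ports by PCI ID and the kernel net devices behind them,
    # mirroring the gather_supported_nvmf_pci_devs logic traced above.
    shopt -s nullglob
    e810_ids=("8086:1592" "8086:159b")   # IDs seen in the trace; adjust as needed
    for id in "${e810_ids[@]}"; do
        while read -r pci _; do
            nets=(/sys/bus/pci/devices/"$pci"/net/*)   # same sysfs path the script globs
            echo "Found $pci ($id): ${nets[*]##*/}"
        done < <(lspci -D -n -d "$id")
    done

On this node such a loop should report the two 0000:84:00.x ports together with their cvl_0_0/cvl_0_1 interfaces, which matches what the next entries print.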
00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:58.081 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:58.081 Found net devices under 0000:84:00.0: cvl_0_0 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:58.081 Found net devices under 0000:84:00.1: cvl_0_1 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
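With both ports mapped to net devices and is_hw=yes, nvmf_tcp_init (traced in the entries that follow) isolates one E810 port in a private network namespace for the target and leaves the other in the root namespace as the initiator. A rough sketch of the equivalent iproute2 plumbing, using the interface names and addresses visible in this run (an illustration of the observed effect, not the common.sh code itself):

    #!/usr/bin/env bash
    set -e
    # Target side: cvl_0_0 is moved into cvl_0_0_ns_spdk and becomes 10.0.0.2.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Initiator side: cvl_0_1 stays in the root namespace as 10.0.0.1.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up
    # Open the NVMe/TCP port and verify reachability in both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target application is then started inside that namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 ...), which is why the 10.0.0.2:4420 listener created later in this test is reachable from the host-side initiator.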
00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:58.081 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:58.082 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:58.340 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:58.340 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:58.340 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:58.340 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:58.340 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:58.340 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:58.340 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:58.340 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:58.340 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:20:58.340 00:20:58.340 --- 10.0.0.2 ping statistics --- 00:20:58.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:58.340 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:20:58.340 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:58.340 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:58.340 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:20:58.340 00:20:58.340 --- 10.0.0.1 ping statistics --- 00:20:58.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:58.340 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:20:58.340 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:58.340 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:20:58.340 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:58.340 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:58.340 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:58.340 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:58.340 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:58.340 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:58.340 17:09:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:58.340 17:09:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:20:58.340 17:09:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:20:58.340 17:09:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:58.340 17:09:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.340 17:09:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1183225 00:20:58.340 17:09:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:58.340 17:09:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:58.340 17:09:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1183225 00:20:58.340 17:09:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 1183225 ']' 00:20:58.340 17:09:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:58.340 17:09:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:58.340 17:09:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:58.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:58.340 17:09:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:58.340 17:09:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.340 [2024-07-12 17:09:57.934280] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:20:58.340 [2024-07-12 17:09:57.934355] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:58.340 EAL: No free 2048 kB hugepages reported on node 1 00:20:58.340 [2024-07-12 17:09:58.004866] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:58.598 [2024-07-12 17:09:58.113133] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:58.598 [2024-07-12 17:09:58.113190] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:58.598 [2024-07-12 17:09:58.113218] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:58.598 [2024-07-12 17:09:58.113229] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:58.598 [2024-07-12 17:09:58.113239] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:58.598 [2024-07-12 17:09:58.113329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:58.598 [2024-07-12 17:09:58.113388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:58.598 [2024-07-12 17:09:58.113497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:58.598 [2024-07-12 17:09:58.113501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.598 17:09:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:58.598 17:09:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:20:58.598 17:09:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:58.855 [2024-07-12 17:09:58.502442] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:58.855 17:09:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:20:58.855 17:09:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:58.855 17:09:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.855 17:09:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:59.112 Malloc1 00:20:59.368 17:09:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:59.368 17:09:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:59.634 17:09:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:59.893 [2024-07-12 17:09:59.516353] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:59.893 17:09:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:00.149 17:09:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:21:00.149 17:09:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:00.149 17:09:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:21:00.149 17:09:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:00.149 17:09:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:00.149 17:09:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:00.149 17:09:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:00.149 17:09:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:21:00.149 17:09:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:00.149 17:09:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:00.149 17:09:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:00.149 17:09:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:21:00.149 17:09:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:00.149 17:09:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:00.149 17:09:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:00.149 17:09:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:00.149 17:09:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:00.149 17:09:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:00.149 17:09:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:00.149 17:09:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:00.149 17:09:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:00.149 17:09:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:00.149 17:09:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:00.406 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:00.406 fio-3.35 00:21:00.406 Starting 1 thread 00:21:00.406 EAL: No free 2048 kB hugepages reported on node 1 00:21:02.927 00:21:02.927 test: (groupid=0, jobs=1): err= 0: pid=1183597: Fri Jul 12 17:10:02 2024 00:21:02.927 read: IOPS=9168, BW=35.8MiB/s (37.6MB/s)(71.8MiB/2006msec) 00:21:02.927 slat (usec): min=2, max=133, avg= 3.08, stdev= 1.97 00:21:02.927 clat (usec): min=2391, max=12393, avg=7619.62, stdev=612.01 00:21:02.927 lat (usec): min=2413, max=12396, avg=7622.69, stdev=611.92 00:21:02.927 clat percentiles (usec): 00:21:02.927 | 1.00th=[ 6259], 5.00th=[ 6652], 10.00th=[ 6849], 20.00th=[ 7111], 00:21:02.927 | 30.00th=[ 7308], 40.00th=[ 7504], 50.00th=[ 7635], 60.00th=[ 7767], 00:21:02.927 | 70.00th=[ 7898], 80.00th=[ 8094], 90.00th=[ 8356], 95.00th=[ 8586], 00:21:02.927 | 99.00th=[ 8979], 99.50th=[ 9241], 99.90th=[11338], 99.95th=[11863], 00:21:02.927 | 99.99th=[12387] 00:21:02.927 bw ( KiB/s): min=35760, 
max=37152, per=99.91%, avg=36642.00, stdev=609.49, samples=4 00:21:02.927 iops : min= 8940, max= 9288, avg=9160.50, stdev=152.37, samples=4 00:21:02.927 write: IOPS=9177, BW=35.8MiB/s (37.6MB/s)(71.9MiB/2006msec); 0 zone resets 00:21:02.927 slat (nsec): min=2465, max=98653, avg=3296.34, stdev=1961.71 00:21:02.927 clat (usec): min=1126, max=11472, avg=6286.11, stdev=517.16 00:21:02.927 lat (usec): min=1133, max=11475, avg=6289.41, stdev=517.10 00:21:02.927 clat percentiles (usec): 00:21:02.927 | 1.00th=[ 5145], 5.00th=[ 5538], 10.00th=[ 5669], 20.00th=[ 5866], 00:21:02.927 | 30.00th=[ 6063], 40.00th=[ 6194], 50.00th=[ 6259], 60.00th=[ 6390], 00:21:02.927 | 70.00th=[ 6521], 80.00th=[ 6652], 90.00th=[ 6915], 95.00th=[ 7046], 00:21:02.927 | 99.00th=[ 7439], 99.50th=[ 7701], 99.90th=[ 9503], 99.95th=[10421], 00:21:02.927 | 99.99th=[11338] 00:21:02.927 bw ( KiB/s): min=36432, max=36928, per=99.99%, avg=36708.00, stdev=230.15, samples=4 00:21:02.927 iops : min= 9108, max= 9232, avg=9177.00, stdev=57.54, samples=4 00:21:02.927 lat (msec) : 2=0.03%, 4=0.11%, 10=99.74%, 20=0.12% 00:21:02.927 cpu : usr=69.58%, sys=28.73%, ctx=51, majf=0, minf=40 00:21:02.927 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:02.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:02.927 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:02.927 issued rwts: total=18392,18410,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:02.927 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:02.927 00:21:02.927 Run status group 0 (all jobs): 00:21:02.927 READ: bw=35.8MiB/s (37.6MB/s), 35.8MiB/s-35.8MiB/s (37.6MB/s-37.6MB/s), io=71.8MiB (75.3MB), run=2006-2006msec 00:21:02.927 WRITE: bw=35.8MiB/s (37.6MB/s), 35.8MiB/s-35.8MiB/s (37.6MB/s-37.6MB/s), io=71.9MiB (75.4MB), run=2006-2006msec 00:21:02.927 17:10:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:02.927 17:10:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:02.927 17:10:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:02.928 17:10:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:02.928 17:10:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:02.928 17:10:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:02.928 17:10:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:21:02.928 17:10:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:02.928 17:10:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:02.928 17:10:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:02.928 17:10:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:21:02.928 17:10:02 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:02.928 17:10:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:02.928 17:10:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:02.928 17:10:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:02.928 17:10:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:02.928 17:10:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:02.928 17:10:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:02.928 17:10:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:02.928 17:10:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:02.928 17:10:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:02.928 17:10:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:02.928 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:02.928 fio-3.35 00:21:02.928 Starting 1 thread 00:21:02.928 EAL: No free 2048 kB hugepages reported on node 1 00:21:05.452 00:21:05.452 test: (groupid=0, jobs=1): err= 0: pid=1184031: Fri Jul 12 17:10:04 2024 00:21:05.452 read: IOPS=8153, BW=127MiB/s (134MB/s)(256MiB/2006msec) 00:21:05.452 slat (usec): min=2, max=138, avg= 4.59, stdev= 2.98 00:21:05.452 clat (usec): min=2214, max=18222, avg=9115.22, stdev=2164.61 00:21:05.452 lat (usec): min=2218, max=18226, avg=9119.81, stdev=2164.67 00:21:05.452 clat percentiles (usec): 00:21:05.452 | 1.00th=[ 5014], 5.00th=[ 5800], 10.00th=[ 6390], 20.00th=[ 7242], 00:21:05.452 | 30.00th=[ 7832], 40.00th=[ 8455], 50.00th=[ 8979], 60.00th=[ 9634], 00:21:05.452 | 70.00th=[10159], 80.00th=[10814], 90.00th=[12125], 95.00th=[12911], 00:21:05.452 | 99.00th=[14615], 99.50th=[15401], 99.90th=[17171], 99.95th=[17957], 00:21:05.452 | 99.99th=[18220] 00:21:05.452 bw ( KiB/s): min=59520, max=78560, per=51.97%, avg=67800.00, stdev=9636.57, samples=4 00:21:05.452 iops : min= 3720, max= 4910, avg=4237.50, stdev=602.29, samples=4 00:21:05.452 write: IOPS=4940, BW=77.2MiB/s (80.9MB/s)(139MiB/1803msec); 0 zone resets 00:21:05.452 slat (usec): min=30, max=176, avg=39.80, stdev= 8.82 00:21:05.452 clat (usec): min=5674, max=18724, avg=11593.65, stdev=1734.39 00:21:05.452 lat (usec): min=5710, max=18775, avg=11633.45, stdev=1734.80 00:21:05.452 clat percentiles (usec): 00:21:05.452 | 1.00th=[ 7898], 5.00th=[ 9110], 10.00th=[ 9503], 20.00th=[10159], 00:21:05.452 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11469], 60.00th=[11863], 00:21:05.452 | 70.00th=[12256], 80.00th=[12911], 90.00th=[13829], 95.00th=[14746], 00:21:05.452 | 99.00th=[16319], 99.50th=[16909], 99.90th=[17695], 99.95th=[17957], 00:21:05.452 | 99.99th=[18744] 00:21:05.452 bw ( KiB/s): min=62272, max=81216, per=89.40%, avg=70672.00, stdev=9761.82, samples=4 00:21:05.452 iops : min= 3892, max= 5076, avg=4417.00, stdev=610.11, samples=4 00:21:05.452 lat (msec) : 4=0.14%, 10=48.97%, 20=50.89% 00:21:05.452 cpu : usr=78.30%, sys=18.30%, ctx=72, majf=0, minf=70 
00:21:05.452 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:21:05.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.452 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:05.452 issued rwts: total=16356,8908,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.452 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:05.452 00:21:05.452 Run status group 0 (all jobs): 00:21:05.452 READ: bw=127MiB/s (134MB/s), 127MiB/s-127MiB/s (134MB/s-134MB/s), io=256MiB (268MB), run=2006-2006msec 00:21:05.453 WRITE: bw=77.2MiB/s (80.9MB/s), 77.2MiB/s-77.2MiB/s (80.9MB/s-80.9MB/s), io=139MiB (146MB), run=1803-1803msec 00:21:05.453 17:10:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:05.453 17:10:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:21:05.453 17:10:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:05.453 17:10:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:05.453 17:10:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:21:05.453 17:10:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:05.453 17:10:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:21:05.453 17:10:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:05.453 17:10:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:21:05.453 17:10:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:05.453 17:10:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:05.453 rmmod nvme_tcp 00:21:05.453 rmmod nvme_fabrics 00:21:05.453 rmmod nvme_keyring 00:21:05.453 17:10:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:05.453 17:10:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:21:05.453 17:10:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:21:05.453 17:10:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1183225 ']' 00:21:05.453 17:10:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1183225 00:21:05.453 17:10:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 1183225 ']' 00:21:05.453 17:10:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 1183225 00:21:05.453 17:10:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:21:05.453 17:10:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:05.711 17:10:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1183225 00:21:05.711 17:10:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:05.711 17:10:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:05.711 17:10:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1183225' 00:21:05.711 killing process with pid 1183225 00:21:05.711 17:10:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 1183225 00:21:05.711 17:10:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 1183225 00:21:05.970 17:10:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:05.970 17:10:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp 
== \t\c\p ]] 00:21:05.970 17:10:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:05.970 17:10:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:05.970 17:10:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:05.970 17:10:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.970 17:10:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:05.970 17:10:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:07.876 17:10:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:07.876 00:21:07.876 real 0m11.906s 00:21:07.876 user 0m34.937s 00:21:07.876 sys 0m3.758s 00:21:07.876 17:10:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:07.876 17:10:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.876 ************************************ 00:21:07.876 END TEST nvmf_fio_host 00:21:07.876 ************************************ 00:21:07.876 17:10:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:07.876 17:10:07 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:07.876 17:10:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:07.876 17:10:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:07.876 17:10:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:07.876 ************************************ 00:21:07.876 START TEST nvmf_failover 00:21:07.876 ************************************ 00:21:07.876 17:10:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:08.134 * Looking for test storage... 
00:21:08.134 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:08.134 17:10:07 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:08.134 17:10:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:21:08.134 17:10:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:08.134 17:10:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:08.134 17:10:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:08.134 17:10:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:08.134 17:10:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:08.134 17:10:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:08.134 17:10:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:08.134 17:10:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:08.134 17:10:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:08.134 17:10:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:08.134 17:10:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:08.134 17:10:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:21:08.134 17:10:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:08.134 17:10:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:08.134 17:10:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:08.134 17:10:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:08.134 17:10:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:08.134 17:10:07 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:08.134 17:10:07 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:08.134 17:10:07 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:08.134 17:10:07 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.134 17:10:07 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.134 17:10:07 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.134 17:10:07 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:21:08.134 17:10:07 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.134 17:10:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:21:08.134 17:10:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:08.134 17:10:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:08.134 17:10:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:08.134 17:10:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:08.134 17:10:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:08.134 17:10:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:08.134 17:10:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:08.134 17:10:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:08.134 17:10:07 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:08.134 17:10:07 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:08.134 17:10:07 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:08.134 17:10:07 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:08.134 17:10:07 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:21:08.134 17:10:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:08.134 17:10:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:08.134 17:10:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:08.134 17:10:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:21:08.134 17:10:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:08.134 17:10:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.134 17:10:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:08.134 17:10:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.134 17:10:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:08.134 17:10:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:08.134 17:10:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:21:08.134 17:10:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:10.039 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:21:10.039 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:10.039 Found net devices under 0000:84:00.0: cvl_0_0 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:10.039 Found net devices under 0000:84:00.1: cvl_0_1 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:10.039 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:10.039 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:10.039 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:21:10.039 00:21:10.039 --- 10.0.0.2 ping statistics --- 00:21:10.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.039 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:21:10.040 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:10.040 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:10.040 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:21:10.040 00:21:10.040 --- 10.0.0.1 ping statistics --- 00:21:10.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.040 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:21:10.040 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:10.040 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:21:10.040 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:10.040 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:10.040 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:10.040 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:10.040 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:10.040 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:10.040 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:10.298 17:10:09 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:10.298 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:10.298 17:10:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:10.298 17:10:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:10.298 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1186239 00:21:10.298 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:10.298 17:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1186239 00:21:10.298 17:10:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1186239 ']' 00:21:10.298 17:10:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.298 17:10:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:10.298 17:10:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:10.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:10.298 17:10:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:10.298 17:10:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:10.298 [2024-07-12 17:10:09.802206] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
00:21:10.298 [2024-07-12 17:10:09.802300] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:10.298 EAL: No free 2048 kB hugepages reported on node 1 00:21:10.298 [2024-07-12 17:10:09.871849] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:10.298 [2024-07-12 17:10:09.983878] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:10.298 [2024-07-12 17:10:09.983943] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:10.298 [2024-07-12 17:10:09.983971] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:10.298 [2024-07-12 17:10:09.983983] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:10.298 [2024-07-12 17:10:09.983993] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:10.298 [2024-07-12 17:10:09.984042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:10.298 [2024-07-12 17:10:09.984103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:10.298 [2024-07-12 17:10:09.984108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:10.556 17:10:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:10.556 17:10:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:10.556 17:10:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:10.556 17:10:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:10.556 17:10:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:10.556 17:10:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:10.556 17:10:10 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:10.813 [2024-07-12 17:10:10.361328] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:10.813 17:10:10 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:11.070 Malloc0 00:21:11.070 17:10:10 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:11.328 17:10:10 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:11.585 17:10:11 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:11.843 [2024-07-12 17:10:11.427796] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:11.843 17:10:11 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:12.100 [2024-07-12 
17:10:11.684637] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:12.100 17:10:11 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:12.357 [2024-07-12 17:10:11.981546] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:12.358 17:10:12 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1186528 00:21:12.358 17:10:12 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:12.358 17:10:12 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:12.358 17:10:12 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1186528 /var/tmp/bdevperf.sock 00:21:12.358 17:10:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1186528 ']' 00:21:12.358 17:10:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:12.358 17:10:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:12.358 17:10:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:12.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:12.358 17:10:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:12.358 17:10:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:12.924 17:10:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:12.924 17:10:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:12.924 17:10:12 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:13.182 NVMe0n1 00:21:13.182 17:10:12 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:13.450 00:21:13.450 17:10:13 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1186665 00:21:13.450 17:10:13 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:13.450 17:10:13 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:21:14.825 17:10:14 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:14.825 [2024-07-12 17:10:14.427764] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b550 is same with the state(5) to be set 00:21:14.825 [2024-07-12 17:10:14.427875] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x198b550 is same with the state(5) to be set 00:21:14.825 [... the same nvmf_tcp_qpair_set_recv_state *ERROR* notice for tqpair=0x198b550 repeats several dozen more times between 17:10:14.427907 and 17:10:14.428983 ...] 00:21:14.826 17:10:14 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:21:18.106 17:10:17 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:18.363 00:21:18.363 17:10:17 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:18.621 17:10:18 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:21:21.907 17:10:21 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:21.907 [2024-07-12 17:10:21.459869] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:21.907 17:10:21 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:21:22.840 17:10:22 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:23.098 [2024-07-12 17:10:22.733320] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b45ef0 is same with the state(5) to be set 00:21:23.098 [2024-07-12 17:10:22.733415] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b45ef0 is same with the state(5) to be set 00:21:23.098 [2024-07-12 17:10:22.733447] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b45ef0 is same with the state(5) to be set 00:21:23.098 [2024-07-12 17:10:22.733460] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b45ef0 is same with the state(5) to be set 00:21:23.098 [2024-07-12 17:10:22.733472] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b45ef0 is same with the state(5) to be set 00:21:23.098 [2024-07-12 17:10:22.733484] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b45ef0 is same with the state(5) to be set 00:21:23.098 [2024-07-12 17:10:22.733495] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b45ef0 is same with the state(5) to be set 00:21:23.098 [2024-07-12 17:10:22.733508] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b45ef0 is same with the state(5) to be set 00:21:23.098 17:10:22 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 1186665 00:21:29.756 0 00:21:29.756 17:10:28 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 1186528 00:21:29.756 17:10:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1186528 ']' 00:21:29.756 17:10:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1186528 00:21:29.756 17:10:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:21:29.756 17:10:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:29.756 17:10:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1186528 00:21:29.756 17:10:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:29.756 17:10:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:29.756 17:10:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1186528' 00:21:29.756 killing process 
with pid 1186528 00:21:29.756 17:10:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1186528 00:21:29.756 17:10:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1186528 00:21:29.756 17:10:28 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:29.756 [2024-07-12 17:10:12.046329] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:21:29.756 [2024-07-12 17:10:12.046409] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1186528 ] 00:21:29.756 EAL: No free 2048 kB hugepages reported on node 1 00:21:29.756 [2024-07-12 17:10:12.106782] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.756 [2024-07-12 17:10:12.215651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:29.756 Running I/O for 15 seconds... 00:21:29.756 [2024-07-12 17:10:14.430167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:85584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.756 [2024-07-12 17:10:14.430218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.756 [2024-07-12 17:10:14.430248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:85592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.756 [2024-07-12 17:10:14.430265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.756 [2024-07-12 17:10:14.430282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:85600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.756 [2024-07-12 17:10:14.430296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.756 [2024-07-12 17:10:14.430312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:85608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.756 [2024-07-12 17:10:14.430327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.756 [2024-07-12 17:10:14.430342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:85616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.756 [2024-07-12 17:10:14.430356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.756 [2024-07-12 17:10:14.430372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:85624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.756 [2024-07-12 17:10:14.430386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.756 [2024-07-12 17:10:14.430401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:85632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.756 [2024-07-12 17:10:14.430415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:29.756 [2024-07-12 17:10:14.430430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.756 [2024-07-12 17:10:14.430445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.756 [2024-07-12 17:10:14.430460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:85648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.756 [2024-07-12 17:10:14.430474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.756 [2024-07-12 17:10:14.430489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.756 [2024-07-12 17:10:14.430504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.756 [2024-07-12 17:10:14.430519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:85664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.756 [2024-07-12 17:10:14.430533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.756 [2024-07-12 17:10:14.430555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:85672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.756 [2024-07-12 17:10:14.430570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.756 [2024-07-12 17:10:14.430586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:85680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.756 [2024-07-12 17:10:14.430599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.756 [2024-07-12 17:10:14.430614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:85688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.756 [2024-07-12 17:10:14.430628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.756 [2024-07-12 17:10:14.430643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:85696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.756 [2024-07-12 17:10:14.430657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.756 [2024-07-12 17:10:14.430672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.756 [2024-07-12 17:10:14.430686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.756 [2024-07-12 17:10:14.430701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:85712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.756 [2024-07-12 17:10:14.430730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.756 [2024-07-12 17:10:14.430758] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:85720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.756 [2024-07-12 17:10:14.430774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.756 [2024-07-12 17:10:14.430789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:85728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.756 [2024-07-12 17:10:14.430803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.756 [2024-07-12 17:10:14.430820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:85736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.756 [2024-07-12 17:10:14.430834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.756 [2024-07-12 17:10:14.430850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:85744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.756 [2024-07-12 17:10:14.430864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.756 [2024-07-12 17:10:14.430880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:85752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.756 [2024-07-12 17:10:14.430894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.756 [2024-07-12 17:10:14.430910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:85760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.756 [2024-07-12 17:10:14.430924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.756 [2024-07-12 17:10:14.430939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:85768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.756 [2024-07-12 17:10:14.430957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.756 [2024-07-12 17:10:14.430974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:85840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.756 [2024-07-12 17:10:14.430989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.757 [2024-07-12 17:10:14.431004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:85848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.757 [2024-07-12 17:10:14.431034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.757 [2024-07-12 17:10:14.431050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:85856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.757 [2024-07-12 17:10:14.431064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.757 [2024-07-12 17:10:14.431079] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:85864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.757 [2024-07-12 17:10:14.431093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.757 [2024-07-12 17:10:14.431108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:85872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.757 [2024-07-12 17:10:14.431122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.757 [2024-07-12 17:10:14.431136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:85880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.757 [2024-07-12 17:10:14.431150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.757 [2024-07-12 17:10:14.431165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:85888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.757 [2024-07-12 17:10:14.431180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.757 [2024-07-12 17:10:14.431195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:85896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.757 [2024-07-12 17:10:14.431209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.757 [2024-07-12 17:10:14.431224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:85776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.757 [2024-07-12 17:10:14.431239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.757 [2024-07-12 17:10:14.431254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:85904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.757 [2024-07-12 17:10:14.431268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.757 [2024-07-12 17:10:14.431284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:85912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.757 [2024-07-12 17:10:14.431298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.757 [2024-07-12 17:10:14.431313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:85920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.757 [2024-07-12 17:10:14.431327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.757 [2024-07-12 17:10:14.431346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:85928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.757 [2024-07-12 17:10:14.431360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.757 [2024-07-12 17:10:14.431375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:85936 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.757 [2024-07-12 17:10:14.431390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.757 [2024-07-12 17:10:14.431405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:85944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.757 [2024-07-12 17:10:14.431418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.757 [2024-07-12 17:10:14.431433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:85952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.757 [2024-07-12 17:10:14.431447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.757 [2024-07-12 17:10:14.431462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:85960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.757 [2024-07-12 17:10:14.431476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.757 [2024-07-12 17:10:14.431491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:85968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.757 [2024-07-12 17:10:14.431504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.757 [2024-07-12 17:10:14.431519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:85976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.757 [2024-07-12 17:10:14.431533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.757 [2024-07-12 17:10:14.431548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:85984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.757 [2024-07-12 17:10:14.431562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.757 [2024-07-12 17:10:14.431577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:85992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.757 [2024-07-12 17:10:14.431590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.757 [2024-07-12 17:10:14.431605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:86000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.757 [2024-07-12 17:10:14.431619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.757 [2024-07-12 17:10:14.431633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:86008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.757 [2024-07-12 17:10:14.431646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.757 [2024-07-12 17:10:14.431661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:86016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:29.757 [2024-07-12 17:10:14.431674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.757 [2024-07-12 17:10:14.431689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:86024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.757 [2024-07-12 17:10:14.431706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.757 [2024-07-12 17:10:14.431743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:86032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.757 [2024-07-12 17:10:14.431761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.757 [2024-07-12 17:10:14.431777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:86040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.757 [2024-07-12 17:10:14.431791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.757 [2024-07-12 17:10:14.431807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:86048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.757 [2024-07-12 17:10:14.431820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.757 [2024-07-12 17:10:14.431835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:86056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.757 [2024-07-12 17:10:14.431849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.757 [2024-07-12 17:10:14.431865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:86064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.757 [2024-07-12 17:10:14.431879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.757 [2024-07-12 17:10:14.431893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.757 [2024-07-12 17:10:14.431907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.757 [2024-07-12 17:10:14.431922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:86080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.757 [2024-07-12 17:10:14.431935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.757 [2024-07-12 17:10:14.431950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:86088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.757 [2024-07-12 17:10:14.431964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.757 [2024-07-12 17:10:14.431979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:85784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.757 [2024-07-12 17:10:14.431992] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.757 [2024-07-12 17:10:14.432007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:85792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.757 [2024-07-12 17:10:14.432036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.757 [2024-07-12 17:10:14.432051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:85800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.757 [2024-07-12 17:10:14.432064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.757 [2024-07-12 17:10:14.432079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:85808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.757 [2024-07-12 17:10:14.432092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.757 [2024-07-12 17:10:14.432107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:85816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.757 [2024-07-12 17:10:14.432124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.757 [2024-07-12 17:10:14.432139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:85824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.757 [2024-07-12 17:10:14.432152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.757 [2024-07-12 17:10:14.432167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:85832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.757 [2024-07-12 17:10:14.432180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.757 [2024-07-12 17:10:14.432195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:86096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.757 [2024-07-12 17:10:14.432208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.757 [2024-07-12 17:10:14.432225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:86104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.757 [2024-07-12 17:10:14.432240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.757 [2024-07-12 17:10:14.432255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:86112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.757 [2024-07-12 17:10:14.432269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.757 [2024-07-12 17:10:14.432284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:86120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.757 [2024-07-12 17:10:14.432297] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.758 [2024-07-12 17:10:14.432312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.758 [2024-07-12 17:10:14.432325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.758 [2024-07-12 17:10:14.432340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:86136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.758 [2024-07-12 17:10:14.432353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.758 [2024-07-12 17:10:14.432368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:86144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.758 [2024-07-12 17:10:14.432381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.758 [2024-07-12 17:10:14.432397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:86152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.758 [2024-07-12 17:10:14.432410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.758 [2024-07-12 17:10:14.432425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:86160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.758 [2024-07-12 17:10:14.432438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.758 [2024-07-12 17:10:14.432453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:86168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.758 [2024-07-12 17:10:14.432467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.758 [2024-07-12 17:10:14.432486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:86176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.758 [2024-07-12 17:10:14.432500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.758 [2024-07-12 17:10:14.432515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:86184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.758 [2024-07-12 17:10:14.432528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.758 [2024-07-12 17:10:14.432543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:86192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.758 [2024-07-12 17:10:14.432556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.758 [2024-07-12 17:10:14.432571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:86200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.758 [2024-07-12 17:10:14.432585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.758 [2024-07-12 17:10:14.432599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.758 [2024-07-12 17:10:14.432613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.758 [2024-07-12 17:10:14.432628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:86216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.758 [2024-07-12 17:10:14.432641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.758 [2024-07-12 17:10:14.432656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:86224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.758 [2024-07-12 17:10:14.432670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.758 [2024-07-12 17:10:14.432685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:86232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.758 [2024-07-12 17:10:14.432699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.758 [2024-07-12 17:10:14.432713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:86240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.758 [2024-07-12 17:10:14.432749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.758 [2024-07-12 17:10:14.432766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:86248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.758 [2024-07-12 17:10:14.432781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.758 [2024-07-12 17:10:14.432796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:86256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.758 [2024-07-12 17:10:14.432811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.758 [2024-07-12 17:10:14.432826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:86264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.758 [2024-07-12 17:10:14.432841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.758 [2024-07-12 17:10:14.432857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:86272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.758 [2024-07-12 17:10:14.432876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.758 [2024-07-12 17:10:14.432892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:86280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.758 [2024-07-12 17:10:14.432907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.758 
[2024-07-12 17:10:14.432922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:86288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.758 [2024-07-12 17:10:14.432936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.758 [2024-07-12 17:10:14.432951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:86296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.758 [2024-07-12 17:10:14.432965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.758 [2024-07-12 17:10:14.432981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.758 [2024-07-12 17:10:14.432995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.758 [2024-07-12 17:10:14.433010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:86312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.758 [2024-07-12 17:10:14.433049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.758 [2024-07-12 17:10:14.433066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:86320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.758 [2024-07-12 17:10:14.433080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.758 [2024-07-12 17:10:14.433094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:86328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.758 [2024-07-12 17:10:14.433108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.758 [2024-07-12 17:10:14.433122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:86336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.758 [2024-07-12 17:10:14.433136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.758 [2024-07-12 17:10:14.433150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.758 [2024-07-12 17:10:14.433164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.758 [2024-07-12 17:10:14.433194] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.758 [2024-07-12 17:10:14.433216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86352 len:8 PRP1 0x0 PRP2 0x0 00:21:29.758 [2024-07-12 17:10:14.433230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.758 [2024-07-12 17:10:14.433248] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.758 [2024-07-12 17:10:14.433260] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.758 [2024-07-12 17:10:14.433271] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86360 len:8 PRP1 0x0 PRP2 0x0 00:21:29.758 [2024-07-12 17:10:14.433284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.758 [2024-07-12 17:10:14.433297] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.758 [2024-07-12 17:10:14.433311] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.758 [2024-07-12 17:10:14.433323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86368 len:8 PRP1 0x0 PRP2 0x0 00:21:29.758 [2024-07-12 17:10:14.433335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.758 [2024-07-12 17:10:14.433348] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.758 [2024-07-12 17:10:14.433360] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.758 [2024-07-12 17:10:14.433370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86376 len:8 PRP1 0x0 PRP2 0x0 00:21:29.758 [2024-07-12 17:10:14.433399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.758 [2024-07-12 17:10:14.433413] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.758 [2024-07-12 17:10:14.433424] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.758 [2024-07-12 17:10:14.433435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86384 len:8 PRP1 0x0 PRP2 0x0 00:21:29.758 [2024-07-12 17:10:14.433448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.758 [2024-07-12 17:10:14.433461] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.758 [2024-07-12 17:10:14.433472] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.758 [2024-07-12 17:10:14.433484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86392 len:8 PRP1 0x0 PRP2 0x0 00:21:29.758 [2024-07-12 17:10:14.433496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.758 [2024-07-12 17:10:14.433516] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.758 [2024-07-12 17:10:14.433527] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.758 [2024-07-12 17:10:14.433539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86400 len:8 PRP1 0x0 PRP2 0x0 00:21:29.758 [2024-07-12 17:10:14.433552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.758 [2024-07-12 17:10:14.433565] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.758 [2024-07-12 17:10:14.433576] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.758 [2024-07-12 17:10:14.433587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:86408 len:8 PRP1 0x0 PRP2 0x0 00:21:29.758 [2024-07-12 17:10:14.433600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.758 [2024-07-12 17:10:14.433613] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.758 [2024-07-12 17:10:14.433625] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.758 [2024-07-12 17:10:14.433644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86416 len:8 PRP1 0x0 PRP2 0x0 00:21:29.759 [2024-07-12 17:10:14.433657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.759 [2024-07-12 17:10:14.433671] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.759 [2024-07-12 17:10:14.433681] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.759 [2024-07-12 17:10:14.433693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86424 len:8 PRP1 0x0 PRP2 0x0 00:21:29.759 [2024-07-12 17:10:14.433706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.759 [2024-07-12 17:10:14.433723] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.759 [2024-07-12 17:10:14.433734] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.759 [2024-07-12 17:10:14.433754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86432 len:8 PRP1 0x0 PRP2 0x0 00:21:29.759 [2024-07-12 17:10:14.433767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.759 [2024-07-12 17:10:14.433781] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.759 [2024-07-12 17:10:14.433792] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.759 [2024-07-12 17:10:14.433803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86440 len:8 PRP1 0x0 PRP2 0x0 00:21:29.759 [2024-07-12 17:10:14.433816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.759 [2024-07-12 17:10:14.433829] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.759 [2024-07-12 17:10:14.433840] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.759 [2024-07-12 17:10:14.433852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86448 len:8 PRP1 0x0 PRP2 0x0 00:21:29.759 [2024-07-12 17:10:14.433864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.759 [2024-07-12 17:10:14.433877] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.759 [2024-07-12 17:10:14.433888] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.759 [2024-07-12 17:10:14.433899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86456 len:8 PRP1 0x0 PRP2 
0x0 00:21:29.759 [2024-07-12 17:10:14.433912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.759 [2024-07-12 17:10:14.433931] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.759 [2024-07-12 17:10:14.433942] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.759 [2024-07-12 17:10:14.433954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86464 len:8 PRP1 0x0 PRP2 0x0 00:21:29.759 [2024-07-12 17:10:14.433966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.759 [2024-07-12 17:10:14.433980] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.759 [2024-07-12 17:10:14.433991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.759 [2024-07-12 17:10:14.434002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86472 len:8 PRP1 0x0 PRP2 0x0 00:21:29.759 [2024-07-12 17:10:14.434015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.759 [2024-07-12 17:10:14.434028] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.759 [2024-07-12 17:10:14.434039] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.759 [2024-07-12 17:10:14.434057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86480 len:8 PRP1 0x0 PRP2 0x0 00:21:29.759 [2024-07-12 17:10:14.434070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.759 [2024-07-12 17:10:14.434083] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.759 [2024-07-12 17:10:14.434094] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.759 [2024-07-12 17:10:14.434105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86488 len:8 PRP1 0x0 PRP2 0x0 00:21:29.759 [2024-07-12 17:10:14.434122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.759 [2024-07-12 17:10:14.434135] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.759 [2024-07-12 17:10:14.434146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.759 [2024-07-12 17:10:14.434158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86496 len:8 PRP1 0x0 PRP2 0x0 00:21:29.759 [2024-07-12 17:10:14.434171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.759 [2024-07-12 17:10:14.434183] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.759 [2024-07-12 17:10:14.434194] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.759 [2024-07-12 17:10:14.434205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86504 len:8 PRP1 0x0 PRP2 0x0 00:21:29.759 [2024-07-12 17:10:14.434218] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.759 [2024-07-12 17:10:14.434231] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.759 [2024-07-12 17:10:14.434242] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.759 [2024-07-12 17:10:14.434253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86512 len:8 PRP1 0x0 PRP2 0x0 00:21:29.759 [2024-07-12 17:10:14.434265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.759 [2024-07-12 17:10:14.434278] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.759 [2024-07-12 17:10:14.434289] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.759 [2024-07-12 17:10:14.434300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86520 len:8 PRP1 0x0 PRP2 0x0 00:21:29.759 [2024-07-12 17:10:14.434313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.759 [2024-07-12 17:10:14.434332] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.759 [2024-07-12 17:10:14.434343] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.759 [2024-07-12 17:10:14.434354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86528 len:8 PRP1 0x0 PRP2 0x0 00:21:29.759 [2024-07-12 17:10:14.434367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.759 [2024-07-12 17:10:14.434380] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.759 [2024-07-12 17:10:14.434391] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.759 [2024-07-12 17:10:14.434402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86536 len:8 PRP1 0x0 PRP2 0x0 00:21:29.759 [2024-07-12 17:10:14.434415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.759 [2024-07-12 17:10:14.434428] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.759 [2024-07-12 17:10:14.434439] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.759 [2024-07-12 17:10:14.434454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86544 len:8 PRP1 0x0 PRP2 0x0 00:21:29.759 [2024-07-12 17:10:14.434467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.759 [2024-07-12 17:10:14.434480] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.759 [2024-07-12 17:10:14.434494] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.759 [2024-07-12 17:10:14.434506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86552 len:8 PRP1 0x0 PRP2 0x0 00:21:29.759 [2024-07-12 17:10:14.434519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.759 [2024-07-12 17:10:14.434532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.759 [2024-07-12 17:10:14.434543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.759 [2024-07-12 17:10:14.434554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86560 len:8 PRP1 0x0 PRP2 0x0 00:21:29.759 [2024-07-12 17:10:14.434566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.759 [2024-07-12 17:10:14.434579] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.759 [2024-07-12 17:10:14.434590] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.759 [2024-07-12 17:10:14.434601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86568 len:8 PRP1 0x0 PRP2 0x0 00:21:29.759 [2024-07-12 17:10:14.434614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.759 [2024-07-12 17:10:14.434627] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.759 [2024-07-12 17:10:14.434637] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.759 [2024-07-12 17:10:14.434648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86576 len:8 PRP1 0x0 PRP2 0x0 00:21:29.759 [2024-07-12 17:10:14.434660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.759 [2024-07-12 17:10:14.434673] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.759 [2024-07-12 17:10:14.434684] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.759 [2024-07-12 17:10:14.434695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86584 len:8 PRP1 0x0 PRP2 0x0 00:21:29.759 [2024-07-12 17:10:14.434708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.759 [2024-07-12 17:10:14.434727] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.759 [2024-07-12 17:10:14.434744] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.759 [2024-07-12 17:10:14.434757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86592 len:8 PRP1 0x0 PRP2 0x0 00:21:29.759 [2024-07-12 17:10:14.434770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.759 [2024-07-12 17:10:14.434783] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.759 [2024-07-12 17:10:14.434793] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.759 [2024-07-12 17:10:14.434805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86600 len:8 PRP1 0x0 PRP2 0x0 00:21:29.759 [2024-07-12 17:10:14.434817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:29.759 [2024-07-12 17:10:14.434882] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1fb2c40 was disconnected and freed. reset controller. 00:21:29.759 [2024-07-12 17:10:14.434900] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:29.759 [2024-07-12 17:10:14.434950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.759 [2024-07-12 17:10:14.434968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.759 [2024-07-12 17:10:14.434991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.760 [2024-07-12 17:10:14.435005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.760 [2024-07-12 17:10:14.435019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.760 [2024-07-12 17:10:14.435032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.760 [2024-07-12 17:10:14.435046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.760 [2024-07-12 17:10:14.435059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.760 [2024-07-12 17:10:14.435072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:29.760 [2024-07-12 17:10:14.438314] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:29.760 [2024-07-12 17:10:14.438351] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f8c790 (9): Bad file descriptor 00:21:29.760 [2024-07-12 17:10:14.468770] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:29.760 [2024-07-12 17:10:18.158458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:96400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.760 [2024-07-12 17:10:18.158516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.760 [2024-07-12 17:10:18.158546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:96408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.760 [2024-07-12 17:10:18.158562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.760 [2024-07-12 17:10:18.158579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:96416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.760 [2024-07-12 17:10:18.158594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.760 [2024-07-12 17:10:18.158609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:96424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.760 [2024-07-12 17:10:18.158624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.760 [2024-07-12 17:10:18.158639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:96432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.760 [2024-07-12 17:10:18.158653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.760 [2024-07-12 17:10:18.158668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:96440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.760 [2024-07-12 17:10:18.158682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.760 [2024-07-12 17:10:18.158697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:96448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.760 [2024-07-12 17:10:18.158711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.760 [2024-07-12 17:10:18.158752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.760 [2024-07-12 17:10:18.158768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.760 [2024-07-12 17:10:18.158793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:96472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.760 [2024-07-12 17:10:18.158808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.760 [2024-07-12 17:10:18.158825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:96480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.760 [2024-07-12 17:10:18.158839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.760 [2024-07-12 17:10:18.158857] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:96488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.760 [2024-07-12 17:10:18.158872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.760 [2024-07-12 17:10:18.158887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:96496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.760 [2024-07-12 17:10:18.158901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.760 [2024-07-12 17:10:18.158917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:96504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.760 [2024-07-12 17:10:18.158931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.760 [2024-07-12 17:10:18.158946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:96512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.760 [2024-07-12 17:10:18.158960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.760 [2024-07-12 17:10:18.158976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.760 [2024-07-12 17:10:18.158990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.760 [2024-07-12 17:10:18.159006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.760 [2024-07-12 17:10:18.159024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.760 [2024-07-12 17:10:18.159056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.760 [2024-07-12 17:10:18.159069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.760 [2024-07-12 17:10:18.159092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:96544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.760 [2024-07-12 17:10:18.159106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.760 [2024-07-12 17:10:18.159121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.760 [2024-07-12 17:10:18.159134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.760 [2024-07-12 17:10:18.159150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.760 [2024-07-12 17:10:18.159163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.760 [2024-07-12 17:10:18.159178] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.760 [2024-07-12 17:10:18.159196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.760 [2024-07-12 17:10:18.159212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:96576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.760 [2024-07-12 17:10:18.159225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.760 [2024-07-12 17:10:18.159241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.760 [2024-07-12 17:10:18.159254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.760 [2024-07-12 17:10:18.159269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:96456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.760 [2024-07-12 17:10:18.159283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.760 [2024-07-12 17:10:18.159298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:96592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.760 [2024-07-12 17:10:18.159312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.760 [2024-07-12 17:10:18.159327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:96600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.760 [2024-07-12 17:10:18.159340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.760 [2024-07-12 17:10:18.159355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:96608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.760 [2024-07-12 17:10:18.159368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.760 [2024-07-12 17:10:18.159383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:96616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.760 [2024-07-12 17:10:18.159397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.760 [2024-07-12 17:10:18.159411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:96624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.760 [2024-07-12 17:10:18.159425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.760 [2024-07-12 17:10:18.159440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:96632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.760 [2024-07-12 17:10:18.159453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.760 [2024-07-12 17:10:18.159468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:96640 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.760 [2024-07-12 17:10:18.159482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.760 [2024-07-12 17:10:18.159497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:96648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.761 [2024-07-12 17:10:18.159511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.761 [2024-07-12 17:10:18.159526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:96656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.761 [2024-07-12 17:10:18.159540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.761 [2024-07-12 17:10:18.159559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:96664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.761 [2024-07-12 17:10:18.159574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.761 [2024-07-12 17:10:18.159589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:96672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.761 [2024-07-12 17:10:18.159603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.761 [2024-07-12 17:10:18.159618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:96680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.761 [2024-07-12 17:10:18.159632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.761 [2024-07-12 17:10:18.159647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:96688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.761 [2024-07-12 17:10:18.159660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.761 [2024-07-12 17:10:18.159675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:96696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.761 [2024-07-12 17:10:18.159689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.761 [2024-07-12 17:10:18.159704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:96704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.761 [2024-07-12 17:10:18.159733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.761 [2024-07-12 17:10:18.159758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.761 [2024-07-12 17:10:18.159773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.761 [2024-07-12 17:10:18.159788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:96720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.761 
[2024-07-12 17:10:18.159802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.761 [2024-07-12 17:10:18.159818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:96728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.761 [2024-07-12 17:10:18.159832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.761 [2024-07-12 17:10:18.159848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:96736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.761 [2024-07-12 17:10:18.159862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.761 [2024-07-12 17:10:18.159878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:96744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.761 [2024-07-12 17:10:18.159892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.761 [2024-07-12 17:10:18.159907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:96752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.761 [2024-07-12 17:10:18.159922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.761 [2024-07-12 17:10:18.159937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:96760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.761 [2024-07-12 17:10:18.159951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.761 [2024-07-12 17:10:18.159970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:96768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.761 [2024-07-12 17:10:18.159986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.761 [2024-07-12 17:10:18.160002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:96776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.761 [2024-07-12 17:10:18.160016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.761 [2024-07-12 17:10:18.160031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:96784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.761 [2024-07-12 17:10:18.160060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.761 [2024-07-12 17:10:18.160076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:96792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.761 [2024-07-12 17:10:18.160090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.761 [2024-07-12 17:10:18.160105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:96800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.761 [2024-07-12 17:10:18.160135] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.761 [2024-07-12 17:10:18.160151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:96808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.761 [2024-07-12 17:10:18.160165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.761 [2024-07-12 17:10:18.160180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:96816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.761 [2024-07-12 17:10:18.160194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.761 [2024-07-12 17:10:18.160210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:96824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.761 [2024-07-12 17:10:18.160225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.761 [2024-07-12 17:10:18.160240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:96832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.761 [2024-07-12 17:10:18.160254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.761 [2024-07-12 17:10:18.160270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:96840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.761 [2024-07-12 17:10:18.160284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.761 [2024-07-12 17:10:18.160299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:96848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.761 [2024-07-12 17:10:18.160314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.761 [2024-07-12 17:10:18.160331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:96856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.761 [2024-07-12 17:10:18.160345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.761 [2024-07-12 17:10:18.160360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:96864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.761 [2024-07-12 17:10:18.160378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.761 [2024-07-12 17:10:18.160393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.761 [2024-07-12 17:10:18.160407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.761 [2024-07-12 17:10:18.160423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:96880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.761 [2024-07-12 17:10:18.160436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.761 [2024-07-12 17:10:18.160452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:96888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.761 [2024-07-12 17:10:18.160466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.761 [2024-07-12 17:10:18.160482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:96896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.761 [2024-07-12 17:10:18.160496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.761 [2024-07-12 17:10:18.160511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:96904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.761 [2024-07-12 17:10:18.160525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.761 [2024-07-12 17:10:18.160540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:96912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.761 [2024-07-12 17:10:18.160554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.761 [2024-07-12 17:10:18.160570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:96920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.761 [2024-07-12 17:10:18.160584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.761 [2024-07-12 17:10:18.160600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:96928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.761 [2024-07-12 17:10:18.160613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.761 [2024-07-12 17:10:18.160629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:96936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.761 [2024-07-12 17:10:18.160643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.761 [2024-07-12 17:10:18.160658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:96944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.761 [2024-07-12 17:10:18.160672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.761 [2024-07-12 17:10:18.160688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:96952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.761 [2024-07-12 17:10:18.160702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.761 [2024-07-12 17:10:18.160717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:96960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.761 [2024-07-12 17:10:18.160731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:29.761 [2024-07-12 17:10:18.160761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:96968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.761 [2024-07-12 17:10:18.160776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.761 [2024-07-12 17:10:18.160792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:96976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.761 [2024-07-12 17:10:18.160806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.761 [2024-07-12 17:10:18.160822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:96984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.761 [2024-07-12 17:10:18.160836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.761 [2024-07-12 17:10:18.160851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.762 [2024-07-12 17:10:18.160864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.762 [2024-07-12 17:10:18.160880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:97000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.762 [2024-07-12 17:10:18.160894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.762 [2024-07-12 17:10:18.160910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:97008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.762 [2024-07-12 17:10:18.160924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.762 [2024-07-12 17:10:18.160939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:97016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.762 [2024-07-12 17:10:18.160953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.762 [2024-07-12 17:10:18.160968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:97024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.762 [2024-07-12 17:10:18.160982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.762 [2024-07-12 17:10:18.160997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:97032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.762 [2024-07-12 17:10:18.161011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.762 [2024-07-12 17:10:18.161033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:97040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.762 [2024-07-12 17:10:18.161048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.762 [2024-07-12 
17:10:18.161064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:97048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.762 [2024-07-12 17:10:18.161078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.762 [2024-07-12 17:10:18.161093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:97056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.762 [2024-07-12 17:10:18.161107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.762 [2024-07-12 17:10:18.161123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:97064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.762 [2024-07-12 17:10:18.161139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.762 [2024-07-12 17:10:18.161155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:97072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.762 [2024-07-12 17:10:18.161169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.762 [2024-07-12 17:10:18.161184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:97080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.762 [2024-07-12 17:10:18.161198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.762 [2024-07-12 17:10:18.161213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:97088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.762 [2024-07-12 17:10:18.161227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.762 [2024-07-12 17:10:18.161242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:97096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.762 [2024-07-12 17:10:18.161256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.762 [2024-07-12 17:10:18.161272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:97104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.762 [2024-07-12 17:10:18.161286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.762 [2024-07-12 17:10:18.161301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:97112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.762 [2024-07-12 17:10:18.161315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.762 [2024-07-12 17:10:18.161331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:97120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.762 [2024-07-12 17:10:18.161344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.762 [2024-07-12 17:10:18.161360] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:97128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.762 [2024-07-12 17:10:18.161374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.762 [2024-07-12 17:10:18.161389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:97136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.762 [2024-07-12 17:10:18.161403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.762 [2024-07-12 17:10:18.161418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.762 [2024-07-12 17:10:18.161432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.762 [2024-07-12 17:10:18.161447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:97152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.762 [2024-07-12 17:10:18.161461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.762 [2024-07-12 17:10:18.161476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:97160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.762 [2024-07-12 17:10:18.161490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.762 [2024-07-12 17:10:18.161505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:97168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.762 [2024-07-12 17:10:18.161522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.762 [2024-07-12 17:10:18.161539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:97176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.762 [2024-07-12 17:10:18.161553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.762 [2024-07-12 17:10:18.161568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:97184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.762 [2024-07-12 17:10:18.161582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.762 [2024-07-12 17:10:18.161597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:97192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.762 [2024-07-12 17:10:18.161611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.762 [2024-07-12 17:10:18.161626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:97200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.762 [2024-07-12 17:10:18.161640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.762 [2024-07-12 17:10:18.161656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:84 nsid:1 lba:97208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.762 [2024-07-12 17:10:18.161670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.762 [2024-07-12 17:10:18.161687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:97216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.762 [2024-07-12 17:10:18.161701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.762 [2024-07-12 17:10:18.161716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:97224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.762 [2024-07-12 17:10:18.161747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.762 [2024-07-12 17:10:18.161765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:97232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.762 [2024-07-12 17:10:18.161779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.762 [2024-07-12 17:10:18.161794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:97240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.762 [2024-07-12 17:10:18.161808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.762 [2024-07-12 17:10:18.161824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:97248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.762 [2024-07-12 17:10:18.161838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.762 [2024-07-12 17:10:18.161853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.762 [2024-07-12 17:10:18.161866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.762 [2024-07-12 17:10:18.161882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:97264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.762 [2024-07-12 17:10:18.161896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.762 [2024-07-12 17:10:18.161915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:97272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.762 [2024-07-12 17:10:18.161929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.762 [2024-07-12 17:10:18.161963] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.762 [2024-07-12 17:10:18.161981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97280 len:8 PRP1 0x0 PRP2 0x0 00:21:29.762 [2024-07-12 17:10:18.161994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.762 [2024-07-12 17:10:18.162013] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.762 [2024-07-12 17:10:18.162025] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.762 [2024-07-12 17:10:18.162042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97288 len:8 PRP1 0x0 PRP2 0x0 00:21:29.762 [2024-07-12 17:10:18.162055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.762 [2024-07-12 17:10:18.162069] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.762 [2024-07-12 17:10:18.162080] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.762 [2024-07-12 17:10:18.162091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97296 len:8 PRP1 0x0 PRP2 0x0 00:21:29.762 [2024-07-12 17:10:18.162104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.762 [2024-07-12 17:10:18.162117] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.762 [2024-07-12 17:10:18.162128] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.762 [2024-07-12 17:10:18.162139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97304 len:8 PRP1 0x0 PRP2 0x0 00:21:29.762 [2024-07-12 17:10:18.162152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.762 [2024-07-12 17:10:18.162165] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.762 [2024-07-12 17:10:18.162176] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.762 [2024-07-12 17:10:18.162187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97312 len:8 PRP1 0x0 PRP2 0x0 00:21:29.763 [2024-07-12 17:10:18.162200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.763 [2024-07-12 17:10:18.162214] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.763 [2024-07-12 17:10:18.162225] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.763 [2024-07-12 17:10:18.162236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97320 len:8 PRP1 0x0 PRP2 0x0 00:21:29.763 [2024-07-12 17:10:18.162248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.763 [2024-07-12 17:10:18.162262] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.763 [2024-07-12 17:10:18.162273] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.763 [2024-07-12 17:10:18.162284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97328 len:8 PRP1 0x0 PRP2 0x0 00:21:29.763 [2024-07-12 17:10:18.162297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.763 [2024-07-12 17:10:18.162310] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:21:29.763 [2024-07-12 17:10:18.162325] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.763 [2024-07-12 17:10:18.162337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97336 len:8 PRP1 0x0 PRP2 0x0 00:21:29.763 [2024-07-12 17:10:18.162350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.763 [2024-07-12 17:10:18.162363] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.763 [2024-07-12 17:10:18.162374] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.763 [2024-07-12 17:10:18.162386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97344 len:8 PRP1 0x0 PRP2 0x0 00:21:29.763 [2024-07-12 17:10:18.162398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.763 [2024-07-12 17:10:18.162412] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.763 [2024-07-12 17:10:18.162423] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.763 [2024-07-12 17:10:18.162435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97352 len:8 PRP1 0x0 PRP2 0x0 00:21:29.763 [2024-07-12 17:10:18.162448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.763 [2024-07-12 17:10:18.162462] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.763 [2024-07-12 17:10:18.162473] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.763 [2024-07-12 17:10:18.162484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97360 len:8 PRP1 0x0 PRP2 0x0 00:21:29.763 [2024-07-12 17:10:18.162497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.763 [2024-07-12 17:10:18.162510] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.763 [2024-07-12 17:10:18.162521] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.763 [2024-07-12 17:10:18.162533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97368 len:8 PRP1 0x0 PRP2 0x0 00:21:29.763 [2024-07-12 17:10:18.162546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.763 [2024-07-12 17:10:18.162559] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.763 [2024-07-12 17:10:18.162570] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.763 [2024-07-12 17:10:18.162583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97376 len:8 PRP1 0x0 PRP2 0x0 00:21:29.763 [2024-07-12 17:10:18.162596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.763 [2024-07-12 17:10:18.162610] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.763 [2024-07-12 
17:10:18.162621] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.763 [2024-07-12 17:10:18.162632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97384 len:8 PRP1 0x0 PRP2 0x0 00:21:29.763 [2024-07-12 17:10:18.162646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.763 [2024-07-12 17:10:18.162659] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.763 [2024-07-12 17:10:18.162670] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.763 [2024-07-12 17:10:18.162681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97392 len:8 PRP1 0x0 PRP2 0x0 00:21:29.763 [2024-07-12 17:10:18.162694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.763 [2024-07-12 17:10:18.162711] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.763 [2024-07-12 17:10:18.162723] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.763 [2024-07-12 17:10:18.162745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97400 len:8 PRP1 0x0 PRP2 0x0 00:21:29.763 [2024-07-12 17:10:18.162760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.763 [2024-07-12 17:10:18.162774] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.763 [2024-07-12 17:10:18.162785] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.763 [2024-07-12 17:10:18.162797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97408 len:8 PRP1 0x0 PRP2 0x0 00:21:29.763 [2024-07-12 17:10:18.162810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.763 [2024-07-12 17:10:18.162823] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.763 [2024-07-12 17:10:18.162834] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.763 [2024-07-12 17:10:18.162846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97416 len:8 PRP1 0x0 PRP2 0x0 00:21:29.763 [2024-07-12 17:10:18.162860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.763 [2024-07-12 17:10:18.162922] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21572f0 was disconnected and freed. reset controller. 
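The burst of notices above is the host-side NVMe driver draining I/O qpair 1 during failover: every outstanding WRITE is completed with ABORTED - SQ DELETION (status code type 00, status code 08) because the submission queue is being torn down, and the remaining queued requests are then completed manually ("aborting queued i/o" / "Command completed manually") before the qpair is disconnected and freed. A minimal sketch, assuming only the public SPDK NVMe API (spdk/nvme.h), of how an application's completion callback could classify these completions and mark the I/O for resubmission after the controller reset; the callback name, struct my_io, and the resubmission flag are illustrative assumptions, not code from this test run.

/*
 * Illustrative sketch only -- not part of this test. Classifies the
 * "ABORTED - SQ DELETION" (sct 00 / sc 08) completions printed above as a
 * transient, retryable condition caused by the qpair teardown during failover.
 */
#include <inttypes.h>
#include <stdbool.h>
#include <stdio.h>
#include "spdk/nvme.h"

struct my_io {
	void *buf;
	uint64_t lba;
	uint32_t lba_count;
	bool needs_resubmit;   /* resubmit on the new qpair after reset */
};

static void
write_complete(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	struct my_io *io = cb_arg;

	if (!spdk_nvme_cpl_is_error(cpl)) {
		return;   /* write reached the target normally */
	}

	if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		/* The submission queue was deleted while this command was
		 * outstanding (failover in progress); retry after reset. */
		io->needs_resubmit = true;
		return;
	}

	fprintf(stderr, "write lba %" PRIu64 " failed: sct %d sc %d\n",
		io->lba, cpl->status.sct, cpl->status.sc);
}

In a real application this callback would be passed as the cb_fn argument of spdk_nvme_ns_cmd_write(), with the struct my_io as cb_arg.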
00:21:29.763 [2024-07-12 17:10:18.162940] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:21:29.763 [2024-07-12 17:10:18.162975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:29.763 [2024-07-12 17:10:18.162994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:29.763 [2024-07-12 17:10:18.163010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:29.763 [2024-07-12 17:10:18.163033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:29.763 [2024-07-12 17:10:18.163048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:29.763 [2024-07-12 17:10:18.163062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:29.763 [2024-07-12 17:10:18.163076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:29.763 [2024-07-12 17:10:18.163089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:29.763 [2024-07-12 17:10:18.163103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:29.763 [2024-07-12 17:10:18.166346] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:29.763 [2024-07-12 17:10:18.166385] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f8c790 (9): Bad file descriptor
00:21:29.763 [2024-07-12 17:10:18.240017] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
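Once the qpair is freed, the bdev_nvme module fails over to the 10.0.0.2:4422 path and resets the controller: the admin queue's pending ASYNC EVENT REQUESTs are aborted the same way, the stale TCP qpair can no longer be flushed (Bad file descriptor), and the reset then completes successfully before the next burst of aborted writes at 17:10:22 exercises the same path again. For reference, a hedged sketch of the equivalent sequence driven directly through the public SPDK API; reset_and_reopen(), its error handling, and the default qpair options are assumptions for illustration, not the bdev_nvme implementation used by this test.

/*
 * Illustrative sketch only, assuming the public SPDK API (spdk/nvme.h):
 * drain and release the old I/O qpair, reset the controller (on NVMe-oF this
 * re-establishes the transport connection), then allocate a fresh qpair on
 * which the writes that completed with ABORTED - SQ DELETION are resubmitted.
 */
#include <stdio.h>
#include "spdk/nvme.h"

static struct spdk_nvme_qpair *
reset_and_reopen(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *old_qpair)
{
	/* Drain any completions still sitting on the old qpair, then free it. */
	spdk_nvme_qpair_process_completions(old_qpair, 0);
	spdk_nvme_ctrlr_free_io_qpair(old_qpair);

	/* Reset the controller; corresponds to the "resetting controller" /
	 * "Resetting controller successful" notices above. */
	if (spdk_nvme_ctrlr_reset(ctrlr) != 0) {
		fprintf(stderr, "controller reset failed\n");
		return NULL;
	}

	/* New I/O qpair with default options; the caller resubmits the
	 * aborted writes on it. */
	return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
}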
00:21:29.763 [2024-07-12 17:10:22.733628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:44544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.763 [2024-07-12 17:10:22.733680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.763 [2024-07-12 17:10:22.733715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:44552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.763 [2024-07-12 17:10:22.733766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.763 [2024-07-12 17:10:22.733787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:44560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.763 [2024-07-12 17:10:22.733803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.763 [2024-07-12 17:10:22.733819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:44568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.763 [2024-07-12 17:10:22.733834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.763 [2024-07-12 17:10:22.733849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:44576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.763 [2024-07-12 17:10:22.733863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.763 [2024-07-12 17:10:22.733879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:44584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.763 [2024-07-12 17:10:22.733893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.763 [2024-07-12 17:10:22.733909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:44592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.763 [2024-07-12 17:10:22.733923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.763 [2024-07-12 17:10:22.733938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:44600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.763 [2024-07-12 17:10:22.733952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.763 [2024-07-12 17:10:22.733969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:44608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.763 [2024-07-12 17:10:22.733983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.763 [2024-07-12 17:10:22.733999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:44616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.763 [2024-07-12 17:10:22.734013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.763 [2024-07-12 17:10:22.734029] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:44624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.763 [2024-07-12 17:10:22.734054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.763 [2024-07-12 17:10:22.734086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:44632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.763 [2024-07-12 17:10:22.734100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.763 [2024-07-12 17:10:22.734123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:44640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.763 [2024-07-12 17:10:22.734136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.763 [2024-07-12 17:10:22.734151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:44648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.764 [2024-07-12 17:10:22.734165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-07-12 17:10:22.734184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:44656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.764 [2024-07-12 17:10:22.734198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-07-12 17:10:22.734213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:44664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.764 [2024-07-12 17:10:22.734227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-07-12 17:10:22.734242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:44672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.764 [2024-07-12 17:10:22.734255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-07-12 17:10:22.734272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.764 [2024-07-12 17:10:22.734285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-07-12 17:10:22.734301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:44688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.764 [2024-07-12 17:10:22.734315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-07-12 17:10:22.734331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:44696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.764 [2024-07-12 17:10:22.734345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-07-12 17:10:22.734360] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:44704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.764 [2024-07-12 17:10:22.734374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-07-12 17:10:22.734389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:44712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.764 [2024-07-12 17:10:22.734402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-07-12 17:10:22.734417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:44720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.764 [2024-07-12 17:10:22.734431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-07-12 17:10:22.734446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:44728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.764 [2024-07-12 17:10:22.734459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-07-12 17:10:22.734474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:44736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.764 [2024-07-12 17:10:22.734488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-07-12 17:10:22.734503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:44744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.764 [2024-07-12 17:10:22.734517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-07-12 17:10:22.734532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.764 [2024-07-12 17:10:22.734552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-07-12 17:10:22.734569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:44760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.764 [2024-07-12 17:10:22.734583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-07-12 17:10:22.734598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:44768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.764 [2024-07-12 17:10:22.734611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-07-12 17:10:22.734626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:44776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.764 [2024-07-12 17:10:22.734640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-07-12 17:10:22.734655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:44784 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.764 [2024-07-12 17:10:22.734669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-07-12 17:10:22.734685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:44792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.764 [2024-07-12 17:10:22.734698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-07-12 17:10:22.734714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:44800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.764 [2024-07-12 17:10:22.734733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-07-12 17:10:22.734773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:44808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.764 [2024-07-12 17:10:22.734789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-07-12 17:10:22.734804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:44816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.764 [2024-07-12 17:10:22.734819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-07-12 17:10:22.734834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:44824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.764 [2024-07-12 17:10:22.734848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-07-12 17:10:22.734863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:44832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.764 [2024-07-12 17:10:22.734878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-07-12 17:10:22.734893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:44840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.764 [2024-07-12 17:10:22.734907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-07-12 17:10:22.734923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.764 [2024-07-12 17:10:22.734937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-07-12 17:10:22.734956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:44856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.764 [2024-07-12 17:10:22.734971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-07-12 17:10:22.734987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:44864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:29.764 [2024-07-12 17:10:22.735001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-07-12 17:10:22.735017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:44872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.764 [2024-07-12 17:10:22.735034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-07-12 17:10:22.735049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:44880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.764 [2024-07-12 17:10:22.735079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-07-12 17:10:22.735101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:44888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.764 [2024-07-12 17:10:22.735114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-07-12 17:10:22.735129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:44896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.764 [2024-07-12 17:10:22.735144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-07-12 17:10:22.735159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:44904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.764 [2024-07-12 17:10:22.735172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-07-12 17:10:22.735187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:44912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.764 [2024-07-12 17:10:22.735201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-07-12 17:10:22.735216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:44920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.764 [2024-07-12 17:10:22.735229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-07-12 17:10:22.735244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:44928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.764 [2024-07-12 17:10:22.735257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-07-12 17:10:22.735272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:44936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.764 [2024-07-12 17:10:22.735286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-07-12 17:10:22.735300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:44944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.764 [2024-07-12 17:10:22.735314] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-07-12 17:10:22.735329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:44952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.765 [2024-07-12 17:10:22.735343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-07-12 17:10:22.735361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:44960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.765 [2024-07-12 17:10:22.735376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-07-12 17:10:22.735391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:44968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.765 [2024-07-12 17:10:22.735404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-07-12 17:10:22.735419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:44976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.765 [2024-07-12 17:10:22.735432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-07-12 17:10:22.735447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:44984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.765 [2024-07-12 17:10:22.735461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-07-12 17:10:22.735476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:44992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.765 [2024-07-12 17:10:22.735489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-07-12 17:10:22.735504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:45000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.765 [2024-07-12 17:10:22.735518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-07-12 17:10:22.735533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:45008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.765 [2024-07-12 17:10:22.735546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-07-12 17:10:22.735561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:45016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.765 [2024-07-12 17:10:22.735574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-07-12 17:10:22.735589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:45024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.765 [2024-07-12 17:10:22.735602] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-07-12 17:10:22.735617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:45032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.765 [2024-07-12 17:10:22.735630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-07-12 17:10:22.735645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:45040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.765 [2024-07-12 17:10:22.735658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-07-12 17:10:22.735688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:45048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.765 [2024-07-12 17:10:22.735703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-07-12 17:10:22.735719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:45056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.765 [2024-07-12 17:10:22.735749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-07-12 17:10:22.735767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:45064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.765 [2024-07-12 17:10:22.735782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-07-12 17:10:22.735798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:45072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.765 [2024-07-12 17:10:22.735812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-07-12 17:10:22.735828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:45080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.765 [2024-07-12 17:10:22.735842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-07-12 17:10:22.735857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.765 [2024-07-12 17:10:22.735871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-07-12 17:10:22.735887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:45096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.765 [2024-07-12 17:10:22.735901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-07-12 17:10:22.735916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:45104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.765 [2024-07-12 17:10:22.735930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-07-12 17:10:22.735946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:45112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.765 [2024-07-12 17:10:22.735960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-07-12 17:10:22.735976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:45120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.765 [2024-07-12 17:10:22.735990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-07-12 17:10:22.736005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:45128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.765 [2024-07-12 17:10:22.736029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-07-12 17:10:22.736044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:45136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.765 [2024-07-12 17:10:22.736059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-07-12 17:10:22.736074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:44136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-07-12 17:10:22.736088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-07-12 17:10:22.736103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:44144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-07-12 17:10:22.736118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-07-12 17:10:22.736137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:44152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-07-12 17:10:22.736151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-07-12 17:10:22.736167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:44160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-07-12 17:10:22.736181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-07-12 17:10:22.736197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:44168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-07-12 17:10:22.736211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-07-12 17:10:22.736227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:44176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-07-12 17:10:22.736240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 
[2024-07-12 17:10:22.736257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:44184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-07-12 17:10:22.736271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-07-12 17:10:22.736286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:44192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-07-12 17:10:22.736300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-07-12 17:10:22.736315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:44200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-07-12 17:10:22.736329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-07-12 17:10:22.736344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:44208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-07-12 17:10:22.736358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-07-12 17:10:22.736373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:44216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-07-12 17:10:22.736387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-07-12 17:10:22.736403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:44224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-07-12 17:10:22.736417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-07-12 17:10:22.736432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:44232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-07-12 17:10:22.736446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-07-12 17:10:22.736462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:44240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-07-12 17:10:22.736476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-07-12 17:10:22.736492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:44248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-07-12 17:10:22.736509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-07-12 17:10:22.736525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:45144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.765 [2024-07-12 17:10:22.736539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-07-12 17:10:22.736555] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:44256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-07-12 17:10:22.736569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-07-12 17:10:22.736585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:44264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-07-12 17:10:22.736598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-07-12 17:10:22.736614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:44272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-07-12 17:10:22.736627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-07-12 17:10:22.736643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:44280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-07-12 17:10:22.736657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-07-12 17:10:22.736672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:44288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-07-12 17:10:22.736686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-07-12 17:10:22.736701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:44296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-07-12 17:10:22.736715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-07-12 17:10:22.736757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:44304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-07-12 17:10:22.736774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-07-12 17:10:22.736790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:44312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-07-12 17:10:22.736804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-07-12 17:10:22.736819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:44320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-07-12 17:10:22.736833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-07-12 17:10:22.736849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:44328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-07-12 17:10:22.736863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-07-12 17:10:22.736879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:58 nsid:1 lba:44336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-07-12 17:10:22.736893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-07-12 17:10:22.736908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:44344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-07-12 17:10:22.736926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-07-12 17:10:22.736942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:44352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-07-12 17:10:22.736956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-07-12 17:10:22.736971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:44360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-07-12 17:10:22.736986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-07-12 17:10:22.737002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:44368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-07-12 17:10:22.737024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-07-12 17:10:22.737040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:44376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-07-12 17:10:22.737054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-07-12 17:10:22.737070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-07-12 17:10:22.737088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-07-12 17:10:22.737104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:44392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-07-12 17:10:22.737117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-07-12 17:10:22.737133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:44400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-07-12 17:10:22.737146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-07-12 17:10:22.737161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:44408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-07-12 17:10:22.737175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-07-12 17:10:22.737191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:44416 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-07-12 17:10:22.737205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-07-12 17:10:22.737220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:44424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-07-12 17:10:22.737234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-07-12 17:10:22.737254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:44432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-07-12 17:10:22.737269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-07-12 17:10:22.737284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:44440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-07-12 17:10:22.737298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-07-12 17:10:22.737318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:44448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-07-12 17:10:22.737333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-07-12 17:10:22.737349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:44456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-07-12 17:10:22.737363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-07-12 17:10:22.737379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:44464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-07-12 17:10:22.737392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-07-12 17:10:22.737408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:44472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-07-12 17:10:22.737422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-07-12 17:10:22.737437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:44480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-07-12 17:10:22.737451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-07-12 17:10:22.737466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:44488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-07-12 17:10:22.737480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-07-12 17:10:22.737496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:44496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:29.766 [2024-07-12 17:10:22.737510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-07-12 17:10:22.737525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:44504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-07-12 17:10:22.737539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-07-12 17:10:22.737554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:44512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-07-12 17:10:22.737568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-07-12 17:10:22.737583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:44520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-07-12 17:10:22.737597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-07-12 17:10:22.737613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:44528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-07-12 17:10:22.737626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-07-12 17:10:22.737642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:44536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-07-12 17:10:22.737656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-07-12 17:10:22.737670] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbc750 is same with the state(5) to be set 00:21:29.766 [2024-07-12 17:10:22.737692] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:29.766 [2024-07-12 17:10:22.737706] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:29.766 [2024-07-12 17:10:22.737729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45152 len:8 PRP1 0x0 PRP2 0x0 00:21:29.766 [2024-07-12 17:10:22.737750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-07-12 17:10:22.737814] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1fbc750 was disconnected and freed. reset controller. 
00:21:29.766 [2024-07-12 17:10:22.737833] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:21:29.766 [2024-07-12 17:10:22.737870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.766 [2024-07-12 17:10:22.737898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-07-12 17:10:22.737913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.766 [2024-07-12 17:10:22.737927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-07-12 17:10:22.737941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.766 [2024-07-12 17:10:22.737954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-07-12 17:10:22.737968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.766 [2024-07-12 17:10:22.737981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-07-12 17:10:22.737994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:29.766 [2024-07-12 17:10:22.741240] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:29.766 [2024-07-12 17:10:22.741280] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f8c790 (9): Bad file descriptor 00:21:29.766 [2024-07-12 17:10:22.785352] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
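The failover notice above means the initiator's bdev layer dropped the broken path (10.0.0.2:4422) and reconnected on an alternate path (10.0.0.2:4420) registered for the same controller name. The first bdevperf instance was presumably configured the same way as the second one traced further down; condensed, that path list is built by attaching the same bdev name once per portal (a sketch only, with $SPDK standing in for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk checkout and all values taken from this log):

  # register one transport ID per portal under the same controller name (NVMe0)
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

With three trids attached to NVMe0, breaking or detaching the active one (as this test does) is what produces the bdev_nvme_failover_trid and "Resetting controller successful" messages seen here.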
00:21:29.766 
00:21:29.766 Latency(us) 
00:21:29.766 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:21:29.766 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 
00:21:29.767 Verification LBA range: start 0x0 length 0x4000 
00:21:29.767 NVMe0n1 : 15.01 8815.93 34.44 373.54 0.00 13901.61 558.27 15825.73 
00:21:29.767 =================================================================================================================== 
00:21:29.767 Total : 8815.93 34.44 373.54 0.00 13901.61 558.27 15825.73 
00:21:29.767 Received shutdown signal, test time was about 15.000000 seconds 
00:21:29.767 
00:21:29.767 Latency(us) 
00:21:29.767 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:21:29.767 =================================================================================================================== 
00:21:29.767 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:21:29.767 17:10:28 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 
00:21:29.767 17:10:28 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 
00:21:29.767 17:10:28 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 
00:21:29.767 17:10:28 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1188505 
00:21:29.767 17:10:28 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 
00:21:29.767 17:10:28 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1188505 /var/tmp/bdevperf.sock 
00:21:29.767 17:10:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1188505 ']' 
00:21:29.767 17:10:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:21:29.767 17:10:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 
00:21:29.767 17:10:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:29.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
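Between the two runs the script checks the first run's output and then relaunches bdevperf. In plain shell the two steps traced above look roughly like this (a sketch: $SPDK again abbreviates the workspace spdk checkout, the expected count of 3 matches the reset cycles in this run, try.txt is the log file the trace later cats and removes, and the backgrounding with $! stands in for the script's own helpers):

  # the 15s run must have logged exactly three successful controller resets
  count=$(grep -c 'Resetting controller successful' try.txt)
  (( count != 3 )) && exit 1

  # relaunch bdevperf in RPC-server mode (-z): it idles on /var/tmp/bdevperf.sock
  # until controllers are attached and perform_tests is issued over that socket
  $SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!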
00:21:29.767 17:10:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:29.767 17:10:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:29.767 17:10:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:29.767 17:10:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:29.767 17:10:28 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:29.767 [2024-07-12 17:10:29.176250] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:29.767 17:10:29 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:29.767 [2024-07-12 17:10:29.437049] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:30.024 17:10:29 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:30.281 NVMe0n1 00:21:30.281 17:10:29 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:30.849 00:21:30.849 17:10:30 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:31.108 00:21:31.108 17:10:30 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:31.108 17:10:30 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:21:31.366 17:10:30 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:31.623 17:10:31 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:21:34.905 17:10:34 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:34.905 17:10:34 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:21:34.905 17:10:34 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1189175 00:21:34.906 17:10:34 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:34.906 17:10:34 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 1189175 00:21:36.281 0 00:21:36.281 17:10:35 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:36.281 [2024-07-12 17:10:28.618911] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
00:21:36.281 [2024-07-12 17:10:28.619002] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1188505 ] 00:21:36.281 EAL: No free 2048 kB hugepages reported on node 1 00:21:36.281 [2024-07-12 17:10:28.680581] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.281 [2024-07-12 17:10:28.787874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.281 [2024-07-12 17:10:31.179503] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:36.281 [2024-07-12 17:10:31.179602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.281 [2024-07-12 17:10:31.179625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.281 [2024-07-12 17:10:31.179641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.281 [2024-07-12 17:10:31.179654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.281 [2024-07-12 17:10:31.179671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.281 [2024-07-12 17:10:31.179684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.281 [2024-07-12 17:10:31.179698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.281 [2024-07-12 17:10:31.179711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.281 [2024-07-12 17:10:31.179753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:36.281 [2024-07-12 17:10:31.179798] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:36.281 [2024-07-12 17:10:31.179829] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19f9790 (9): Bad file descriptor 00:21:36.281 [2024-07-12 17:10:31.233266] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:36.281 Running I/O for 1 seconds... 
00:21:36.281 00:21:36.281 Latency(us) 00:21:36.281 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:36.281 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:36.281 Verification LBA range: start 0x0 length 0x4000 00:21:36.281 NVMe0n1 : 1.01 8946.59 34.95 0.00 0.00 14246.73 3070.48 11942.12 00:21:36.281 =================================================================================================================== 00:21:36.281 Total : 8946.59 34.95 0.00 0.00 14246.73 3070.48 11942.12 00:21:36.281 17:10:35 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:36.281 17:10:35 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:21:36.281 17:10:35 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:36.539 17:10:36 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:36.539 17:10:36 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:21:36.797 17:10:36 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:37.055 17:10:36 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:21:40.339 17:10:39 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:40.339 17:10:39 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:21:40.339 17:10:40 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 1188505 00:21:40.339 17:10:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1188505 ']' 00:21:40.339 17:10:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1188505 00:21:40.339 17:10:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:21:40.339 17:10:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:40.339 17:10:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1188505 00:21:40.595 17:10:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:40.595 17:10:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:40.595 17:10:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1188505' 00:21:40.595 killing process with pid 1188505 00:21:40.595 17:10:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1188505 00:21:40.595 17:10:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1188505 00:21:40.595 17:10:40 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:21:40.595 17:10:40 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:40.853 17:10:40 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:21:40.853 
17:10:40 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:40.853 17:10:40 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:21:40.853 17:10:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:40.853 17:10:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:21:40.853 17:10:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:40.853 17:10:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:21:40.853 17:10:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:40.853 17:10:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:40.853 rmmod nvme_tcp 00:21:41.112 rmmod nvme_fabrics 00:21:41.112 rmmod nvme_keyring 00:21:41.112 17:10:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:41.112 17:10:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:21:41.112 17:10:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:21:41.112 17:10:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1186239 ']' 00:21:41.112 17:10:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1186239 00:21:41.112 17:10:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1186239 ']' 00:21:41.112 17:10:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1186239 00:21:41.112 17:10:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:21:41.112 17:10:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:41.112 17:10:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1186239 00:21:41.112 17:10:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:41.112 17:10:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:41.112 17:10:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1186239' 00:21:41.112 killing process with pid 1186239 00:21:41.112 17:10:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1186239 00:21:41.112 17:10:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1186239 00:21:41.370 17:10:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:41.370 17:10:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:41.370 17:10:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:41.370 17:10:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:41.370 17:10:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:41.370 17:10:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:41.370 17:10:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:41.370 17:10:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.276 17:10:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:43.276 00:21:43.276 real 0m35.357s 00:21:43.276 user 2m4.781s 00:21:43.276 sys 0m6.256s 00:21:43.276 17:10:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:43.276 17:10:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
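The nvmftestfini sequence above reduces to unloading the initiator-side kernel modules, stopping the nvmf_tgt started for this test, and tearing down the test network namespace. Approximately (a sketch, not the helper itself; the namespace and interface names appear later in this log, and the exact cleanup _remove_spdk_ns performs is an assumption here):

  modprobe -v -r nvme-tcp           # the rmmod lines above show this pulling out nvme_tcp, nvme_fabrics and nvme_keyring
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                   # 1186239 in this run
  ip netns delete cvl_0_0_ns_spdk   # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush cvl_0_1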
00:21:43.276 ************************************ 00:21:43.276 END TEST nvmf_failover 00:21:43.276 ************************************ 00:21:43.276 17:10:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:43.276 17:10:42 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:43.276 17:10:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:43.276 17:10:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:43.276 17:10:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:43.534 ************************************ 00:21:43.534 START TEST nvmf_host_discovery 00:21:43.534 ************************************ 00:21:43.534 17:10:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:43.534 * Looking for test storage... 00:21:43.534 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:43.534 17:10:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:43.534 17:10:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:21:43.534 17:10:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:43.534 17:10:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:43.534 17:10:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:43.534 17:10:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:43.534 17:10:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:43.534 17:10:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:43.534 17:10:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:43.534 17:10:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:43.534 17:10:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:43.534 17:10:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:43.534 17:10:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:43.534 17:10:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:21:43.534 17:10:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:43.534 17:10:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:43.534 17:10:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:43.534 17:10:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:43.534 17:10:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:43.534 17:10:43 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:43.534 17:10:43 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:43.534 17:10:43 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:43.534 17:10:43 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.535 17:10:43 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.535 17:10:43 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.535 17:10:43 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:21:43.535 17:10:43 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.535 17:10:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:21:43.535 17:10:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:43.535 17:10:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:43.535 17:10:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:43.535 17:10:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:43.535 17:10:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:43.535 17:10:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:43.535 17:10:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:43.535 17:10:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:43.535 17:10:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:21:43.535 17:10:43 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:21:43.535 17:10:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:21:43.535 17:10:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:21:43.535 17:10:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:21:43.535 17:10:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:21:43.535 17:10:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:21:43.535 17:10:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:43.535 17:10:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:43.535 17:10:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:43.535 17:10:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:43.535 17:10:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:43.535 17:10:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.535 17:10:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:43.535 17:10:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.535 17:10:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:43.535 17:10:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:43.535 17:10:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:21:43.535 17:10:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:46.062 17:10:45 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:46.062 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:21:46.062 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:46.062 17:10:45 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:46.062 Found net devices under 0000:84:00.0: cvl_0_0 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:46.062 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:46.063 Found net devices under 0000:84:00.1: cvl_0_1 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:46.063 17:10:45 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:46.063 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:46.063 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:21:46.063 00:21:46.063 --- 10.0.0.2 ping statistics --- 00:21:46.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.063 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:46.063 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:46.063 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:21:46.063 00:21:46.063 --- 10.0.0.1 ping statistics --- 00:21:46.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.063 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1191798 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1191798 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1191798 ']' 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:46.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.063 [2024-07-12 17:10:45.368107] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:21:46.063 [2024-07-12 17:10:45.368180] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:46.063 EAL: No free 2048 kB hugepages reported on node 1 00:21:46.063 [2024-07-12 17:10:45.431843] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.063 [2024-07-12 17:10:45.542468] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:46.063 [2024-07-12 17:10:45.542520] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:46.063 [2024-07-12 17:10:45.542534] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:46.063 [2024-07-12 17:10:45.542545] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:46.063 [2024-07-12 17:10:45.542554] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
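Once the target is up inside the namespace, the rpc_cmd calls traced below boil down to a small discovery setup: the target exposes a discovery-subsystem listener plus two null bdevs used later in the test, and a second nvmf_tgt acting as the host follows that discovery service. Stripped of the xtrace noise (a sketch; rpc_cmd in the trace is a wrapper around rpc.py, $SPDK abbreviates the workspace spdk checkout, and addresses, ports and NQNs are taken from this log):

  # target side (pid 1191798): TCP transport, discovery listener on 8009, two null bdevs
  $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  $SPDK/scripts/rpc.py bdev_null_create null0 1000 512
  $SPDK/scripts/rpc.py bdev_null_create null1 1000 512

  # host side (second nvmf_tgt on /tmp/host.sock): attach to the discovery service and track what it reports
  $SPDK/scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test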
00:21:46.063 [2024-07-12 17:10:45.542579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.063 [2024-07-12 17:10:45.685783] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.063 [2024-07-12 17:10:45.693925] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.063 null0 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.063 null1 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1191935 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 1191935 /tmp/host.sock 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1191935 ']' 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:46.063 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:46.063 17:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.322 [2024-07-12 17:10:45.770989] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:21:46.322 [2024-07-12 17:10:45.771072] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1191935 ] 00:21:46.322 EAL: No free 2048 kB hugepages reported on node 1 00:21:46.322 [2024-07-12 17:10:45.831776] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.322 [2024-07-12 17:10:45.941141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
null0 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.581 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.838 [2024-07-12 17:10:46.367770] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:46.838 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.096 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:21:47.096 17:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:21:47.665 [2024-07-12 17:10:47.083181] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:47.665 [2024-07-12 17:10:47.083209] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:47.665 [2024-07-12 17:10:47.083232] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:47.665 [2024-07-12 17:10:47.169480] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:47.923 [2024-07-12 17:10:47.390686] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:21:47.923 [2024-07-12 17:10:47.390744] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:47.923 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:47.923 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:47.923 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:47.923 17:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:47.923 17:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:47.923 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.923 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.923 17:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:47.923 17:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:47.923 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.923 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.923 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:47.923 17:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:47.923 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:47.923 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:47.923 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:47.923 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:21:47.923 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:47.923 17:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:47.923 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.923 17:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:47.923 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.923 17:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:47.923 17:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:47.923 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:48.182 17:10:47 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.182 [2024-07-12 17:10:47.827843] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:48.182 [2024-07-12 17:10:47.828639] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:48.182 [2024-07-12 17:10:47.828676] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:48.182 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:48.183 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:48.183 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:48.183 17:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:48.183 17:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:48.183 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.183 17:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:48.183 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.183 17:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:48.183 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.440 17:10:47 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.440 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:48.440 17:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:48.440 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:48.440 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:48.440 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:48.440 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:48.441 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:48.441 17:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:48.441 17:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:48.441 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.441 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.441 17:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:48.441 17:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:48.441 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.441 [2024-07-12 17:10:47.916360] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:21:48.441 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:48.441 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:48.441 17:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:48.441 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:48.441 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:48.441 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:48.441 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:48.441 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:48.441 17:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:48.441 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.441 17:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:48.441 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.441 17:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:48.441 17:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:48.441 17:10:47 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.441 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:21:48.441 17:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:21:48.699 [2024-07-12 17:10:48.178584] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:48.699 [2024-07-12 17:10:48.178608] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:48.699 [2024-07-12 17:10:48.178617] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:49.633 17:10:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:49.633 17:10:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:49.633 17:10:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:49.633 17:10:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:49.633 17:10:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:49.633 17:10:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.633 17:10:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:49.633 17:10:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.633 17:10:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:49.633 17:10:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.633 17:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:49.633 17:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:49.633 17:10:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:21:49.633 17:10:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:49.633 17:10:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:49.633 17:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:49.633 17:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:49.633 17:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:49.633 17:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:49.633 17:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:49.633 17:10:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:49.633 17:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.633 17:10:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:49.633 17:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.633 17:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.633 17:10:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:49.633 17:10:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:49.633 17:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:49.633 17:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:49.633 17:10:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:49.633 17:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.633 17:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.633 [2024-07-12 17:10:49.051914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.633 [2024-07-12 17:10:49.051957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.633 [2024-07-12 17:10:49.051986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.633 [2024-07-12 17:10:49.052000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.633 [2024-07-12 17:10:49.052031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.633 [2024-07-12 17:10:49.052047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.633 [2024-07-12 17:10:49.052061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.633 [2024-07-12 17:10:49.052073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.633 [2024-07-12 17:10:49.052101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f75210 is same with the state(5) to be set 00:21:49.633 [2024-07-12 17:10:49.052343] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:49.633 [2024-07-12 17:10:49.052368] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:49.633 17:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.633 17:10:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:49.633 17:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:49.633 17:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:49.633 17:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:49.633 17:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == 
'"nvme0"' ']]' 00:21:49.633 17:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:49.633 17:10:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:49.633 17:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.633 17:10:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:49.633 17:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.633 17:10:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:49.633 17:10:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:49.633 [2024-07-12 17:10:49.061908] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f75210 (9): Bad file descriptor 00:21:49.633 17:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.633 [2024-07-12 17:10:49.071949] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:49.633 [2024-07-12 17:10:49.072240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.633 [2024-07-12 17:10:49.072268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f75210 with addr=10.0.0.2, port=4420 00:21:49.634 [2024-07-12 17:10:49.072285] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f75210 is same with the state(5) to be set 00:21:49.634 [2024-07-12 17:10:49.072306] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f75210 (9): Bad file descriptor 00:21:49.634 [2024-07-12 17:10:49.072326] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:49.634 [2024-07-12 17:10:49.072339] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:49.634 [2024-07-12 17:10:49.072353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:49.634 [2024-07-12 17:10:49.072377] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:49.634 [2024-07-12 17:10:49.082053] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:49.634 [2024-07-12 17:10:49.082322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.634 [2024-07-12 17:10:49.082348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f75210 with addr=10.0.0.2, port=4420 00:21:49.634 [2024-07-12 17:10:49.082363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f75210 is same with the state(5) to be set 00:21:49.634 [2024-07-12 17:10:49.082384] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f75210 (9): Bad file descriptor 00:21:49.634 [2024-07-12 17:10:49.082402] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:49.634 [2024-07-12 17:10:49.082415] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:49.634 [2024-07-12 17:10:49.082427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:21:49.634 [2024-07-12 17:10:49.082445] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:49.634 [2024-07-12 17:10:49.092121] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:49.634 [2024-07-12 17:10:49.092304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.634 [2024-07-12 17:10:49.092330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f75210 with addr=10.0.0.2, port=4420 00:21:49.634 [2024-07-12 17:10:49.092344] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f75210 is same with the state(5) to be set 00:21:49.634 [2024-07-12 17:10:49.092364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f75210 (9): Bad file descriptor 00:21:49.634 [2024-07-12 17:10:49.092382] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:49.634 [2024-07-12 17:10:49.092395] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:49.634 [2024-07-12 17:10:49.092407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:49.634 [2024-07-12 17:10:49.092424] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:49.634 17:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.634 17:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:49.634 17:10:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:49.634 17:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:49.634 17:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:49.634 17:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:49.634 17:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:49.634 17:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:49.634 17:10:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:49.634 17:10:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:49.634 17:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.634 17:10:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:49.634 17:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.634 17:10:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:49.634 [2024-07-12 17:10:49.102869] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:49.634 [2024-07-12 17:10:49.103069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.634 [2024-07-12 17:10:49.103111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f75210 with addr=10.0.0.2, port=4420 00:21:49.634 [2024-07-12 17:10:49.103127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1f75210 is same with the state(5) to be set 00:21:49.634 [2024-07-12 17:10:49.103152] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f75210 (9): Bad file descriptor 00:21:49.634 [2024-07-12 17:10:49.103171] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:49.634 [2024-07-12 17:10:49.103184] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:49.634 [2024-07-12 17:10:49.103197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:49.634 [2024-07-12 17:10:49.103215] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:49.634 [2024-07-12 17:10:49.112950] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:49.634 [2024-07-12 17:10:49.113215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.634 [2024-07-12 17:10:49.113243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f75210 with addr=10.0.0.2, port=4420 00:21:49.634 [2024-07-12 17:10:49.113259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f75210 is same with the state(5) to be set 00:21:49.634 [2024-07-12 17:10:49.113281] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f75210 (9): Bad file descriptor 00:21:49.634 [2024-07-12 17:10:49.113301] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:49.634 [2024-07-12 17:10:49.113314] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:49.634 [2024-07-12 17:10:49.113327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:49.634 [2024-07-12 17:10:49.113345] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:49.634 [2024-07-12 17:10:49.123034] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:49.634 [2024-07-12 17:10:49.123242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.634 [2024-07-12 17:10:49.123271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f75210 with addr=10.0.0.2, port=4420 00:21:49.634 [2024-07-12 17:10:49.123286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f75210 is same with the state(5) to be set 00:21:49.634 [2024-07-12 17:10:49.123307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f75210 (9): Bad file descriptor 00:21:49.634 [2024-07-12 17:10:49.123326] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:49.634 [2024-07-12 17:10:49.123338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:49.634 [2024-07-12 17:10:49.123351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:49.634 [2024-07-12 17:10:49.123369] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
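The connect() errno 111 retries above are expected once discovery.sh@127 removes the 4420 listener; the host simply keeps polling until the discovery service prunes the dead path and only 4421 remains. A rough equivalent of the get_subsystem_paths/waitforcondition polling visible in this log (the rpc.py invocation stands in for the rpc_cmd wrapper, and the 10-attempt/1-second budget mirrors the max=10 / sleep 1 values seen here):

    # ask the host-side bdev_nvme layer which portals the nvme0 controller still has
    get_subsystem_paths() {
        ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

    # poll until only the second listener (4421) is left, giving up after ~10 seconds
    for _ in $(seq 10); do
        [[ "$(get_subsystem_paths)" == "4421" ]] && break
        sleep 1
    done

The remaining log lines show exactly that loop converging: first "4420 4421", then "4421" once the discovery poller reports the 4420 path as not found.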
00:21:49.634 17:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.634 [2024-07-12 17:10:49.133117] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:49.634 [2024-07-12 17:10:49.133308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.634 [2024-07-12 17:10:49.133334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f75210 with addr=10.0.0.2, port=4420 00:21:49.634 [2024-07-12 17:10:49.133349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f75210 is same with the state(5) to be set 00:21:49.634 17:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:49.634 [2024-07-12 17:10:49.133374] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f75210 (9): Bad file descriptor 00:21:49.634 [2024-07-12 17:10:49.133394] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:49.634 [2024-07-12 17:10:49.133406] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:49.634 [2024-07-12 17:10:49.133418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:49.634 [2024-07-12 17:10:49.133434] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:49.634 17:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:49.634 17:10:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:49.634 17:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:49.634 17:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:49.634 17:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:49.634 17:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:21:49.634 17:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:49.634 17:10:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:49.634 17:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.634 17:10:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:49.634 17:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.634 17:10:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:49.634 17:10:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:49.634 [2024-07-12 17:10:49.137989] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:21:49.634 [2024-07-12 17:10:49.138018] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:49.634 17:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:21:49.634 17:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:21:49.634 17:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:21:50.569 17:10:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:50.569 17:10:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:21:50.569 17:10:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:50.569 17:10:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:50.569 17:10:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.569 17:10:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:50.569 17:10:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:50.569 17:10:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:50.569 17:10:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:50.569 17:10:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.569 17:10:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:21:50.569 17:10:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:50.569 17:10:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:21:50.569 17:10:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:50.569 17:10:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:50.569 17:10:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:50.569 17:10:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:50.569 17:10:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:50.569 17:10:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:50.569 17:10:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:50.569 17:10:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:50.569 17:10:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.569 17:10:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:50.569 17:10:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:50.569 17:10:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.569 17:10:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:50.569 17:10:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:50.569 17:10:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:50.829 17:10:50 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.829 17:10:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:51.766 [2024-07-12 17:10:51.455462] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:51.766 [2024-07-12 17:10:51.455513] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:51.766 [2024-07-12 17:10:51.455538] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:52.026 [2024-07-12 17:10:51.583926] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:21:52.285 [2024-07-12 17:10:51.892914] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:52.285 [2024-07-12 17:10:51.892962] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:52.285 17:10:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.285 17:10:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:52.285 17:10:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:21:52.285 17:10:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:52.285 17:10:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:52.285 17:10:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:52.285 17:10:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:52.285 17:10:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:52.285 17:10:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:52.285 17:10:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.285 17:10:51 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:21:52.285 request: 00:21:52.285 { 00:21:52.285 "name": "nvme", 00:21:52.285 "trtype": "tcp", 00:21:52.285 "traddr": "10.0.0.2", 00:21:52.285 "adrfam": "ipv4", 00:21:52.285 "trsvcid": "8009", 00:21:52.285 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:52.285 "wait_for_attach": true, 00:21:52.285 "method": "bdev_nvme_start_discovery", 00:21:52.285 "req_id": 1 00:21:52.285 } 00:21:52.285 Got JSON-RPC error response 00:21:52.285 response: 00:21:52.285 { 00:21:52.285 "code": -17, 00:21:52.285 "message": "File exists" 00:21:52.285 } 00:21:52.285 17:10:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:52.285 17:10:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:21:52.285 17:10:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:52.285 17:10:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:52.285 17:10:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:52.285 17:10:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:21:52.285 17:10:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:52.285 17:10:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:52.285 17:10:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.285 17:10:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:52.285 17:10:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:52.285 17:10:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:52.285 17:10:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.285 17:10:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:21:52.285 17:10:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:21:52.285 17:10:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:52.285 17:10:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:52.285 17:10:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.285 17:10:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:52.285 17:10:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:52.285 17:10:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:52.285 17:10:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.543 17:10:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:52.543 17:10:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:52.543 17:10:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:21:52.543 17:10:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:52.543 17:10:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:21:52.543 17:10:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:52.543 17:10:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:52.543 17:10:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:52.543 17:10:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:52.543 17:10:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.543 17:10:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:52.543 request: 00:21:52.543 { 00:21:52.543 "name": "nvme_second", 00:21:52.543 "trtype": "tcp", 00:21:52.543 "traddr": "10.0.0.2", 00:21:52.543 "adrfam": "ipv4", 00:21:52.543 "trsvcid": "8009", 00:21:52.543 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:52.543 "wait_for_attach": true, 00:21:52.543 "method": "bdev_nvme_start_discovery", 00:21:52.543 "req_id": 1 00:21:52.543 } 00:21:52.543 Got JSON-RPC error response 00:21:52.543 response: 00:21:52.543 { 00:21:52.543 "code": -17, 00:21:52.543 "message": "File exists" 00:21:52.543 } 00:21:52.543 17:10:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:52.543 17:10:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:21:52.543 17:10:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:52.543 17:10:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:52.543 17:10:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:52.543 17:10:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:21:52.543 17:10:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:52.543 17:10:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:52.543 17:10:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.543 17:10:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:52.543 17:10:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:52.543 17:10:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:52.543 17:10:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.543 17:10:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:21:52.543 17:10:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:21:52.543 17:10:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:52.543 17:10:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.543 17:10:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:52.543 17:10:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:52.543 17:10:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:52.543 17:10:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:52.543 17:10:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.543 17:10:52 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:52.543 17:10:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:52.543 17:10:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:21:52.543 17:10:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:52.543 17:10:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:52.543 17:10:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:52.543 17:10:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:52.543 17:10:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:52.543 17:10:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:52.543 17:10:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.543 17:10:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:53.478 [2024-07-12 17:10:53.112398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:53.478 [2024-07-12 17:10:53.112452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2156b40 with addr=10.0.0.2, port=8010 00:21:53.478 [2024-07-12 17:10:53.112473] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:53.478 [2024-07-12 17:10:53.112496] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:53.478 [2024-07-12 17:10:53.112509] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:54.470 [2024-07-12 17:10:54.114834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.470 [2024-07-12 17:10:54.114889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2156b40 with addr=10.0.0.2, port=8010 00:21:54.470 [2024-07-12 17:10:54.114924] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:54.470 [2024-07-12 17:10:54.114937] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:54.470 [2024-07-12 17:10:54.114949] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:55.849 [2024-07-12 17:10:55.117015] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:21:55.849 request: 00:21:55.849 { 00:21:55.849 "name": "nvme_second", 00:21:55.849 "trtype": "tcp", 00:21:55.849 "traddr": "10.0.0.2", 00:21:55.849 "adrfam": "ipv4", 00:21:55.849 "trsvcid": "8010", 00:21:55.849 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:55.849 "wait_for_attach": false, 00:21:55.849 "attach_timeout_ms": 3000, 00:21:55.849 "method": "bdev_nvme_start_discovery", 00:21:55.849 "req_id": 1 00:21:55.849 } 00:21:55.849 Got JSON-RPC error response 00:21:55.849 response: 00:21:55.849 { 00:21:55.849 "code": -110, 
00:21:55.849 "message": "Connection timed out" 00:21:55.849 } 00:21:55.849 17:10:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:55.849 17:10:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:21:55.849 17:10:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:55.849 17:10:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:55.849 17:10:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:55.849 17:10:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:21:55.849 17:10:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:55.849 17:10:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:55.849 17:10:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.849 17:10:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:55.849 17:10:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:55.849 17:10:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:55.850 17:10:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.850 17:10:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:21:55.850 17:10:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:21:55.850 17:10:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1191935 00:21:55.850 17:10:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:21:55.850 17:10:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:55.850 17:10:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:21:55.850 17:10:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:55.850 17:10:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:21:55.850 17:10:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:55.850 17:10:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:55.850 rmmod nvme_tcp 00:21:55.850 rmmod nvme_fabrics 00:21:55.850 rmmod nvme_keyring 00:21:55.850 17:10:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:55.850 17:10:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:21:55.850 17:10:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:21:55.850 17:10:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1191798 ']' 00:21:55.850 17:10:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1191798 00:21:55.850 17:10:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 1191798 ']' 00:21:55.850 17:10:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 1191798 00:21:55.850 17:10:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:21:55.850 17:10:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:55.850 17:10:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1191798 00:21:55.850 17:10:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 
00:21:55.850 17:10:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:55.850 17:10:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1191798' 00:21:55.850 killing process with pid 1191798 00:21:55.850 17:10:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 1191798 00:21:55.850 17:10:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 1191798 00:21:56.110 17:10:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:56.110 17:10:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:56.110 17:10:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:56.110 17:10:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:56.110 17:10:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:56.110 17:10:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:56.110 17:10:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:56.110 17:10:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:58.012 17:10:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:58.012 00:21:58.012 real 0m14.623s 00:21:58.012 user 0m21.645s 00:21:58.012 sys 0m2.996s 00:21:58.012 17:10:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:58.012 17:10:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:58.012 ************************************ 00:21:58.012 END TEST nvmf_host_discovery 00:21:58.012 ************************************ 00:21:58.012 17:10:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:58.012 17:10:57 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:21:58.012 17:10:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:58.012 17:10:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:58.012 17:10:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:58.013 ************************************ 00:21:58.013 START TEST nvmf_host_multipath_status 00:21:58.013 ************************************ 00:21:58.013 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:21:58.013 * Looking for test storage... 
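The discovery assertions traced above reduce to a short JSON-RPC sequence against the host application's admin socket. A minimal sketch of that sequence, assuming an SPDK host app listening on /tmp/host.sock and a discovery service at 10.0.0.2:8009 (both values taken from the log above, with the SPDK scripts/rpc.py client standing in for the test's rpc_cmd wrapper):
  # start discovery and wait for the initial attach to complete (-w)
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
  # inspect the controllers/paths that discovery attached
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
  # tear discovery back down
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme
As the log shows, re-issuing bdev_nvme_start_discovery under a name that is already registered is rejected with -17 ("File exists"), and pointing it at an unreachable port (8010 with -T 3000) fails with -110 ("Connection timed out"); those are the error codes the NOT wrappers above assert on.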
00:21:58.271 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:58.271 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:58.271 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:21:58.271 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:58.271 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:58.271 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:58.271 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:58.271 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:58.271 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:58.271 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:58.271 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:58.271 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:58.271 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:58.271 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:58.271 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:21:58.271 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:58.271 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:58.271 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:58.271 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:58.271 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:58.271 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:58.271 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:58.271 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:58.271 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.271 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.271 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.271 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:21:58.271 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.271 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:21:58.271 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:58.271 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:58.271 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:58.271 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:58.271 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:58.272 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:58.272 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:58.272 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:58.272 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:58.272 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:58.272 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:58.272 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:21:58.272 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:58.272 17:10:57 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:58.272 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:21:58.272 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:58.272 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:58.272 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:58.272 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:58.272 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:58.272 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:58.272 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:58.272 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:58.272 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:58.272 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:58.272 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:21:58.272 17:10:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:00.803 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:00.803 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:22:00.803 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:00.803 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:00.803 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:00.803 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:00.803 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:00.803 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:22:00.803 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:00.803 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:22:00.804 Found 0000:84:00.0 (0x8086 - 0x159b) 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:22:00.804 Found 0000:84:00.1 (0x8086 - 0x159b) 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:22:00.804 Found net devices under 0000:84:00.0: cvl_0_0 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:22:00.804 Found net devices under 0000:84:00.1: cvl_0_1 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:00.804 17:10:59 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:00.804 17:10:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:00.804 17:11:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:00.804 17:11:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:00.804 17:11:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:00.804 17:11:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:00.804 17:11:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:00.804 17:11:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:00.804 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:00.804 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:22:00.804 00:22:00.804 --- 10.0.0.2 ping statistics --- 00:22:00.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:00.804 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:22:00.804 17:11:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:00.804 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:00.804 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:22:00.804 00:22:00.804 --- 10.0.0.1 ping statistics --- 00:22:00.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:00.804 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:22:00.804 17:11:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:00.804 17:11:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:22:00.804 17:11:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:00.804 17:11:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:00.804 17:11:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:00.804 17:11:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:00.804 17:11:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:00.804 17:11:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:00.804 17:11:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:00.804 17:11:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:22:00.804 17:11:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:00.804 17:11:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:00.804 17:11:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:00.804 17:11:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1195134 00:22:00.804 17:11:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:00.804 17:11:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1195134 00:22:00.804 17:11:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1195134 ']' 00:22:00.804 17:11:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:00.804 17:11:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:00.804 17:11:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:00.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:00.804 17:11:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:00.804 17:11:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:00.804 [2024-07-12 17:11:00.138989] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
00:22:00.804 [2024-07-12 17:11:00.139066] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:00.805 EAL: No free 2048 kB hugepages reported on node 1 00:22:00.805 [2024-07-12 17:11:00.202669] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:00.805 [2024-07-12 17:11:00.319228] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:00.805 [2024-07-12 17:11:00.319304] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:00.805 [2024-07-12 17:11:00.319335] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:00.805 [2024-07-12 17:11:00.319347] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:00.805 [2024-07-12 17:11:00.319357] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:00.805 [2024-07-12 17:11:00.319795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:00.805 [2024-07-12 17:11:00.319801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:00.805 17:11:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:00.805 17:11:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:22:00.805 17:11:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:00.805 17:11:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:00.805 17:11:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:00.805 17:11:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:00.805 17:11:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1195134 00:22:00.805 17:11:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:01.063 [2024-07-12 17:11:00.708462] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:01.063 17:11:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:01.321 Malloc0 00:22:01.321 17:11:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:22:01.887 17:11:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:01.887 17:11:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:02.145 [2024-07-12 17:11:01.808913] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:02.145 17:11:01 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:02.403 [2024-07-12 17:11:02.053508] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:02.403 17:11:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1195419 00:22:02.403 17:11:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:22:02.403 17:11:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:02.403 17:11:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1195419 /var/tmp/bdevperf.sock 00:22:02.403 17:11:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1195419 ']' 00:22:02.403 17:11:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:02.403 17:11:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:02.403 17:11:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:02.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:02.403 17:11:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:02.403 17:11:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:02.972 17:11:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:02.972 17:11:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:22:02.972 17:11:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:02.972 17:11:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:22:03.538 Nvme0n1 00:22:03.538 17:11:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:03.796 Nvme0n1 00:22:03.796 17:11:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:22:03.796 17:11:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:22:06.325 17:11:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:22:06.325 17:11:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:22:06.325 17:11:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:06.585 17:11:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:22:07.523 17:11:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:22:07.523 17:11:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:07.523 17:11:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:07.523 17:11:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:07.781 17:11:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:07.781 17:11:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:07.781 17:11:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:07.781 17:11:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:08.038 17:11:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:08.038 17:11:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:08.038 17:11:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:08.038 17:11:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:08.296 17:11:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:08.296 17:11:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:08.296 17:11:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:08.296 17:11:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:08.553 17:11:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:08.553 17:11:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:08.553 17:11:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:08.553 17:11:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:08.810 17:11:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:08.810 17:11:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:08.810 17:11:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:08.810 17:11:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:09.067 17:11:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:09.067 17:11:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:22:09.067 17:11:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:09.324 17:11:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:09.890 17:11:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:22:10.825 17:11:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:22:10.825 17:11:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:10.825 17:11:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:10.825 17:11:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:11.082 17:11:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:11.082 17:11:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:11.082 17:11:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:11.082 17:11:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:11.351 17:11:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:11.351 17:11:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:11.351 17:11:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:11.351 17:11:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:11.607 17:11:11 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:11.607 17:11:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:11.607 17:11:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:11.607 17:11:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:11.864 17:11:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:11.864 17:11:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:11.864 17:11:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:11.864 17:11:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:12.120 17:11:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:12.120 17:11:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:12.121 17:11:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:12.121 17:11:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:12.377 17:11:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:12.377 17:11:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:22:12.377 17:11:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:12.633 17:11:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:12.890 17:11:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:22:14.260 17:11:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:22:14.261 17:11:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:14.261 17:11:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:14.261 17:11:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:14.261 17:11:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:14.261 17:11:13 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:14.261 17:11:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:14.261 17:11:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:14.519 17:11:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:14.519 17:11:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:14.519 17:11:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:14.519 17:11:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:14.777 17:11:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:14.777 17:11:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:14.777 17:11:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:14.777 17:11:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:15.035 17:11:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:15.035 17:11:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:15.035 17:11:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:15.035 17:11:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:15.292 17:11:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:15.292 17:11:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:15.292 17:11:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:15.292 17:11:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:15.550 17:11:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:15.550 17:11:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:22:15.550 17:11:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:16.117 17:11:15 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:16.117 17:11:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:22:17.490 17:11:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:22:17.490 17:11:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:17.490 17:11:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:17.490 17:11:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:17.490 17:11:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:17.490 17:11:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:17.490 17:11:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:17.490 17:11:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:17.749 17:11:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:17.749 17:11:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:17.749 17:11:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:17.749 17:11:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:18.006 17:11:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:18.006 17:11:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:18.006 17:11:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:18.006 17:11:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:18.264 17:11:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:18.264 17:11:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:18.264 17:11:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:18.264 17:11:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:18.522 17:11:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:22:18.522 17:11:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:18.522 17:11:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:18.522 17:11:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:19.090 17:11:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:19.090 17:11:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:22:19.090 17:11:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:19.090 17:11:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:19.388 17:11:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:22:20.342 17:11:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:22:20.342 17:11:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:20.342 17:11:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:20.342 17:11:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:20.599 17:11:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:20.599 17:11:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:20.599 17:11:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:20.599 17:11:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:20.857 17:11:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:20.857 17:11:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:20.857 17:11:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:20.857 17:11:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:21.114 17:11:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:21.114 17:11:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:22:21.114 17:11:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.114 17:11:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:21.372 17:11:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:21.372 17:11:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:21.372 17:11:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.372 17:11:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:21.630 17:11:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:21.630 17:11:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:21.630 17:11:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.630 17:11:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:21.887 17:11:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:21.887 17:11:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:22:21.887 17:11:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:22.145 17:11:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:22.403 17:11:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:22:23.338 17:11:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:22:23.338 17:11:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:23.338 17:11:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:23.338 17:11:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:23.597 17:11:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:23.597 17:11:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:23.855 17:11:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:23.855 17:11:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:24.115 17:11:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:24.115 17:11:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:24.115 17:11:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.115 17:11:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:24.373 17:11:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:24.373 17:11:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:24.373 17:11:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.373 17:11:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:24.630 17:11:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:24.631 17:11:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:24.631 17:11:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.631 17:11:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:24.888 17:11:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:24.888 17:11:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:24.888 17:11:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.888 17:11:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:25.145 17:11:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:25.145 17:11:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:22:25.403 17:11:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:22:25.403 17:11:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:22:25.661 17:11:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:25.920 17:11:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:22:26.853 17:11:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:22:26.853 17:11:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:26.853 17:11:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:26.853 17:11:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:27.421 17:11:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:27.421 17:11:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:27.421 17:11:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:27.421 17:11:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:27.421 17:11:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:27.421 17:11:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:27.421 17:11:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:27.421 17:11:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:27.987 17:11:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:27.987 17:11:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:27.987 17:11:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:27.987 17:11:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:27.987 17:11:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:27.988 17:11:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:27.988 17:11:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:27.988 17:11:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:28.553 17:11:27 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:28.553 17:11:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:28.553 17:11:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.553 17:11:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:28.553 17:11:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:28.553 17:11:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:22:28.553 17:11:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:29.119 17:11:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:29.378 17:11:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:22:30.312 17:11:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:22:30.312 17:11:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:30.312 17:11:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:30.312 17:11:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:30.569 17:11:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:30.569 17:11:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:30.569 17:11:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:30.569 17:11:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:30.827 17:11:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:30.827 17:11:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:30.827 17:11:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:30.827 17:11:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:31.084 17:11:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.084 17:11:30 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:31.084 17:11:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.085 17:11:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:31.342 17:11:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.342 17:11:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:31.342 17:11:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.342 17:11:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:31.599 17:11:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.599 17:11:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:31.599 17:11:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.599 17:11:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:31.857 17:11:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.857 17:11:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:22:31.857 17:11:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:32.114 17:11:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:32.680 17:11:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:22:33.613 17:11:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:22:33.613 17:11:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:33.613 17:11:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:33.613 17:11:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:33.870 17:11:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:33.870 17:11:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:33.870 17:11:33 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:33.870 17:11:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:34.126 17:11:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:34.126 17:11:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:34.126 17:11:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:34.126 17:11:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:34.383 17:11:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:34.383 17:11:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:34.383 17:11:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:34.383 17:11:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:34.640 17:11:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:34.640 17:11:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:34.640 17:11:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:34.640 17:11:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:34.896 17:11:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:34.896 17:11:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:34.896 17:11:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:34.896 17:11:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:35.154 17:11:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:35.154 17:11:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:22:35.154 17:11:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:35.411 17:11:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:35.669 17:11:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:22:37.044 17:11:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:22:37.044 17:11:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:37.044 17:11:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:37.044 17:11:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:37.044 17:11:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:37.044 17:11:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:37.044 17:11:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:37.044 17:11:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:37.302 17:11:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:37.302 17:11:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:37.302 17:11:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:37.302 17:11:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:37.559 17:11:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:37.559 17:11:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:37.559 17:11:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:37.559 17:11:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:37.817 17:11:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:37.817 17:11:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:37.817 17:11:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:37.817 17:11:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:38.074 17:11:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:38.074 17:11:37 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:38.074 17:11:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:38.074 17:11:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:38.642 17:11:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:38.642 17:11:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1195419 00:22:38.642 17:11:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1195419 ']' 00:22:38.642 17:11:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1195419 00:22:38.642 17:11:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:22:38.642 17:11:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:38.642 17:11:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1195419 00:22:38.642 17:11:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:38.642 17:11:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:38.642 17:11:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1195419' 00:22:38.642 killing process with pid 1195419 00:22:38.642 17:11:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1195419 00:22:38.642 17:11:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1195419 00:22:38.642 Connection closed with partial response: 00:22:38.642 00:22:38.642 00:22:38.904 17:11:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1195419 00:22:38.904 17:11:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:38.904 [2024-07-12 17:11:02.116509] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:22:38.904 [2024-07-12 17:11:02.116594] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1195419 ] 00:22:38.904 EAL: No free 2048 kB hugepages reported on node 1 00:22:38.904 [2024-07-12 17:11:02.177253] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.904 [2024-07-12 17:11:02.284527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:38.904 Running I/O for 90 seconds... 
00:22:38.904 [2024-07-12 17:11:18.749600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:62832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.904 [2024-07-12 17:11:18.749680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:38.904 [2024-07-12 17:11:18.749774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:62704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.904 [2024-07-12 17:11:18.749798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:38.904 [2024-07-12 17:11:18.749825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:62840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.904 [2024-07-12 17:11:18.749842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:38.904 [2024-07-12 17:11:18.749866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:62848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.904 [2024-07-12 17:11:18.749884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:38.904 [2024-07-12 17:11:18.749907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:62856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.904 [2024-07-12 17:11:18.749924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:38.904 [2024-07-12 17:11:18.749947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:62864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.904 [2024-07-12 17:11:18.749965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:38.904 [2024-07-12 17:11:18.749988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:62872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.904 [2024-07-12 17:11:18.750005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:38.904 [2024-07-12 17:11:18.750028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:62880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.904 [2024-07-12 17:11:18.750060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:38.904 [2024-07-12 17:11:18.750084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:62888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.904 [2024-07-12 17:11:18.750100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:38.904 [2024-07-12 17:11:18.750123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:62896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.904 [2024-07-12 17:11:18.750140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
00:22:38.904 [2024-07-12 17:11:18.750 - 17:11:18.756] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated WRITE commands sqid:1 nsid:1 lba:62904-63720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 and READ commands sqid:1 nsid:1 lba:62712-62824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0
00:22:38.907 [2024-07-12 17:11:35.337 - 17:11:35.340] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: the same command/completion pattern repeats for READ lba:58480-59120 and WRITE lba:59152-59496 on qid:1 nsid:1, every completion again ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0
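Per the NVMe specification, the (03/02) pair in these completions decodes to Status Code Type 0x3 (Path Related Status) and Status Code 0x02 (Asymmetric Access Inaccessible), which is the string SPDK prints above; such completions are what you would expect while a multipath status test holds one path's ANA group inaccessible. A minimal decoding sketch follows; the helper and lookup table are illustrative, not part of SPDK's API:

```python
# Decode the "(SCT/SC)" status pair printed in the completions above.
# The mapping covers only the path-related codes relevant to this log;
# it is an illustrative table, not something exported by SPDK.
SCT_PATH_RELATED = 0x3

PATH_RELATED_SC = {
    0x00: "INTERNAL PATH ERROR",
    0x01: "ASYMMETRIC ACCESS PERSISTENT LOSS",
    0x02: "ASYMMETRIC ACCESS INACCESSIBLE",
    0x03: "ASYMMETRIC ACCESS TRANSITION",
}

def decode_status(sct: int, sc: int) -> str:
    """Return a readable name for an NVMe completion status pair."""
    if sct == SCT_PATH_RELATED and sc in PATH_RELATED_SC:
        return PATH_RELATED_SC[sc]
    return f"unrecognized status sct={sct:#x} sc={sc:#x}"

print(decode_status(0x03, 0x02))  # -> ASYMMETRIC ACCESS INACCESSIBLE
```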
00:22:38.909 Received shutdown signal, test time was about 34.433885 seconds
00:22:38.909 
00:22:38.909 Latency(us)
00:22:38.909 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:38.909 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:38.909 Verification LBA range: start 0x0 length 0x4000
00:22:38.909 Nvme0n1 : 34.43 8557.67 33.43 0.00 0.00 14932.73 976.97 4026531.84
00:22:38.909 ===================================================================================================================
00:22:38.909 Total : 8557.67 33.43 0.00 0.00 14932.73 976.97 4026531.84
00:22:38.909 17:11:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:39.167 17:11:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:22:39.167 17:11:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:22:39.167 17:11:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:22:39.167 17:11:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:22:39.167 17:11:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:22:39.167 17:11:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:22:39.167 17:11:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
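Before the teardown entries continue below, a quick consistency check on the Latency(us) summary above: with the 4096-byte verify workload reported in the job header, the IOPS and MiB/s columns agree. The sketch below simply redoes that arithmetic with values copied from the Nvme0n1 row:

```python
# Recompute the throughput figures from the Nvme0n1 row of the summary above.
io_size_bytes = 4096        # "IO size: 4096" from the job header
iops = 8557.67              # IOPS column
runtime_s = 34.43           # runtime(s) column

mib_per_s = iops * io_size_bytes / (1024 * 1024)
total_ios = iops * runtime_s

print(f"{mib_per_s:.2f} MiB/s")             # ~33.43, matching the MiB/s column
print(f"~{total_ios:,.0f} I/Os completed")  # over the 34.43 s verify run
```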
nvmf/common.sh@121 -- # for i in {1..20} 00:22:39.167 17:11:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:39.167 rmmod nvme_tcp 00:22:39.167 rmmod nvme_fabrics 00:22:39.167 rmmod nvme_keyring 00:22:39.167 17:11:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:39.167 17:11:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:22:39.167 17:11:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:22:39.167 17:11:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1195134 ']' 00:22:39.167 17:11:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1195134 00:22:39.167 17:11:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1195134 ']' 00:22:39.167 17:11:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1195134 00:22:39.167 17:11:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:22:39.167 17:11:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:39.167 17:11:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1195134 00:22:39.167 17:11:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:39.167 17:11:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:39.167 17:11:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1195134' 00:22:39.167 killing process with pid 1195134 00:22:39.167 17:11:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1195134 00:22:39.167 17:11:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1195134 00:22:39.425 17:11:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:39.425 17:11:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:39.425 17:11:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:39.425 17:11:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:39.425 17:11:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:39.425 17:11:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:39.425 17:11:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:39.425 17:11:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.330 17:11:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:41.330 00:22:41.330 real 0m43.359s 00:22:41.330 user 2m10.550s 00:22:41.330 sys 0m12.058s 00:22:41.330 17:11:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:41.330 17:11:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:41.330 ************************************ 00:22:41.330 END TEST nvmf_host_multipath_status 00:22:41.330 ************************************ 00:22:41.587 17:11:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:41.587 17:11:41 nvmf_tcp -- 
nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:41.587 17:11:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:41.587 17:11:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:41.587 17:11:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:41.587 ************************************ 00:22:41.587 START TEST nvmf_discovery_remove_ifc 00:22:41.587 ************************************ 00:22:41.587 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:41.587 * Looking for test storage... 00:22:41.587 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:41.587 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:41.587 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:22:41.587 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:41.587 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:41.587 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:41.587 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:41.587 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:41.587 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:41.587 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:41.587 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:41.587 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:41.587 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:41.587 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:41.587 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:22:41.587 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:41.587 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:41.587 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:41.587 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:41.587 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:41.587 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:41.587 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:41.587 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:41.587 
17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.587 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.587 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.587 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:22:41.587 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.587 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:22:41.587 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:41.587 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:41.587 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:41.587 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:41.587 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:41.587 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:41.587 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:41.587 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:41.587 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp 
== rdma ']' 00:22:41.587 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:22:41.587 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:22:41.587 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:22:41.587 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:22:41.588 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:22:41.588 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:22:41.588 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:41.588 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:41.588 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:41.588 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:41.588 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:41.588 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.588 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:41.588 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.588 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:41.588 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:41.588 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:22:41.588 17:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@298 -- # local -ga mlx 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:22:44.132 Found 0000:84:00.0 (0x8086 - 0x159b) 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:22:44.132 Found 0000:84:00.1 (0x8086 - 0x159b) 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.132 17:11:43 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:22:44.132 Found net devices under 0000:84:00.0: cvl_0_0 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:22:44.132 Found net devices under 0000:84:00.1: cvl_0_1 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:44.132 17:11:43 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:44.132 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:44.132 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:22:44.132 00:22:44.132 --- 10.0.0.2 ping statistics --- 00:22:44.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.132 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:44.132 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:44.132 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:22:44.132 00:22:44.132 --- 10.0.0.1 ping statistics --- 00:22:44.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.132 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:44.132 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:44.133 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:44.133 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:44.133 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:44.133 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:44.133 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:22:44.133 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:44.133 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:44.133 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:44.133 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1201885 00:22:44.133 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1201885 00:22:44.133 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:44.133 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1201885 ']' 00:22:44.133 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:44.133 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:44.133 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:44.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:44.133 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:44.133 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:44.133 [2024-07-12 17:11:43.428013] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
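The nvmf_tcp_init sequence traced above boils down to the following (interface names, addresses and the iptables rule are copied from this run): one E810 port is moved into a private network namespace to serve as the target at 10.0.0.2, the other stays in the root namespace as the initiator at 10.0.0.1, and a ping in each direction proves the 10.0.0.0/24 link before the target application is launched. This is a condensed sketch of the traced commands, not the common.sh source itself.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                         # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target -> initiator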
00:22:44.133 [2024-07-12 17:11:43.428125] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:44.133 EAL: No free 2048 kB hugepages reported on node 1 00:22:44.133 [2024-07-12 17:11:43.496254] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.133 [2024-07-12 17:11:43.606484] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:44.133 [2024-07-12 17:11:43.606559] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:44.133 [2024-07-12 17:11:43.606587] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:44.133 [2024-07-12 17:11:43.606599] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:44.133 [2024-07-12 17:11:43.606609] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:44.133 [2024-07-12 17:11:43.606641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:44.133 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:44.133 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:22:44.133 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:44.133 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:44.133 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:44.133 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:44.133 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:22:44.133 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.133 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:44.133 [2024-07-12 17:11:43.756686] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:44.133 [2024-07-12 17:11:43.764889] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:44.133 null0 00:22:44.133 [2024-07-12 17:11:43.796860] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:44.133 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.133 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1201914 00:22:44.133 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1201914 /tmp/host.sock 00:22:44.133 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1201914 ']' 00:22:44.133 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:44.133 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:44.133 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 
00:22:44.133 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:44.133 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:22:44.133 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:44.133 17:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:44.390 [2024-07-12 17:11:43.866167] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:22:44.390 [2024-07-12 17:11:43.866252] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1201914 ] 00:22:44.390 EAL: No free 2048 kB hugepages reported on node 1 00:22:44.390 [2024-07-12 17:11:43.925410] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.390 [2024-07-12 17:11:44.034335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.390 17:11:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:44.390 17:11:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:22:44.390 17:11:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:44.390 17:11:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:22:44.390 17:11:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.390 17:11:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:44.390 17:11:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.390 17:11:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:22:44.390 17:11:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.390 17:11:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:44.647 17:11:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.647 17:11:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:22:44.647 17:11:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.647 17:11:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:45.578 [2024-07-12 17:11:45.183943] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:45.578 [2024-07-12 17:11:45.183989] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:45.578 [2024-07-12 17:11:45.184013] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:45.835 [2024-07-12 17:11:45.311421] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:45.835 [2024-07-12 17:11:45.375702] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:45.835 [2024-07-12 17:11:45.375793] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:45.835 [2024-07-12 17:11:45.375849] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:45.835 [2024-07-12 17:11:45.375872] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:45.835 [2024-07-12 17:11:45.375907] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:45.835 17:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.835 17:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:22:45.835 17:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:45.835 17:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:45.835 17:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:45.835 17:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.835 17:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:45.835 17:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:45.835 17:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:45.835 [2024-07-12 17:11:45.382491] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xbd3e00 was disconnected and freed. delete nvme_qpair. 
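For readability, a minimal bash sketch of what the discovery_remove_ifc.sh steps traced above amount to: start discovery against the target over the host RPC socket, then poll the host's bdev list until the attached namespace appears. The rpc.py path, socket, and discovery arguments are copied from the trace; get_bdev_list is an illustrative stand-in for the script's helper of the same name, and the loop condition is simplified.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Attach via discovery (10.0.0.2:8009) and wait for the controller to attach
"$rpc" -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
    -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 \
    --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach
get_bdev_list() {
    # bdev_get_bdevs returns a JSON array of bdev objects; keep only the names
    "$rpc" -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}
until [[ "$(get_bdev_list)" == *nvme0n1* ]]; do sleep 1; done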
00:22:45.835 17:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.835 17:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:22:45.835 17:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:22:45.835 17:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:22:45.835 17:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:22:45.835 17:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:45.835 17:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:45.836 17:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:45.836 17:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.836 17:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:45.836 17:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:45.836 17:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:45.836 17:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.836 17:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:45.836 17:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:47.205 17:11:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:47.205 17:11:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:47.205 17:11:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:47.205 17:11:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.205 17:11:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:47.205 17:11:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:47.205 17:11:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:47.205 17:11:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.205 17:11:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:47.205 17:11:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:48.137 17:11:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:48.137 17:11:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:48.137 17:11:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:48.137 17:11:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.137 17:11:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:48.137 17:11:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # 
set +x 00:22:48.137 17:11:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:48.137 17:11:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.137 17:11:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:48.137 17:11:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:49.069 17:11:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:49.069 17:11:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:49.069 17:11:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:49.069 17:11:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.069 17:11:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:49.069 17:11:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:49.069 17:11:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:49.069 17:11:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.069 17:11:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:49.069 17:11:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:50.000 17:11:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:50.001 17:11:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:50.001 17:11:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:50.001 17:11:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.001 17:11:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:50.001 17:11:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:50.001 17:11:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:50.001 17:11:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.001 17:11:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:50.001 17:11:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:51.371 17:11:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:51.371 17:11:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:51.371 17:11:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:51.371 17:11:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.371 17:11:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:51.371 17:11:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:51.371 17:11:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:51.371 17:11:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:22:51.371 17:11:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:51.371 17:11:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:51.371 [2024-07-12 17:11:50.817076] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:22:51.371 [2024-07-12 17:11:50.817163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.371 [2024-07-12 17:11:50.817186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.371 [2024-07-12 17:11:50.817204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.371 [2024-07-12 17:11:50.817217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.371 [2024-07-12 17:11:50.817230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.371 [2024-07-12 17:11:50.817243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.371 [2024-07-12 17:11:50.817255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.371 [2024-07-12 17:11:50.817267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.371 [2024-07-12 17:11:50.817280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.371 [2024-07-12 17:11:50.817293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.371 [2024-07-12 17:11:50.817305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9a870 is same with the state(5) to be set 00:22:51.371 [2024-07-12 17:11:50.827108] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9a870 (9): Bad file descriptor 00:22:51.371 [2024-07-12 17:11:50.837153] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:52.302 17:11:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:52.302 17:11:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:52.302 17:11:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:52.302 17:11:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.302 17:11:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:52.302 17:11:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:52.302 17:11:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:52.302 [2024-07-12 17:11:51.879797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:52.302 [2024-07-12 
17:11:51.879861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9a870 with addr=10.0.0.2, port=4420 00:22:52.303 [2024-07-12 17:11:51.879885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9a870 is same with the state(5) to be set 00:22:52.303 [2024-07-12 17:11:51.879925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9a870 (9): Bad file descriptor 00:22:52.303 [2024-07-12 17:11:51.880357] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:52.303 [2024-07-12 17:11:51.880394] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:52.303 [2024-07-12 17:11:51.880410] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:52.303 [2024-07-12 17:11:51.880426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:52.303 [2024-07-12 17:11:51.880463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:52.303 [2024-07-12 17:11:51.880480] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:52.303 17:11:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.303 17:11:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:52.303 17:11:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:53.234 [2024-07-12 17:11:52.882966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:53.234 [2024-07-12 17:11:52.882994] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:53.234 [2024-07-12 17:11:52.883023] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:53.234 [2024-07-12 17:11:52.883036] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:22:53.234 [2024-07-12 17:11:52.883055] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
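The connect and reset failures above are the intended fault, not a test bug: just before them, the script tears the target-side interface out from under the live connection (commands copied from the trace at discovery_remove_ifc.sh@75/@76 above), so the host's reconnect attempts to 10.0.0.2:4420 keep timing out (errno 110) until the address is restored.
ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0   # drop the target's address
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down              # and take the link down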
00:22:53.234 [2024-07-12 17:11:52.883105] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:22:53.234 [2024-07-12 17:11:52.883142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.234 [2024-07-12 17:11:52.883163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.234 [2024-07-12 17:11:52.883182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.234 [2024-07-12 17:11:52.883194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.234 [2024-07-12 17:11:52.883207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.234 [2024-07-12 17:11:52.883219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.234 [2024-07-12 17:11:52.883232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.234 [2024-07-12 17:11:52.883244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.234 [2024-07-12 17:11:52.883256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.234 [2024-07-12 17:11:52.883268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.234 [2024-07-12 17:11:52.883281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:22:53.234 [2024-07-12 17:11:52.883375] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb99cf0 (9): Bad file descriptor 00:22:53.234 [2024-07-12 17:11:52.884399] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:22:53.234 [2024-07-12 17:11:52.884420] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:22:53.234 17:11:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:53.234 17:11:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:53.234 17:11:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:53.234 17:11:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.234 17:11:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:53.234 17:11:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:53.234 17:11:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:53.234 17:11:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.491 17:11:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:22:53.491 17:11:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:53.491 17:11:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:53.491 17:11:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:22:53.491 17:11:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:53.491 17:11:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:53.491 17:11:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:53.491 17:11:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.491 17:11:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:53.491 17:11:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:53.491 17:11:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:53.491 17:11:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.491 17:11:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:53.491 17:11:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:54.421 17:11:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:54.421 17:11:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:54.421 17:11:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:54.421 17:11:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.421 17:11:54 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:22:54.421 17:11:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:54.421 17:11:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:54.421 17:11:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.421 17:11:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:54.421 17:11:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:55.351 [2024-07-12 17:11:54.939894] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:55.351 [2024-07-12 17:11:54.939923] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:55.351 [2024-07-12 17:11:54.939948] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:55.351 [2024-07-12 17:11:55.028235] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:22:55.608 17:11:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:55.608 17:11:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:55.608 17:11:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:55.608 17:11:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.608 17:11:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:55.608 17:11:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:55.608 17:11:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:55.608 17:11:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.608 17:11:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:55.608 17:11:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:55.608 [2024-07-12 17:11:55.132321] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:55.608 [2024-07-12 17:11:55.132370] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:55.608 [2024-07-12 17:11:55.132402] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:55.608 [2024-07-12 17:11:55.132424] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:22:55.608 [2024-07-12 17:11:55.132436] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:55.608 [2024-07-12 17:11:55.138240] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xbdd800 was disconnected and freed. delete nvme_qpair. 
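Recovery mirrors the fault injection: the trace at discovery_remove_ifc.sh@82/@83 above re-adds the address and brings the link back up, after which discovery re-attaches the subsystem as nvme1 and the script waits for the new bdev. A condensed sketch (commands copied from the trace; the polling loop is simplified as before and reuses the illustrative get_bdev_list helper sketched earlier):
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # restore the target address
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up                # bring the link back up
until [[ "$(get_bdev_list)" == *nvme1n1* ]]; do sleep 1; done        # wait for the re-attached namespace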
00:22:56.540 17:11:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:56.540 17:11:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:56.540 17:11:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.540 17:11:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:56.540 17:11:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:56.540 17:11:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:56.540 17:11:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:56.540 17:11:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.540 17:11:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:22:56.540 17:11:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:22:56.540 17:11:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1201914 00:22:56.540 17:11:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1201914 ']' 00:22:56.540 17:11:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1201914 00:22:56.540 17:11:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:22:56.540 17:11:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:56.540 17:11:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1201914 00:22:56.540 17:11:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:56.540 17:11:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:56.540 17:11:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1201914' 00:22:56.540 killing process with pid 1201914 00:22:56.540 17:11:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1201914 00:22:56.540 17:11:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1201914 00:22:56.797 17:11:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:22:56.797 17:11:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:56.797 17:11:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:22:56.797 17:11:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:56.797 17:11:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:22:56.797 17:11:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:56.797 17:11:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:56.797 rmmod nvme_tcp 00:22:56.797 rmmod nvme_fabrics 00:22:57.054 rmmod nvme_keyring 00:22:57.054 17:11:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:57.054 17:11:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:22:57.054 17:11:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
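The test then clears its exit trap and shuts the host application down via killprocess, whose xtrace is what appears above: check that the pid is still alive, confirm it is a reactor process rather than a sudo wrapper, log the kill, then kill and wait. A simplified sketch of that helper, with the sudo special case omitted and the pid obviously specific to this run:

# approximate shape of the killprocess helper traced above
killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" || return 1                 # the process must still exist
    ps --no-headers -o comm= "$pid"            # traced here to tell reactors from sudo wrappers
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                        # reap it before continuing the cleanup
}
killprocess 1201914    # host application (reactor_0 in this run)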
00:22:57.054 17:11:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1201885 ']' 00:22:57.055 17:11:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1201885 00:22:57.055 17:11:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1201885 ']' 00:22:57.055 17:11:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1201885 00:22:57.055 17:11:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:22:57.055 17:11:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:57.055 17:11:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1201885 00:22:57.055 17:11:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:57.055 17:11:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:57.055 17:11:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1201885' 00:22:57.055 killing process with pid 1201885 00:22:57.055 17:11:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1201885 00:22:57.055 17:11:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1201885 00:22:57.312 17:11:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:57.312 17:11:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:57.312 17:11:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:57.312 17:11:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:57.312 17:11:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:57.312 17:11:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:57.312 17:11:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:57.312 17:11:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.215 17:11:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:59.215 00:22:59.215 real 0m17.785s 00:22:59.215 user 0m25.547s 00:22:59.215 sys 0m3.144s 00:22:59.215 17:11:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:59.215 17:11:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:59.215 ************************************ 00:22:59.215 END TEST nvmf_discovery_remove_ifc 00:22:59.215 ************************************ 00:22:59.215 17:11:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:59.215 17:11:58 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:22:59.215 17:11:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:59.215 17:11:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:59.215 17:11:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:59.215 ************************************ 00:22:59.215 START TEST nvmf_identify_kernel_target 00:22:59.215 ************************************ 
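The nvmf_identify_kernel_target test that starts here builds an NVMe-oF target out of the Linux kernel's nvmet configfs interface and then runs spdk_nvme_identify against it over TCP. The mkdir/echo/ln steps appear individually further down in this log; condensed into one place, and with the caveat that the NQN, address, port and backing device are this run's values and that the attribute file names are the standard kernel nvmet ones rather than anything printed verbatim in the trace, the setup looks roughly like:

# condensed sketch of the configure_kernel_target steps traced later in this log
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

modprobe nvmet                                      # nvmet_tcp is removed again during cleanup
mkdir -p "$subsys/namespaces/1" "$nvmet/ports/1"
echo "SPDK-nqn.2016-06.io.spdk:testnqn" > "$subsys/attr_model"
echo 1            > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
echo tcp          > "$nvmet/ports/1/addr_trtype"
echo 4420         > "$nvmet/ports/1/addr_trsvcid"
echo ipv4         > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"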
00:22:59.215 17:11:58 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:22:59.473 * Looking for test storage... 00:22:59.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:59.473 17:11:58 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:59.473 17:11:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:22:59.473 17:11:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:59.473 17:11:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:59.473 17:11:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:59.473 17:11:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:59.473 17:11:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:59.473 17:11:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:59.473 17:11:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:59.473 17:11:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:59.473 17:11:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:59.473 17:11:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:59.473 17:11:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:59.473 17:11:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:22:59.473 17:11:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:59.473 17:11:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:59.473 17:11:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:59.473 17:11:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:59.473 17:11:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:59.473 17:11:58 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:59.473 17:11:58 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:59.473 17:11:58 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:59.473 17:11:58 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.473 17:11:58 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.473 17:11:58 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.473 17:11:58 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:22:59.473 17:11:58 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.473 17:11:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:22:59.473 17:11:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:59.473 17:11:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:59.473 17:11:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:59.473 17:11:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:59.473 17:11:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:59.473 17:11:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:59.473 17:11:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:59.473 17:11:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:59.473 17:11:58 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:22:59.473 17:11:58 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:59.473 17:11:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:59.473 17:11:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:59.473 17:11:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:59.473 17:11:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:59.473 17:11:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.473 17:11:58 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:59.473 17:11:58 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.473 17:11:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:59.473 17:11:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:59.473 17:11:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:22:59.473 17:11:58 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:01.371 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:01.371 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:01.371 Found net devices under 0000:84:00.0: cvl_0_0 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:01.371 Found net devices under 0000:84:00.1: cvl_0_1 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:01.371 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:01.372 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:01.372 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:01.372 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:01.372 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:01.372 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:01.372 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:01.372 17:12:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:01.372 17:12:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:01.372 17:12:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:01.372 17:12:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:01.372 17:12:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:01.629 17:12:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:01.629 17:12:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:01.629 17:12:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:01.629 17:12:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:01.629 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:01.629 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:23:01.629 00:23:01.629 --- 10.0.0.2 ping statistics --- 00:23:01.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.629 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:23:01.629 17:12:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:01.629 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:01.629 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:23:01.629 00:23:01.629 --- 10.0.0.1 ping statistics --- 00:23:01.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.629 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:23:01.629 17:12:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:01.629 17:12:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:23:01.629 17:12:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:01.629 17:12:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:01.629 17:12:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:01.629 17:12:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:01.629 17:12:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:01.629 17:12:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:01.629 17:12:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:01.629 17:12:01 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:23:01.629 17:12:01 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:23:01.629 17:12:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:23:01.629 17:12:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:01.629 17:12:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:01.629 17:12:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:01.629 17:12:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:01.629 17:12:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:01.629 17:12:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:01.629 17:12:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:01.629 17:12:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:01.629 17:12:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:01.629 17:12:01 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:23:01.629 17:12:01 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:01.629 17:12:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:01.629 17:12:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:01.629 17:12:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:01.629 17:12:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:01.629 17:12:01 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:01.629 17:12:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:23:01.630 17:12:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:23:01.630 17:12:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:01.630 17:12:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:01.630 17:12:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:02.562 Waiting for block devices as requested 00:23:02.562 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:23:02.819 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:02.819 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:03.077 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:03.078 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:03.078 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:03.350 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:03.350 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:03.350 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:03.350 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:03.615 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:03.615 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:03.615 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:03.615 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:03.872 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:03.872 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:03.872 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:04.130 17:12:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:04.130 17:12:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:04.130 17:12:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:04.130 17:12:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:23:04.130 17:12:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:04.130 17:12:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:04.130 17:12:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:04.130 17:12:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:04.130 17:12:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:04.130 No valid GPT data, bailing 00:23:04.130 17:12:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:04.130 17:12:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:23:04.130 17:12:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:23:04.130 17:12:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:04.130 17:12:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:23:04.130 17:12:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:04.130 17:12:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:04.130 17:12:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:04.130 17:12:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:04.130 17:12:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:23:04.130 17:12:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:23:04.130 17:12:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:23:04.130 17:12:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:04.130 17:12:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:23:04.130 17:12:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:23:04.130 17:12:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:23:04.130 17:12:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:04.130 17:12:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:23:04.130 00:23:04.130 Discovery Log Number of Records 2, Generation counter 2 00:23:04.130 =====Discovery Log Entry 0====== 00:23:04.130 trtype: tcp 00:23:04.130 adrfam: ipv4 00:23:04.130 subtype: current discovery subsystem 00:23:04.130 treq: not specified, sq flow control disable supported 00:23:04.130 portid: 1 00:23:04.130 trsvcid: 4420 00:23:04.130 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:04.130 traddr: 10.0.0.1 00:23:04.130 eflags: none 00:23:04.130 sectype: none 00:23:04.130 =====Discovery Log Entry 1====== 00:23:04.130 trtype: tcp 00:23:04.130 adrfam: ipv4 00:23:04.131 subtype: nvme subsystem 00:23:04.131 treq: not specified, sq flow control disable supported 00:23:04.131 portid: 1 00:23:04.131 trsvcid: 4420 00:23:04.131 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:04.131 traddr: 10.0.0.1 00:23:04.131 eflags: none 00:23:04.131 sectype: none 00:23:04.131 17:12:03 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:23:04.131 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:23:04.131 EAL: No free 2048 kB hugepages reported on node 1 00:23:04.391 ===================================================== 00:23:04.391 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:04.391 ===================================================== 00:23:04.391 Controller Capabilities/Features 00:23:04.391 ================================ 00:23:04.391 Vendor ID: 0000 00:23:04.391 Subsystem Vendor ID: 0000 00:23:04.391 Serial Number: f1d3aedb7877096997ec 00:23:04.391 Model Number: Linux 00:23:04.391 Firmware Version: 6.7.0-68 00:23:04.391 Recommended Arb Burst: 0 00:23:04.391 IEEE OUI Identifier: 00 00 00 00:23:04.391 Multi-path I/O 00:23:04.391 May have multiple subsystem ports: No 00:23:04.391 May have multiple 
controllers: No 00:23:04.391 Associated with SR-IOV VF: No 00:23:04.391 Max Data Transfer Size: Unlimited 00:23:04.391 Max Number of Namespaces: 0 00:23:04.391 Max Number of I/O Queues: 1024 00:23:04.391 NVMe Specification Version (VS): 1.3 00:23:04.391 NVMe Specification Version (Identify): 1.3 00:23:04.391 Maximum Queue Entries: 1024 00:23:04.391 Contiguous Queues Required: No 00:23:04.391 Arbitration Mechanisms Supported 00:23:04.391 Weighted Round Robin: Not Supported 00:23:04.391 Vendor Specific: Not Supported 00:23:04.391 Reset Timeout: 7500 ms 00:23:04.391 Doorbell Stride: 4 bytes 00:23:04.391 NVM Subsystem Reset: Not Supported 00:23:04.391 Command Sets Supported 00:23:04.391 NVM Command Set: Supported 00:23:04.391 Boot Partition: Not Supported 00:23:04.391 Memory Page Size Minimum: 4096 bytes 00:23:04.391 Memory Page Size Maximum: 4096 bytes 00:23:04.391 Persistent Memory Region: Not Supported 00:23:04.391 Optional Asynchronous Events Supported 00:23:04.391 Namespace Attribute Notices: Not Supported 00:23:04.391 Firmware Activation Notices: Not Supported 00:23:04.391 ANA Change Notices: Not Supported 00:23:04.391 PLE Aggregate Log Change Notices: Not Supported 00:23:04.391 LBA Status Info Alert Notices: Not Supported 00:23:04.391 EGE Aggregate Log Change Notices: Not Supported 00:23:04.391 Normal NVM Subsystem Shutdown event: Not Supported 00:23:04.391 Zone Descriptor Change Notices: Not Supported 00:23:04.391 Discovery Log Change Notices: Supported 00:23:04.391 Controller Attributes 00:23:04.391 128-bit Host Identifier: Not Supported 00:23:04.391 Non-Operational Permissive Mode: Not Supported 00:23:04.391 NVM Sets: Not Supported 00:23:04.391 Read Recovery Levels: Not Supported 00:23:04.391 Endurance Groups: Not Supported 00:23:04.391 Predictable Latency Mode: Not Supported 00:23:04.391 Traffic Based Keep ALive: Not Supported 00:23:04.391 Namespace Granularity: Not Supported 00:23:04.391 SQ Associations: Not Supported 00:23:04.391 UUID List: Not Supported 00:23:04.391 Multi-Domain Subsystem: Not Supported 00:23:04.391 Fixed Capacity Management: Not Supported 00:23:04.391 Variable Capacity Management: Not Supported 00:23:04.391 Delete Endurance Group: Not Supported 00:23:04.391 Delete NVM Set: Not Supported 00:23:04.391 Extended LBA Formats Supported: Not Supported 00:23:04.391 Flexible Data Placement Supported: Not Supported 00:23:04.391 00:23:04.391 Controller Memory Buffer Support 00:23:04.391 ================================ 00:23:04.391 Supported: No 00:23:04.391 00:23:04.391 Persistent Memory Region Support 00:23:04.391 ================================ 00:23:04.391 Supported: No 00:23:04.391 00:23:04.391 Admin Command Set Attributes 00:23:04.391 ============================ 00:23:04.391 Security Send/Receive: Not Supported 00:23:04.391 Format NVM: Not Supported 00:23:04.391 Firmware Activate/Download: Not Supported 00:23:04.391 Namespace Management: Not Supported 00:23:04.391 Device Self-Test: Not Supported 00:23:04.391 Directives: Not Supported 00:23:04.391 NVMe-MI: Not Supported 00:23:04.391 Virtualization Management: Not Supported 00:23:04.391 Doorbell Buffer Config: Not Supported 00:23:04.391 Get LBA Status Capability: Not Supported 00:23:04.391 Command & Feature Lockdown Capability: Not Supported 00:23:04.391 Abort Command Limit: 1 00:23:04.391 Async Event Request Limit: 1 00:23:04.391 Number of Firmware Slots: N/A 00:23:04.391 Firmware Slot 1 Read-Only: N/A 00:23:04.391 Firmware Activation Without Reset: N/A 00:23:04.391 Multiple Update Detection Support: N/A 
00:23:04.391 Firmware Update Granularity: No Information Provided 00:23:04.391 Per-Namespace SMART Log: No 00:23:04.391 Asymmetric Namespace Access Log Page: Not Supported 00:23:04.391 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:04.391 Command Effects Log Page: Not Supported 00:23:04.391 Get Log Page Extended Data: Supported 00:23:04.391 Telemetry Log Pages: Not Supported 00:23:04.391 Persistent Event Log Pages: Not Supported 00:23:04.391 Supported Log Pages Log Page: May Support 00:23:04.391 Commands Supported & Effects Log Page: Not Supported 00:23:04.391 Feature Identifiers & Effects Log Page:May Support 00:23:04.391 NVMe-MI Commands & Effects Log Page: May Support 00:23:04.391 Data Area 4 for Telemetry Log: Not Supported 00:23:04.391 Error Log Page Entries Supported: 1 00:23:04.391 Keep Alive: Not Supported 00:23:04.391 00:23:04.391 NVM Command Set Attributes 00:23:04.391 ========================== 00:23:04.391 Submission Queue Entry Size 00:23:04.391 Max: 1 00:23:04.391 Min: 1 00:23:04.391 Completion Queue Entry Size 00:23:04.391 Max: 1 00:23:04.391 Min: 1 00:23:04.391 Number of Namespaces: 0 00:23:04.391 Compare Command: Not Supported 00:23:04.391 Write Uncorrectable Command: Not Supported 00:23:04.391 Dataset Management Command: Not Supported 00:23:04.391 Write Zeroes Command: Not Supported 00:23:04.391 Set Features Save Field: Not Supported 00:23:04.391 Reservations: Not Supported 00:23:04.391 Timestamp: Not Supported 00:23:04.391 Copy: Not Supported 00:23:04.391 Volatile Write Cache: Not Present 00:23:04.391 Atomic Write Unit (Normal): 1 00:23:04.391 Atomic Write Unit (PFail): 1 00:23:04.391 Atomic Compare & Write Unit: 1 00:23:04.391 Fused Compare & Write: Not Supported 00:23:04.391 Scatter-Gather List 00:23:04.391 SGL Command Set: Supported 00:23:04.391 SGL Keyed: Not Supported 00:23:04.391 SGL Bit Bucket Descriptor: Not Supported 00:23:04.391 SGL Metadata Pointer: Not Supported 00:23:04.391 Oversized SGL: Not Supported 00:23:04.391 SGL Metadata Address: Not Supported 00:23:04.391 SGL Offset: Supported 00:23:04.391 Transport SGL Data Block: Not Supported 00:23:04.391 Replay Protected Memory Block: Not Supported 00:23:04.391 00:23:04.391 Firmware Slot Information 00:23:04.391 ========================= 00:23:04.391 Active slot: 0 00:23:04.391 00:23:04.392 00:23:04.392 Error Log 00:23:04.392 ========= 00:23:04.392 00:23:04.392 Active Namespaces 00:23:04.392 ================= 00:23:04.392 Discovery Log Page 00:23:04.392 ================== 00:23:04.392 Generation Counter: 2 00:23:04.392 Number of Records: 2 00:23:04.392 Record Format: 0 00:23:04.392 00:23:04.392 Discovery Log Entry 0 00:23:04.392 ---------------------- 00:23:04.392 Transport Type: 3 (TCP) 00:23:04.392 Address Family: 1 (IPv4) 00:23:04.392 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:04.392 Entry Flags: 00:23:04.392 Duplicate Returned Information: 0 00:23:04.392 Explicit Persistent Connection Support for Discovery: 0 00:23:04.392 Transport Requirements: 00:23:04.392 Secure Channel: Not Specified 00:23:04.392 Port ID: 1 (0x0001) 00:23:04.392 Controller ID: 65535 (0xffff) 00:23:04.392 Admin Max SQ Size: 32 00:23:04.392 Transport Service Identifier: 4420 00:23:04.392 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:04.392 Transport Address: 10.0.0.1 00:23:04.392 Discovery Log Entry 1 00:23:04.392 ---------------------- 00:23:04.392 Transport Type: 3 (TCP) 00:23:04.392 Address Family: 1 (IPv4) 00:23:04.392 Subsystem Type: 2 (NVM Subsystem) 00:23:04.392 Entry Flags: 
00:23:04.392 Duplicate Returned Information: 0 00:23:04.392 Explicit Persistent Connection Support for Discovery: 0 00:23:04.392 Transport Requirements: 00:23:04.392 Secure Channel: Not Specified 00:23:04.392 Port ID: 1 (0x0001) 00:23:04.392 Controller ID: 65535 (0xffff) 00:23:04.392 Admin Max SQ Size: 32 00:23:04.392 Transport Service Identifier: 4420 00:23:04.392 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:23:04.392 Transport Address: 10.0.0.1 00:23:04.392 17:12:03 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:04.392 EAL: No free 2048 kB hugepages reported on node 1 00:23:04.392 get_feature(0x01) failed 00:23:04.392 get_feature(0x02) failed 00:23:04.392 get_feature(0x04) failed 00:23:04.392 ===================================================== 00:23:04.392 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:04.392 ===================================================== 00:23:04.392 Controller Capabilities/Features 00:23:04.392 ================================ 00:23:04.392 Vendor ID: 0000 00:23:04.392 Subsystem Vendor ID: 0000 00:23:04.392 Serial Number: 5748b992b987d8b4f460 00:23:04.392 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:23:04.392 Firmware Version: 6.7.0-68 00:23:04.392 Recommended Arb Burst: 6 00:23:04.392 IEEE OUI Identifier: 00 00 00 00:23:04.392 Multi-path I/O 00:23:04.392 May have multiple subsystem ports: Yes 00:23:04.392 May have multiple controllers: Yes 00:23:04.392 Associated with SR-IOV VF: No 00:23:04.392 Max Data Transfer Size: Unlimited 00:23:04.392 Max Number of Namespaces: 1024 00:23:04.392 Max Number of I/O Queues: 128 00:23:04.392 NVMe Specification Version (VS): 1.3 00:23:04.392 NVMe Specification Version (Identify): 1.3 00:23:04.392 Maximum Queue Entries: 1024 00:23:04.392 Contiguous Queues Required: No 00:23:04.392 Arbitration Mechanisms Supported 00:23:04.392 Weighted Round Robin: Not Supported 00:23:04.392 Vendor Specific: Not Supported 00:23:04.392 Reset Timeout: 7500 ms 00:23:04.392 Doorbell Stride: 4 bytes 00:23:04.392 NVM Subsystem Reset: Not Supported 00:23:04.392 Command Sets Supported 00:23:04.392 NVM Command Set: Supported 00:23:04.392 Boot Partition: Not Supported 00:23:04.392 Memory Page Size Minimum: 4096 bytes 00:23:04.392 Memory Page Size Maximum: 4096 bytes 00:23:04.392 Persistent Memory Region: Not Supported 00:23:04.392 Optional Asynchronous Events Supported 00:23:04.392 Namespace Attribute Notices: Supported 00:23:04.392 Firmware Activation Notices: Not Supported 00:23:04.392 ANA Change Notices: Supported 00:23:04.392 PLE Aggregate Log Change Notices: Not Supported 00:23:04.392 LBA Status Info Alert Notices: Not Supported 00:23:04.392 EGE Aggregate Log Change Notices: Not Supported 00:23:04.392 Normal NVM Subsystem Shutdown event: Not Supported 00:23:04.392 Zone Descriptor Change Notices: Not Supported 00:23:04.392 Discovery Log Change Notices: Not Supported 00:23:04.392 Controller Attributes 00:23:04.392 128-bit Host Identifier: Supported 00:23:04.392 Non-Operational Permissive Mode: Not Supported 00:23:04.392 NVM Sets: Not Supported 00:23:04.392 Read Recovery Levels: Not Supported 00:23:04.392 Endurance Groups: Not Supported 00:23:04.392 Predictable Latency Mode: Not Supported 00:23:04.392 Traffic Based Keep ALive: Supported 00:23:04.392 Namespace Granularity: Not Supported 
00:23:04.392 SQ Associations: Not Supported 00:23:04.392 UUID List: Not Supported 00:23:04.392 Multi-Domain Subsystem: Not Supported 00:23:04.392 Fixed Capacity Management: Not Supported 00:23:04.392 Variable Capacity Management: Not Supported 00:23:04.392 Delete Endurance Group: Not Supported 00:23:04.392 Delete NVM Set: Not Supported 00:23:04.392 Extended LBA Formats Supported: Not Supported 00:23:04.392 Flexible Data Placement Supported: Not Supported 00:23:04.392 00:23:04.392 Controller Memory Buffer Support 00:23:04.392 ================================ 00:23:04.392 Supported: No 00:23:04.392 00:23:04.392 Persistent Memory Region Support 00:23:04.392 ================================ 00:23:04.392 Supported: No 00:23:04.392 00:23:04.392 Admin Command Set Attributes 00:23:04.392 ============================ 00:23:04.392 Security Send/Receive: Not Supported 00:23:04.392 Format NVM: Not Supported 00:23:04.392 Firmware Activate/Download: Not Supported 00:23:04.392 Namespace Management: Not Supported 00:23:04.392 Device Self-Test: Not Supported 00:23:04.392 Directives: Not Supported 00:23:04.392 NVMe-MI: Not Supported 00:23:04.392 Virtualization Management: Not Supported 00:23:04.392 Doorbell Buffer Config: Not Supported 00:23:04.392 Get LBA Status Capability: Not Supported 00:23:04.392 Command & Feature Lockdown Capability: Not Supported 00:23:04.392 Abort Command Limit: 4 00:23:04.392 Async Event Request Limit: 4 00:23:04.392 Number of Firmware Slots: N/A 00:23:04.392 Firmware Slot 1 Read-Only: N/A 00:23:04.392 Firmware Activation Without Reset: N/A 00:23:04.392 Multiple Update Detection Support: N/A 00:23:04.392 Firmware Update Granularity: No Information Provided 00:23:04.392 Per-Namespace SMART Log: Yes 00:23:04.392 Asymmetric Namespace Access Log Page: Supported 00:23:04.392 ANA Transition Time : 10 sec 00:23:04.392 00:23:04.392 Asymmetric Namespace Access Capabilities 00:23:04.392 ANA Optimized State : Supported 00:23:04.392 ANA Non-Optimized State : Supported 00:23:04.392 ANA Inaccessible State : Supported 00:23:04.392 ANA Persistent Loss State : Supported 00:23:04.392 ANA Change State : Supported 00:23:04.392 ANAGRPID is not changed : No 00:23:04.392 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:23:04.392 00:23:04.392 ANA Group Identifier Maximum : 128 00:23:04.392 Number of ANA Group Identifiers : 128 00:23:04.392 Max Number of Allowed Namespaces : 1024 00:23:04.392 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:23:04.392 Command Effects Log Page: Supported 00:23:04.392 Get Log Page Extended Data: Supported 00:23:04.392 Telemetry Log Pages: Not Supported 00:23:04.392 Persistent Event Log Pages: Not Supported 00:23:04.392 Supported Log Pages Log Page: May Support 00:23:04.392 Commands Supported & Effects Log Page: Not Supported 00:23:04.392 Feature Identifiers & Effects Log Page:May Support 00:23:04.392 NVMe-MI Commands & Effects Log Page: May Support 00:23:04.392 Data Area 4 for Telemetry Log: Not Supported 00:23:04.392 Error Log Page Entries Supported: 128 00:23:04.392 Keep Alive: Supported 00:23:04.392 Keep Alive Granularity: 1000 ms 00:23:04.392 00:23:04.392 NVM Command Set Attributes 00:23:04.392 ========================== 00:23:04.392 Submission Queue Entry Size 00:23:04.392 Max: 64 00:23:04.392 Min: 64 00:23:04.392 Completion Queue Entry Size 00:23:04.392 Max: 16 00:23:04.392 Min: 16 00:23:04.392 Number of Namespaces: 1024 00:23:04.392 Compare Command: Not Supported 00:23:04.392 Write Uncorrectable Command: Not Supported 00:23:04.392 Dataset Management Command: Supported 
00:23:04.392 Write Zeroes Command: Supported 00:23:04.392 Set Features Save Field: Not Supported 00:23:04.392 Reservations: Not Supported 00:23:04.392 Timestamp: Not Supported 00:23:04.392 Copy: Not Supported 00:23:04.392 Volatile Write Cache: Present 00:23:04.392 Atomic Write Unit (Normal): 1 00:23:04.392 Atomic Write Unit (PFail): 1 00:23:04.392 Atomic Compare & Write Unit: 1 00:23:04.392 Fused Compare & Write: Not Supported 00:23:04.392 Scatter-Gather List 00:23:04.392 SGL Command Set: Supported 00:23:04.392 SGL Keyed: Not Supported 00:23:04.392 SGL Bit Bucket Descriptor: Not Supported 00:23:04.392 SGL Metadata Pointer: Not Supported 00:23:04.392 Oversized SGL: Not Supported 00:23:04.393 SGL Metadata Address: Not Supported 00:23:04.393 SGL Offset: Supported 00:23:04.393 Transport SGL Data Block: Not Supported 00:23:04.393 Replay Protected Memory Block: Not Supported 00:23:04.393 00:23:04.393 Firmware Slot Information 00:23:04.393 ========================= 00:23:04.393 Active slot: 0 00:23:04.393 00:23:04.393 Asymmetric Namespace Access 00:23:04.393 =========================== 00:23:04.393 Change Count : 0 00:23:04.393 Number of ANA Group Descriptors : 1 00:23:04.393 ANA Group Descriptor : 0 00:23:04.393 ANA Group ID : 1 00:23:04.393 Number of NSID Values : 1 00:23:04.393 Change Count : 0 00:23:04.393 ANA State : 1 00:23:04.393 Namespace Identifier : 1 00:23:04.393 00:23:04.393 Commands Supported and Effects 00:23:04.393 ============================== 00:23:04.393 Admin Commands 00:23:04.393 -------------- 00:23:04.393 Get Log Page (02h): Supported 00:23:04.393 Identify (06h): Supported 00:23:04.393 Abort (08h): Supported 00:23:04.393 Set Features (09h): Supported 00:23:04.393 Get Features (0Ah): Supported 00:23:04.393 Asynchronous Event Request (0Ch): Supported 00:23:04.393 Keep Alive (18h): Supported 00:23:04.393 I/O Commands 00:23:04.393 ------------ 00:23:04.393 Flush (00h): Supported 00:23:04.393 Write (01h): Supported LBA-Change 00:23:04.393 Read (02h): Supported 00:23:04.393 Write Zeroes (08h): Supported LBA-Change 00:23:04.393 Dataset Management (09h): Supported 00:23:04.393 00:23:04.393 Error Log 00:23:04.393 ========= 00:23:04.393 Entry: 0 00:23:04.393 Error Count: 0x3 00:23:04.393 Submission Queue Id: 0x0 00:23:04.393 Command Id: 0x5 00:23:04.393 Phase Bit: 0 00:23:04.393 Status Code: 0x2 00:23:04.393 Status Code Type: 0x0 00:23:04.393 Do Not Retry: 1 00:23:04.393 Error Location: 0x28 00:23:04.393 LBA: 0x0 00:23:04.393 Namespace: 0x0 00:23:04.393 Vendor Log Page: 0x0 00:23:04.393 ----------- 00:23:04.393 Entry: 1 00:23:04.393 Error Count: 0x2 00:23:04.393 Submission Queue Id: 0x0 00:23:04.393 Command Id: 0x5 00:23:04.393 Phase Bit: 0 00:23:04.393 Status Code: 0x2 00:23:04.393 Status Code Type: 0x0 00:23:04.393 Do Not Retry: 1 00:23:04.393 Error Location: 0x28 00:23:04.393 LBA: 0x0 00:23:04.393 Namespace: 0x0 00:23:04.393 Vendor Log Page: 0x0 00:23:04.393 ----------- 00:23:04.393 Entry: 2 00:23:04.393 Error Count: 0x1 00:23:04.393 Submission Queue Id: 0x0 00:23:04.393 Command Id: 0x4 00:23:04.393 Phase Bit: 0 00:23:04.393 Status Code: 0x2 00:23:04.393 Status Code Type: 0x0 00:23:04.393 Do Not Retry: 1 00:23:04.393 Error Location: 0x28 00:23:04.393 LBA: 0x0 00:23:04.393 Namespace: 0x0 00:23:04.393 Vendor Log Page: 0x0 00:23:04.393 00:23:04.393 Number of Queues 00:23:04.393 ================ 00:23:04.393 Number of I/O Submission Queues: 128 00:23:04.393 Number of I/O Completion Queues: 128 00:23:04.393 00:23:04.393 ZNS Specific Controller Data 00:23:04.393 
============================ 00:23:04.393 Zone Append Size Limit: 0 00:23:04.393 00:23:04.393 00:23:04.393 Active Namespaces 00:23:04.393 ================= 00:23:04.393 get_feature(0x05) failed 00:23:04.393 Namespace ID:1 00:23:04.393 Command Set Identifier: NVM (00h) 00:23:04.393 Deallocate: Supported 00:23:04.393 Deallocated/Unwritten Error: Not Supported 00:23:04.393 Deallocated Read Value: Unknown 00:23:04.393 Deallocate in Write Zeroes: Not Supported 00:23:04.393 Deallocated Guard Field: 0xFFFF 00:23:04.393 Flush: Supported 00:23:04.393 Reservation: Not Supported 00:23:04.393 Namespace Sharing Capabilities: Multiple Controllers 00:23:04.393 Size (in LBAs): 1953525168 (931GiB) 00:23:04.393 Capacity (in LBAs): 1953525168 (931GiB) 00:23:04.393 Utilization (in LBAs): 1953525168 (931GiB) 00:23:04.393 UUID: e417e84e-d393-47d8-a4f5-08da457fdb00 00:23:04.393 Thin Provisioning: Not Supported 00:23:04.393 Per-NS Atomic Units: Yes 00:23:04.393 Atomic Boundary Size (Normal): 0 00:23:04.393 Atomic Boundary Size (PFail): 0 00:23:04.393 Atomic Boundary Offset: 0 00:23:04.393 NGUID/EUI64 Never Reused: No 00:23:04.393 ANA group ID: 1 00:23:04.393 Namespace Write Protected: No 00:23:04.393 Number of LBA Formats: 1 00:23:04.393 Current LBA Format: LBA Format #00 00:23:04.393 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:04.393 00:23:04.393 17:12:03 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:23:04.393 17:12:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:04.393 17:12:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:23:04.393 17:12:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:04.393 17:12:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:23:04.393 17:12:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:04.393 17:12:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:04.393 rmmod nvme_tcp 00:23:04.393 rmmod nvme_fabrics 00:23:04.393 17:12:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:04.393 17:12:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:23:04.393 17:12:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:23:04.393 17:12:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:04.393 17:12:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:04.393 17:12:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:04.393 17:12:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:04.393 17:12:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:04.393 17:12:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:04.393 17:12:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.393 17:12:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:04.393 17:12:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.930 17:12:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:06.930 
17:12:06 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:23:06.930 17:12:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:06.930 17:12:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:23:06.930 17:12:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:06.930 17:12:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:06.930 17:12:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:06.930 17:12:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:06.930 17:12:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:06.930 17:12:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:23:06.930 17:12:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:07.865 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:07.865 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:07.865 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:07.865 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:07.865 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:07.865 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:07.865 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:07.865 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:07.865 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:07.865 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:07.865 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:07.865 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:07.865 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:07.865 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:07.865 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:07.865 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:08.829 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:23:08.829 00:23:08.829 real 0m9.510s 00:23:08.829 user 0m1.987s 00:23:08.829 sys 0m3.461s 00:23:08.829 17:12:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:08.829 17:12:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.829 ************************************ 00:23:08.829 END TEST nvmf_identify_kernel_target 00:23:08.829 ************************************ 00:23:08.829 17:12:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:08.829 17:12:08 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:08.829 17:12:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:08.829 17:12:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:08.829 17:12:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:08.829 ************************************ 00:23:08.829 START TEST nvmf_auth_host 00:23:08.829 ************************************ 00:23:08.829 17:12:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
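clean_kernel_target removes the configfs-based kernel target left over from the identify test before the auth test starts. The xtrace hides the redirect target of the bare 'echo 0', so the attribute it writes to is an assumption (disabling the namespace before removal); the rest follows the rm/rmdir order shown above:

  nqn=nqn.2016-06.io.spdk:testnqn
  nvmet=/sys/kernel/config/nvmet
  if [[ -e $nvmet/subsystems/$nqn ]]; then
      echo 0 > $nvmet/subsystems/$nqn/namespaces/1/enable   # assumed target of the 'echo 0'
      rm -f  $nvmet/ports/1/subsystems/$nqn                 # unlink the subsystem from the port
      rmdir  $nvmet/subsystems/$nqn/namespaces/1
      rmdir  $nvmet/ports/1
      rmdir  $nvmet/subsystems/$nqn
  fi
  modprobe -r nvmet_tcp nvmet
  # setup.sh then rebinds the ioatdma/nvme devices to vfio-pci, as logged right after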
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:09.088 * Looking for test storage... 00:23:09.088 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:23:09.088 17:12:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.990 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:10.990 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:23:10.990 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:10.990 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:10.990 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:10.990 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:10.990 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:10.990 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:23:10.990 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:10.990 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:23:10.990 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:23:10.990 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:23:10.990 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:23:10.990 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:23:10.990 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:23:10.990 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:10.990 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:10.990 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:10.990 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:10.990 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:10.990 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:10.990 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:10.990 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:10.990 
17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:10.990 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:10.990 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:10.990 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:10.990 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:10.990 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:10.990 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:10.990 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:10.990 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:10.991 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:10.991 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:10.991 Found net devices under 0000:84:00.0: 
cvl_0_0 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:10.991 Found net devices under 0000:84:00.1: cvl_0_1 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:10.991 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:10.991 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:23:10.991 00:23:10.991 --- 10.0.0.2 ping statistics --- 00:23:10.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:10.991 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:10.991 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:10.991 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:23:10.991 00:23:10.991 --- 10.0.0.1 ping statistics --- 00:23:10.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:10.991 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1209742 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1209742 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1209742 ']' 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
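The E810 port pair is split across a network namespace so target and initiator can exercise real NICs on a single machine: cvl_0_0 is moved into cvl_0_0_ns_spdk as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and the SPDK target is launched inside the namespace. Condensed from the ip/iptables/nvmf_tgt lines above (paths shortened):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &

Both directions are ping-tested (as shown above) before the target is treated as reachable.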
00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:10.991 17:12:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.556 17:12:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:11.556 17:12:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:23:11.556 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:11.556 17:12:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:11.556 17:12:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.556 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:11.557 17:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:23:11.557 17:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:23:11.557 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:11.557 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:11.557 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:11.557 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:11.557 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:11.557 17:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=84e0503c2e7c9dc951e35316b1cb6a31 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.x3D 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 84e0503c2e7c9dc951e35316b1cb6a31 0 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 84e0503c2e7c9dc951e35316b1cb6a31 0 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=84e0503c2e7c9dc951e35316b1cb6a31 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.x3D 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.x3D 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.x3D 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:23:11.557 
17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4c19df5ae8dd0e9858dbc66c6d0a53292525bf3be95941754f08404001494a2a 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.eSe 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4c19df5ae8dd0e9858dbc66c6d0a53292525bf3be95941754f08404001494a2a 3 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4c19df5ae8dd0e9858dbc66c6d0a53292525bf3be95941754f08404001494a2a 3 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4c19df5ae8dd0e9858dbc66c6d0a53292525bf3be95941754f08404001494a2a 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.eSe 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.eSe 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.eSe 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=dc3a4b8b87e44a14d96412213543ae8db6183975ff1da2be 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.ABa 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key dc3a4b8b87e44a14d96412213543ae8db6183975ff1da2be 0 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 dc3a4b8b87e44a14d96412213543ae8db6183975ff1da2be 0 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=dc3a4b8b87e44a14d96412213543ae8db6183975ff1da2be 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.ABa 00:23:11.557 17:12:11 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.ABa 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.ABa 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ccacf92fcee06a288a1e390c922033a4bcf03aba719bd7c8 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.6UR 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ccacf92fcee06a288a1e390c922033a4bcf03aba719bd7c8 2 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ccacf92fcee06a288a1e390c922033a4bcf03aba719bd7c8 2 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ccacf92fcee06a288a1e390c922033a4bcf03aba719bd7c8 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.6UR 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.6UR 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.6UR 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b063c0e3a858107ceacbad058a991039 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.y0t 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b063c0e3a858107ceacbad058a991039 1 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b063c0e3a858107ceacbad058a991039 1 
00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b063c0e3a858107ceacbad058a991039 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:23:11.557 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:11.815 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.y0t 00:23:11.815 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.y0t 00:23:11.815 17:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.y0t 00:23:11.815 17:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:11.815 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:11.815 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:11.815 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:11.815 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:23:11.815 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:11.815 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:11.815 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7bfa47bb0bd6cdfa4aadf05de407145f 00:23:11.815 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:23:11.815 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.UYW 00:23:11.815 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7bfa47bb0bd6cdfa4aadf05de407145f 1 00:23:11.815 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7bfa47bb0bd6cdfa4aadf05de407145f 1 00:23:11.815 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:11.815 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:11.815 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7bfa47bb0bd6cdfa4aadf05de407145f 00:23:11.815 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:23:11.815 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:11.815 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.UYW 00:23:11.815 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.UYW 00:23:11.815 17:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.UYW 00:23:11.815 17:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:23:11.815 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:11.815 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:11.815 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:11.815 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:23:11.815 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:11.815 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:11.815 17:12:11 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=939c99e498b1ffefd88b6ee0a376fac4f5018ad31fffcdba 00:23:11.815 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:23:11.815 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.NVB 00:23:11.815 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 939c99e498b1ffefd88b6ee0a376fac4f5018ad31fffcdba 2 00:23:11.815 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 939c99e498b1ffefd88b6ee0a376fac4f5018ad31fffcdba 2 00:23:11.815 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:11.815 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:11.815 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=939c99e498b1ffefd88b6ee0a376fac4f5018ad31fffcdba 00:23:11.815 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:23:11.815 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:11.815 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.NVB 00:23:11.815 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.NVB 00:23:11.815 17:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.NVB 00:23:11.815 17:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:23:11.815 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:11.815 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:11.815 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:11.815 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:11.815 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:11.815 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:11.815 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=808a68aa1af233bffa631fc0cdd115c6 00:23:11.815 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:11.815 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.9et 00:23:11.816 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 808a68aa1af233bffa631fc0cdd115c6 0 00:23:11.816 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 808a68aa1af233bffa631fc0cdd115c6 0 00:23:11.816 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:11.816 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:11.816 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=808a68aa1af233bffa631fc0cdd115c6 00:23:11.816 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:11.816 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:11.816 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.9et 00:23:11.816 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.9et 00:23:11.816 17:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.9et 00:23:11.816 17:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:23:11.816 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:23:11.816 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:11.816 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:11.816 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:23:11.816 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:23:11.816 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:11.816 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=fef5c765184394fb77003026a2f0f771735cb9f09b87b19d589e5c02ca002dde 00:23:11.816 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:23:11.816 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.4MD 00:23:11.816 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key fef5c765184394fb77003026a2f0f771735cb9f09b87b19d589e5c02ca002dde 3 00:23:11.816 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 fef5c765184394fb77003026a2f0f771735cb9f09b87b19d589e5c02ca002dde 3 00:23:11.816 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:11.816 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:11.816 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=fef5c765184394fb77003026a2f0f771735cb9f09b87b19d589e5c02ca002dde 00:23:11.816 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:23:11.816 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:11.816 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.4MD 00:23:11.816 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.4MD 00:23:11.816 17:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.4MD 00:23:11.816 17:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:23:11.816 17:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1209742 00:23:11.816 17:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1209742 ']' 00:23:11.816 17:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:11.816 17:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:11.816 17:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:11.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
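Each gen_dhchap_key call above draws random hex from /dev/urandom with xxd and wraps it in the DH-HMAC-CHAP secret representation (DHHC-1:<hash id>:<base64 payload>:) before writing it to a 0600 temp file. The wrapping itself is done by an inline 'python -' whose body the xtrace does not show; the sketch below assumes the payload is the ASCII hex string with a little-endian CRC-32 appended, which matches the lengths of the keys printed later but should be read as an assumption, not SPDK's exact code:

  key=$(xxd -p -c0 -l 16 /dev/urandom)        # 32 hex chars, as for the null/sha256 keys above
  file=$(mktemp -t spdk.key-null.XXX)
  # digest id: 0=null, 1=sha256, 2=sha384, 3=sha512 (per the digests map in the trace)
  python3 -c 'import sys,base64,zlib; k=sys.argv[1].encode(); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k + zlib.crc32(k).to_bytes(4,"little")).decode()))' "$key" 0 > "$file"
  chmod 0600 "$file"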
00:23:11.816 17:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:11.816 17:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.074 17:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:12.074 17:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:23:12.074 17:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:12.074 17:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.x3D 00:23:12.074 17:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.074 17:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.074 17:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.074 17:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.eSe ]] 00:23:12.074 17:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.eSe 00:23:12.074 17:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.074 17:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.074 17:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.074 17:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:12.074 17:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.ABa 00:23:12.074 17:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.074 17:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.074 17:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.074 17:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.6UR ]] 00:23:12.074 17:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.6UR 00:23:12.074 17:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.074 17:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.074 17:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.074 17:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:12.074 17:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.y0t 00:23:12.074 17:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.074 17:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.074 17:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.074 17:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.UYW ]] 00:23:12.074 17:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.UYW 00:23:12.074 17:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.074 17:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.074 17:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.074 17:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:23:12.074 17:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.NVB 00:23:12.074 17:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.074 17:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.074 17:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.074 17:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.9et ]] 00:23:12.074 17:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.9et 00:23:12.074 17:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.074 17:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.332 17:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.332 17:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:12.332 17:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.4MD 00:23:12.332 17:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.332 17:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.332 17:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.332 17:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:23:12.332 17:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:23:12.332 17:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:23:12.332 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:12.332 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:12.332 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:12.332 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:12.332 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:12.332 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:12.332 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:12.332 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:12.332 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:12.332 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:12.332 17:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:23:12.332 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:23:12.332 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:12.332 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:12.332 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:12.332 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:12.332 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
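Each key file is then registered with the running target under a keyring name (rpc_cmd is the autotest wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock); spelled out, the calls in this run are equivalent to:

  scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.x3D
  scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.eSe
  scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-null.ABa
  scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.6UR
  scripts/rpc.py keyring_file_add_key key2  /tmp/spdk.key-sha256.y0t
  scripts/rpc.py keyring_file_add_key ckey2 /tmp/spdk.key-sha256.UYW
  scripts/rpc.py keyring_file_add_key key3  /tmp/spdk.key-sha384.NVB
  scripts/rpc.py keyring_file_add_key ckey3 /tmp/spdk.key-null.9et
  scripts/rpc.py keyring_file_add_key key4  /tmp/spdk.key-sha512.4MD   # ckeys[4] is empty, so no ckey4

nvmet_auth_init then builds the matching kernel NVMe-oF target on the initiator-facing address (10.0.0.1), which the configure_kernel_target trace that follows walks through.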
00:23:12.332 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:23:12.332 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:12.332 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:12.332 17:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:13.266 Waiting for block devices as requested 00:23:13.266 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:23:13.266 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:13.266 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:13.523 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:13.523 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:13.523 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:13.781 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:13.781 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:13.781 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:13.781 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:14.039 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:14.039 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:14.039 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:14.296 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:14.296 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:14.296 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:14.296 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:14.860 17:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:14.860 17:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:14.860 17:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:14.860 17:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:23:14.860 17:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:14.860 17:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:14.860 17:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:14.860 17:12:14 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:14.860 17:12:14 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:14.860 No valid GPT data, bailing 00:23:14.860 17:12:14 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:14.860 17:12:14 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:23:14.860 17:12:14 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:23:14.860 17:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:14.860 17:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:23:14.860 17:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:14.860 17:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:14.860 17:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:14.860 17:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:23:14.860 17:12:14 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:23:14.860 17:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:23:14.860 17:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:23:14.860 17:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:14.860 17:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:23:14.860 17:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:23:14.860 17:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:23:14.860 17:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:14.860 17:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:23:14.860 00:23:14.860 Discovery Log Number of Records 2, Generation counter 2 00:23:14.860 =====Discovery Log Entry 0====== 00:23:14.860 trtype: tcp 00:23:14.860 adrfam: ipv4 00:23:14.860 subtype: current discovery subsystem 00:23:14.860 treq: not specified, sq flow control disable supported 00:23:14.860 portid: 1 00:23:14.860 trsvcid: 4420 00:23:14.860 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:14.860 traddr: 10.0.0.1 00:23:14.860 eflags: none 00:23:14.860 sectype: none 00:23:14.860 =====Discovery Log Entry 1====== 00:23:14.860 trtype: tcp 00:23:14.860 adrfam: ipv4 00:23:14.860 subtype: nvme subsystem 00:23:14.860 treq: not specified, sq flow control disable supported 00:23:14.860 portid: 1 00:23:14.860 trsvcid: 4420 00:23:14.860 subnqn: nqn.2024-02.io.spdk:cnode0 00:23:14.860 traddr: 10.0.0.1 00:23:14.860 eflags: none 00:23:14.860 sectype: none 00:23:14.861 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:14.861 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:23:14.861 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:14.861 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:14.861 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:14.861 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:14.861 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:14.861 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:14.861 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGMzYTRiOGI4N2U0NGExNGQ5NjQxMjIxMzU0M2FlOGRiNjE4Mzk3NWZmMWRhMmJlEpv5Fw==: 00:23:14.861 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2NhY2Y5MmZjZWUwNmEyODhhMWUzOTBjOTIyMDMzYTRiY2YwM2FiYTcxOWJkN2M46BUmGw==: 00:23:14.861 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:14.861 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:14.861 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGMzYTRiOGI4N2U0NGExNGQ5NjQxMjIxMzU0M2FlOGRiNjE4Mzk3NWZmMWRhMmJlEpv5Fw==: 00:23:14.861 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2NhY2Y5MmZjZWUwNmEyODhhMWUzOTBjOTIyMDMzYTRiY2YwM2FiYTcxOWJkN2M46BUmGw==: 
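configure_kernel_target stands up a Linux-kernel nvmet target backed by the local /dev/nvme0n1 and exposes it on 10.0.0.1:4420; the nvme discover output above confirms both the discovery subsystem and nqn.2024-02.io.spdk:cnode0 are listening there. The xtrace hides the redirect targets of the echo commands, so the configfs attribute names below are inferred from the standard nvmet layout and should be read as assumptions:

  nqn=nqn.2024-02.io.spdk:cnode0
  nvmet=/sys/kernel/config/nvmet
  mkdir $nvmet/subsystems/$nqn
  mkdir $nvmet/subsystems/$nqn/namespaces/1
  mkdir $nvmet/ports/1
  echo SPDK-$nqn    > $nvmet/subsystems/$nqn/attr_serial            # assumed attribute
  echo 1            > $nvmet/subsystems/$nqn/attr_allow_any_host    # assumed attribute
  echo /dev/nvme0n1 > $nvmet/subsystems/$nqn/namespaces/1/device_path
  echo 1            > $nvmet/subsystems/$nqn/namespaces/1/enable
  echo 10.0.0.1     > $nvmet/ports/1/addr_traddr
  echo tcp          > $nvmet/ports/1/addr_trtype
  echo 4420         > $nvmet/ports/1/addr_trsvcid
  echo ipv4         > $nvmet/ports/1/addr_adrfam
  ln -s $nvmet/subsystems/$nqn $nvmet/ports/1/subsystems/
  nvme discover -t tcp -a 10.0.0.1 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
      --hostid=cd6acfbe-4794-e311-a299-001e67a97b02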
]] 00:23:14.861 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2NhY2Y5MmZjZWUwNmEyODhhMWUzOTBjOTIyMDMzYTRiY2YwM2FiYTcxOWJkN2M46BUmGw==: 00:23:14.861 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:14.861 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:23:14.861 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:14.861 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:14.861 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:23:14.861 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:14.861 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:23:14.861 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:14.861 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:14.861 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:14.861 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:14.861 17:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.861 17:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.861 17:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.861 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:14.861 17:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:14.861 17:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:14.861 17:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:14.861 17:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:14.861 17:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:14.861 17:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:14.861 17:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:14.861 17:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:14.861 17:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:14.861 17:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:14.861 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:14.861 17:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.861 17:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.118 nvme0n1 00:23:15.118 17:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.118 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.118 17:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.118 
17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:15.118 17:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.118 17:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.118 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.118 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.118 17:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.118 17:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.118 17:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.118 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:15.118 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:15.118 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:15.118 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:23:15.118 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:15.118 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:15.118 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:15.118 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:15.118 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODRlMDUwM2MyZTdjOWRjOTUxZTM1MzE2YjFjYjZhMzEZWmBk: 00:23:15.118 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGMxOWRmNWFlOGRkMGU5ODU4ZGJjNjZjNmQwYTUzMjkyNTI1YmYzYmU5NTk0MTc1NGYwODQwNDAwMTQ5NGEyYSPY7SA=: 00:23:15.118 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:15.118 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:15.118 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODRlMDUwM2MyZTdjOWRjOTUxZTM1MzE2YjFjYjZhMzEZWmBk: 00:23:15.118 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGMxOWRmNWFlOGRkMGU5ODU4ZGJjNjZjNmQwYTUzMjkyNTI1YmYzYmU5NTk0MTc1NGYwODQwNDAwMTQ5NGEyYSPY7SA=: ]] 00:23:15.118 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGMxOWRmNWFlOGRkMGU5ODU4ZGJjNjZjNmQwYTUzMjkyNTI1YmYzYmU5NTk0MTc1NGYwODQwNDAwMTQ5NGEyYSPY7SA=: 00:23:15.118 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:23:15.118 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:15.118 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:15.118 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:15.118 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:15.118 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:15.118 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:15.118 17:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.118 17:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.118 17:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.118 
17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:15.118 17:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:15.118 17:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:15.118 17:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:15.118 17:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.118 17:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.118 17:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:15.118 17:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:15.118 17:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:15.118 17:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:15.118 17:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:15.118 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:15.118 17:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.118 17:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.375 nvme0n1 00:23:15.375 17:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.375 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.375 17:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.375 17:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.375 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:15.375 17:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.375 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.375 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.375 17:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.375 17:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.375 17:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.375 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:15.375 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:15.375 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:15.375 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:15.375 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:15.375 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:15.375 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGMzYTRiOGI4N2U0NGExNGQ5NjQxMjIxMzU0M2FlOGRiNjE4Mzk3NWZmMWRhMmJlEpv5Fw==: 00:23:15.375 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2NhY2Y5MmZjZWUwNmEyODhhMWUzOTBjOTIyMDMzYTRiY2YwM2FiYTcxOWJkN2M46BUmGw==: 00:23:15.375 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:15.375 17:12:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:15.375 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGMzYTRiOGI4N2U0NGExNGQ5NjQxMjIxMzU0M2FlOGRiNjE4Mzk3NWZmMWRhMmJlEpv5Fw==: 00:23:15.375 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2NhY2Y5MmZjZWUwNmEyODhhMWUzOTBjOTIyMDMzYTRiY2YwM2FiYTcxOWJkN2M46BUmGw==: ]] 00:23:15.375 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2NhY2Y5MmZjZWUwNmEyODhhMWUzOTBjOTIyMDMzYTRiY2YwM2FiYTcxOWJkN2M46BUmGw==: 00:23:15.375 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:23:15.375 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:15.375 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:15.375 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:15.375 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:15.375 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:15.375 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:15.375 17:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.375 17:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.375 17:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.375 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:15.375 17:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:15.375 17:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:15.375 17:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:15.375 17:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.375 17:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.375 17:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:15.375 17:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:15.375 17:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:15.375 17:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:15.375 17:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:15.375 17:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:15.375 17:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.375 17:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.633 nvme0n1 00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
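The nvmet_auth_set_key passes traced above program the target side of DH-HMAC-CHAP: for each digest/dhgroup/keyid combination the test writes the selected hash, DH group, host key and (when present) controller key into the configfs entry of the allowed host created earlier. A minimal sketch of that pattern follows, assuming the standard Linux nvmet configfs attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key), which the xtrace output does not show verbatim; the DHHC-1 values are elided placeholders.

  # Target-side DHCHAP key setup, one iteration (sketch; configfs attribute names assumed)
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)'  > "$host/dhchap_hash"      # digest under test
  echo 'ffdhe2048'     > "$host/dhchap_dhgroup"   # DH group under test
  echo 'DHHC-1:00:...' > "$host/dhchap_key"       # key for the current keyid
  echo 'DHHC-1:02:...' > "$host/dhchap_ctrl_key"  # bidirectional controller key, if configured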
00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA2M2MwZTNhODU4MTA3Y2VhY2JhZDA1OGE5OTEwMzlhdiIR: 00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2JmYTQ3YmIwYmQ2Y2RmYTRhYWRmMDVkZTQwNzE0NWZwM8hh: 00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA2M2MwZTNhODU4MTA3Y2VhY2JhZDA1OGE5OTEwMzlhdiIR: 00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2JmYTQ3YmIwYmQ2Y2RmYTRhYWRmMDVkZTQwNzE0NWZwM8hh: ]] 00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2JmYTQ3YmIwYmQ2Y2RmYTRhYWRmMDVkZTQwNzE0NWZwM8hh: 00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.633 nvme0n1 00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:15.633 17:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.891 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.891 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.891 17:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.891 17:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.891 17:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.891 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:15.891 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:23:15.891 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:15.891 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:15.891 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:15.891 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:15.891 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTM5Yzk5ZTQ5OGIxZmZlZmQ4OGI2ZWUwYTM3NmZhYzRmNTAxOGFkMzFmZmZjZGJhpCNdxg==: 00:23:15.891 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODA4YTY4YWExYWYyMzNiZmZhNjMxZmMwY2RkMTE1YzaS0lGr: 00:23:15.891 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:15.891 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:15.891 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTM5Yzk5ZTQ5OGIxZmZlZmQ4OGI2ZWUwYTM3NmZhYzRmNTAxOGFkMzFmZmZjZGJhpCNdxg==: 00:23:15.891 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODA4YTY4YWExYWYyMzNiZmZhNjMxZmMwY2RkMTE1YzaS0lGr: ]] 00:23:15.891 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODA4YTY4YWExYWYyMzNiZmZhNjMxZmMwY2RkMTE1YzaS0lGr: 00:23:15.891 17:12:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:23:15.891 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:15.891 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:15.891 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:15.891 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:15.891 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:15.891 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:15.891 17:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.891 17:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.891 17:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.891 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:15.891 17:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:15.891 17:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:15.891 17:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:15.891 17:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.891 17:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.891 17:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:15.891 17:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:15.891 17:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:15.891 17:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:15.891 17:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:15.891 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:15.891 17:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.891 17:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.891 nvme0n1 00:23:15.891 17:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.891 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.891 17:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.891 17:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.891 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:15.891 17:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.891 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.891 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.891 17:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.891 17:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmVmNWM3NjUxODQzOTRmYjc3MDAzMDI2YTJmMGY3NzE3MzVjYjlmMDliODdiMTlkNTg5ZTVjMDJjYTAwMmRkZTW1rAU=: 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmVmNWM3NjUxODQzOTRmYjc3MDAzMDI2YTJmMGY3NzE3MzVjYjlmMDliODdiMTlkNTg5ZTVjMDJjYTAwMmRkZTW1rAU=: 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.148 nvme0n1 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODRlMDUwM2MyZTdjOWRjOTUxZTM1MzE2YjFjYjZhMzEZWmBk: 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGMxOWRmNWFlOGRkMGU5ODU4ZGJjNjZjNmQwYTUzMjkyNTI1YmYzYmU5NTk0MTc1NGYwODQwNDAwMTQ5NGEyYSPY7SA=: 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODRlMDUwM2MyZTdjOWRjOTUxZTM1MzE2YjFjYjZhMzEZWmBk: 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGMxOWRmNWFlOGRkMGU5ODU4ZGJjNjZjNmQwYTUzMjkyNTI1YmYzYmU5NTk0MTc1NGYwODQwNDAwMTQ5NGEyYSPY7SA=: ]] 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGMxOWRmNWFlOGRkMGU5ODU4ZGJjNjZjNmQwYTUzMjkyNTI1YmYzYmU5NTk0MTc1NGYwODQwNDAwMTQ5NGEyYSPY7SA=: 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:16.148 17:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:16.149 17:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.149 17:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.406 nvme0n1 00:23:16.406 17:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.406 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.407 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.407 17:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.407 17:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.407 17:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.407 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.407 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.407 17:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.407 17:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.407 17:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.665 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.665 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:23:16.665 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.665 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:16.665 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:16.665 17:12:16 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:23:16.665 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGMzYTRiOGI4N2U0NGExNGQ5NjQxMjIxMzU0M2FlOGRiNjE4Mzk3NWZmMWRhMmJlEpv5Fw==: 00:23:16.665 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2NhY2Y5MmZjZWUwNmEyODhhMWUzOTBjOTIyMDMzYTRiY2YwM2FiYTcxOWJkN2M46BUmGw==: 00:23:16.665 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:16.665 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:16.665 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGMzYTRiOGI4N2U0NGExNGQ5NjQxMjIxMzU0M2FlOGRiNjE4Mzk3NWZmMWRhMmJlEpv5Fw==: 00:23:16.665 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2NhY2Y5MmZjZWUwNmEyODhhMWUzOTBjOTIyMDMzYTRiY2YwM2FiYTcxOWJkN2M46BUmGw==: ]] 00:23:16.665 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2NhY2Y5MmZjZWUwNmEyODhhMWUzOTBjOTIyMDMzYTRiY2YwM2FiYTcxOWJkN2M46BUmGw==: 00:23:16.665 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:23:16.665 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.665 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:16.665 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:16.665 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:16.665 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.665 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:16.665 17:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.665 17:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.665 17:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.665 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:16.665 17:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:16.665 17:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:16.665 17:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:16.665 17:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.665 17:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.665 17:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:16.665 17:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:16.665 17:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:16.665 17:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:16.665 17:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:16.665 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:16.665 17:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.665 17:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.666 nvme0n1 00:23:16.666 
17:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.666 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.666 17:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.666 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.666 17:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.666 17:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.666 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.666 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.666 17:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.666 17:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.923 17:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.923 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.923 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:23:16.923 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.923 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:16.923 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:16.923 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:16.923 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA2M2MwZTNhODU4MTA3Y2VhY2JhZDA1OGE5OTEwMzlhdiIR: 00:23:16.923 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2JmYTQ3YmIwYmQ2Y2RmYTRhYWRmMDVkZTQwNzE0NWZwM8hh: 00:23:16.923 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:16.923 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:16.923 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA2M2MwZTNhODU4MTA3Y2VhY2JhZDA1OGE5OTEwMzlhdiIR: 00:23:16.923 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2JmYTQ3YmIwYmQ2Y2RmYTRhYWRmMDVkZTQwNzE0NWZwM8hh: ]] 00:23:16.923 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2JmYTQ3YmIwYmQ2Y2RmYTRhYWRmMDVkZTQwNzE0NWZwM8hh: 00:23:16.923 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:23:16.923 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.923 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:16.923 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:16.923 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:16.923 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.923 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:16.923 17:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.923 17:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.923 17:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.923 17:12:16 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:23:16.923 17:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:16.923 17:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:16.923 17:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:16.923 17:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.924 17:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.924 17:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:16.924 17:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:16.924 17:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:16.924 17:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:16.924 17:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:16.924 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:16.924 17:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.924 17:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.924 nvme0n1 00:23:16.924 17:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.924 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.924 17:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.924 17:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.924 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.924 17:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.182 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.182 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.182 17:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.182 17:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.182 17:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.182 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:17.182 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:23:17.182 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.182 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:17.182 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:17.182 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:17.182 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTM5Yzk5ZTQ5OGIxZmZlZmQ4OGI2ZWUwYTM3NmZhYzRmNTAxOGFkMzFmZmZjZGJhpCNdxg==: 00:23:17.182 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODA4YTY4YWExYWYyMzNiZmZhNjMxZmMwY2RkMTE1YzaS0lGr: 00:23:17.182 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:17.182 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
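Each iteration then exercises the same two host-side RPCs that repeat through the rest of this trace: bdev_nvme_set_options restricts the negotiable digests and DH groups, and bdev_nvme_attach_controller connects with the matching key (plus the controller key when one is configured); the controller name is checked and the controller detached before the next pass. Condensed here for reference, with scripts/rpc.py assumed as a stand-in for the test's rpc_cmd wrapper; key3/ckey3 are the key names used in the surrounding ffdhe3072 pass.

  # Host-side pattern per iteration (condensed from the rpc_cmd calls in this trace)
  rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
         -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
         --dhchap-key key3 --dhchap-ctrlr-key ckey3
  rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
  rpc.py bdev_nvme_detach_controller nvme0              # tear down before the next pass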
00:23:17.182 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTM5Yzk5ZTQ5OGIxZmZlZmQ4OGI2ZWUwYTM3NmZhYzRmNTAxOGFkMzFmZmZjZGJhpCNdxg==: 00:23:17.182 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODA4YTY4YWExYWYyMzNiZmZhNjMxZmMwY2RkMTE1YzaS0lGr: ]] 00:23:17.182 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODA4YTY4YWExYWYyMzNiZmZhNjMxZmMwY2RkMTE1YzaS0lGr: 00:23:17.182 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:23:17.182 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.182 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:17.182 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:17.182 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:17.182 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.182 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:17.182 17:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.182 17:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.182 17:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.182 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:17.182 17:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:17.182 17:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:17.182 17:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:17.182 17:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.182 17:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.182 17:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:17.182 17:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:17.182 17:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:17.182 17:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:17.182 17:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:17.182 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:17.182 17:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.182 17:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.182 nvme0n1 00:23:17.182 17:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.182 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.182 17:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.182 17:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.182 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.183 17:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.440 
17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.440 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.440 17:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.440 17:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.440 17:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.440 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:17.440 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:23:17.440 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.440 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:17.440 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:17.440 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:17.440 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmVmNWM3NjUxODQzOTRmYjc3MDAzMDI2YTJmMGY3NzE3MzVjYjlmMDliODdiMTlkNTg5ZTVjMDJjYTAwMmRkZTW1rAU=: 00:23:17.440 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:17.440 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:17.440 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:17.440 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmVmNWM3NjUxODQzOTRmYjc3MDAzMDI2YTJmMGY3NzE3MzVjYjlmMDliODdiMTlkNTg5ZTVjMDJjYTAwMmRkZTW1rAU=: 00:23:17.440 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:17.440 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:23:17.440 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.440 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:17.440 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:17.440 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:17.440 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.440 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:17.440 17:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.440 17:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.440 17:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.440 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:17.440 17:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:17.440 17:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:17.440 17:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:17.440 17:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.440 17:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.440 17:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:17.440 17:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:17.440 17:12:16 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:17.440 17:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:17.440 17:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:17.440 17:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:17.440 17:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.441 17:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.698 nvme0n1 00:23:17.698 17:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.698 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.698 17:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.698 17:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.698 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.698 17:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.698 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.698 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.698 17:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.698 17:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.698 17:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.698 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:17.698 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:17.698 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:23:17.698 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.698 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:17.698 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:17.698 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:17.698 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODRlMDUwM2MyZTdjOWRjOTUxZTM1MzE2YjFjYjZhMzEZWmBk: 00:23:17.698 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGMxOWRmNWFlOGRkMGU5ODU4ZGJjNjZjNmQwYTUzMjkyNTI1YmYzYmU5NTk0MTc1NGYwODQwNDAwMTQ5NGEyYSPY7SA=: 00:23:17.698 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:17.698 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:17.698 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODRlMDUwM2MyZTdjOWRjOTUxZTM1MzE2YjFjYjZhMzEZWmBk: 00:23:17.698 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGMxOWRmNWFlOGRkMGU5ODU4ZGJjNjZjNmQwYTUzMjkyNTI1YmYzYmU5NTk0MTc1NGYwODQwNDAwMTQ5NGEyYSPY7SA=: ]] 00:23:17.698 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGMxOWRmNWFlOGRkMGU5ODU4ZGJjNjZjNmQwYTUzMjkyNTI1YmYzYmU5NTk0MTc1NGYwODQwNDAwMTQ5NGEyYSPY7SA=: 00:23:17.698 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:23:17.698 17:12:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.698 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:17.698 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:17.698 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:17.698 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.698 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:17.698 17:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.698 17:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.698 17:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.698 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:17.698 17:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:17.698 17:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:17.698 17:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:17.698 17:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.698 17:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.698 17:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:17.698 17:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:17.698 17:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:17.698 17:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:17.698 17:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:17.698 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:17.698 17:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.698 17:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.957 nvme0n1 00:23:17.957 17:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.957 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.957 17:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.957 17:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.957 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.957 17:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.957 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.957 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.957 17:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.957 17:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.957 17:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.957 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:23:17.957 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:23:17.957 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.957 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:17.957 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:17.957 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:17.957 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGMzYTRiOGI4N2U0NGExNGQ5NjQxMjIxMzU0M2FlOGRiNjE4Mzk3NWZmMWRhMmJlEpv5Fw==: 00:23:17.957 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2NhY2Y5MmZjZWUwNmEyODhhMWUzOTBjOTIyMDMzYTRiY2YwM2FiYTcxOWJkN2M46BUmGw==: 00:23:17.957 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:17.957 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:17.957 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGMzYTRiOGI4N2U0NGExNGQ5NjQxMjIxMzU0M2FlOGRiNjE4Mzk3NWZmMWRhMmJlEpv5Fw==: 00:23:17.957 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2NhY2Y5MmZjZWUwNmEyODhhMWUzOTBjOTIyMDMzYTRiY2YwM2FiYTcxOWJkN2M46BUmGw==: ]] 00:23:17.957 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2NhY2Y5MmZjZWUwNmEyODhhMWUzOTBjOTIyMDMzYTRiY2YwM2FiYTcxOWJkN2M46BUmGw==: 00:23:17.957 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:23:17.957 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.957 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:17.957 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:17.957 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:17.957 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.957 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:17.957 17:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.957 17:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.957 17:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.957 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:17.957 17:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:17.957 17:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:17.957 17:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:17.957 17:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.957 17:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.957 17:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:17.957 17:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:17.957 17:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:17.957 17:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:17.957 17:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:17.957 17:12:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:17.957 17:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.957 17:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.214 nvme0n1 00:23:18.214 17:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.214 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.214 17:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.214 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.214 17:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.214 17:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.472 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.472 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.472 17:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.472 17:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.472 17:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.472 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.472 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:23:18.472 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.472 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:18.472 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:18.472 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:18.472 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA2M2MwZTNhODU4MTA3Y2VhY2JhZDA1OGE5OTEwMzlhdiIR: 00:23:18.472 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2JmYTQ3YmIwYmQ2Y2RmYTRhYWRmMDVkZTQwNzE0NWZwM8hh: 00:23:18.472 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:18.472 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:18.472 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA2M2MwZTNhODU4MTA3Y2VhY2JhZDA1OGE5OTEwMzlhdiIR: 00:23:18.472 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2JmYTQ3YmIwYmQ2Y2RmYTRhYWRmMDVkZTQwNzE0NWZwM8hh: ]] 00:23:18.472 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2JmYTQ3YmIwYmQ2Y2RmYTRhYWRmMDVkZTQwNzE0NWZwM8hh: 00:23:18.472 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:23:18.472 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.472 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:18.472 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:18.472 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:18.472 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.472 17:12:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:18.472 17:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.472 17:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.472 17:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.472 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.472 17:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:18.472 17:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:18.472 17:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:18.472 17:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.472 17:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.472 17:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:18.472 17:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:18.472 17:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:18.472 17:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:18.472 17:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:18.472 17:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:18.472 17:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.472 17:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.730 nvme0n1 00:23:18.730 17:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.730 17:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.730 17:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.730 17:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.730 17:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.730 17:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.730 17:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.730 17:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.730 17:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.730 17:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.730 17:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.730 17:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.730 17:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:23:18.730 17:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.730 17:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:18.730 17:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:18.730 17:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
00:23:18.730 17:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTM5Yzk5ZTQ5OGIxZmZlZmQ4OGI2ZWUwYTM3NmZhYzRmNTAxOGFkMzFmZmZjZGJhpCNdxg==: 00:23:18.730 17:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODA4YTY4YWExYWYyMzNiZmZhNjMxZmMwY2RkMTE1YzaS0lGr: 00:23:18.730 17:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:18.730 17:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:18.730 17:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTM5Yzk5ZTQ5OGIxZmZlZmQ4OGI2ZWUwYTM3NmZhYzRmNTAxOGFkMzFmZmZjZGJhpCNdxg==: 00:23:18.730 17:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODA4YTY4YWExYWYyMzNiZmZhNjMxZmMwY2RkMTE1YzaS0lGr: ]] 00:23:18.730 17:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODA4YTY4YWExYWYyMzNiZmZhNjMxZmMwY2RkMTE1YzaS0lGr: 00:23:18.730 17:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:23:18.730 17:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.730 17:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:18.730 17:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:18.731 17:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:18.731 17:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.731 17:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:18.731 17:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.731 17:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.731 17:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.731 17:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.731 17:12:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:18.731 17:12:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:18.731 17:12:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:18.731 17:12:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.731 17:12:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.731 17:12:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:18.731 17:12:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:18.731 17:12:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:18.731 17:12:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:18.731 17:12:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:18.731 17:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:18.731 17:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.731 17:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.989 nvme0n1 00:23:18.989 17:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.989 17:12:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.989 17:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.989 17:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.989 17:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.989 17:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.246 17:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.246 17:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.246 17:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.246 17:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.246 17:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.246 17:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:19.246 17:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:23:19.246 17:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.246 17:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:19.246 17:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:19.246 17:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:19.246 17:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmVmNWM3NjUxODQzOTRmYjc3MDAzMDI2YTJmMGY3NzE3MzVjYjlmMDliODdiMTlkNTg5ZTVjMDJjYTAwMmRkZTW1rAU=: 00:23:19.246 17:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:19.246 17:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:19.246 17:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:19.246 17:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmVmNWM3NjUxODQzOTRmYjc3MDAzMDI2YTJmMGY3NzE3MzVjYjlmMDliODdiMTlkNTg5ZTVjMDJjYTAwMmRkZTW1rAU=: 00:23:19.246 17:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:19.246 17:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:23:19.246 17:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:19.246 17:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:19.246 17:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:19.246 17:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:19.246 17:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.246 17:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:19.246 17:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.246 17:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.246 17:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.246 17:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:19.246 17:12:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:19.246 17:12:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:19.246 17:12:18 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:23:19.246 17:12:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.246 17:12:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.246 17:12:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:19.246 17:12:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:19.246 17:12:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:19.246 17:12:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:19.246 17:12:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:19.246 17:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:19.246 17:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.246 17:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.503 nvme0n1 00:23:19.503 17:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.503 17:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.503 17:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.503 17:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.503 17:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:19.503 17:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.503 17:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.503 17:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.503 17:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.503 17:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.503 17:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.503 17:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:19.503 17:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:19.503 17:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:23:19.503 17:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.503 17:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:19.503 17:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:19.503 17:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:19.503 17:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODRlMDUwM2MyZTdjOWRjOTUxZTM1MzE2YjFjYjZhMzEZWmBk: 00:23:19.503 17:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGMxOWRmNWFlOGRkMGU5ODU4ZGJjNjZjNmQwYTUzMjkyNTI1YmYzYmU5NTk0MTc1NGYwODQwNDAwMTQ5NGEyYSPY7SA=: 00:23:19.503 17:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:19.503 17:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:19.503 17:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODRlMDUwM2MyZTdjOWRjOTUxZTM1MzE2YjFjYjZhMzEZWmBk: 00:23:19.503 17:12:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGMxOWRmNWFlOGRkMGU5ODU4ZGJjNjZjNmQwYTUzMjkyNTI1YmYzYmU5NTk0MTc1NGYwODQwNDAwMTQ5NGEyYSPY7SA=: ]] 00:23:19.503 17:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGMxOWRmNWFlOGRkMGU5ODU4ZGJjNjZjNmQwYTUzMjkyNTI1YmYzYmU5NTk0MTc1NGYwODQwNDAwMTQ5NGEyYSPY7SA=: 00:23:19.503 17:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:23:19.503 17:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:19.503 17:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:19.503 17:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:19.503 17:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:19.503 17:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.503 17:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:19.503 17:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.503 17:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.503 17:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.504 17:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:19.504 17:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:19.504 17:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:19.504 17:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:19.504 17:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.504 17:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.504 17:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:19.504 17:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:19.504 17:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:19.504 17:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:19.504 17:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:19.504 17:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:19.504 17:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.504 17:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.068 nvme0n1 00:23:20.068 17:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.068 17:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.068 17:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.068 17:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.068 17:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.068 17:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.068 17:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.068 
17:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.068 17:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.068 17:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.068 17:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.068 17:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:20.068 17:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:23:20.068 17:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:20.068 17:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:20.068 17:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:20.068 17:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:20.068 17:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGMzYTRiOGI4N2U0NGExNGQ5NjQxMjIxMzU0M2FlOGRiNjE4Mzk3NWZmMWRhMmJlEpv5Fw==: 00:23:20.068 17:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2NhY2Y5MmZjZWUwNmEyODhhMWUzOTBjOTIyMDMzYTRiY2YwM2FiYTcxOWJkN2M46BUmGw==: 00:23:20.068 17:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:20.068 17:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:20.068 17:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGMzYTRiOGI4N2U0NGExNGQ5NjQxMjIxMzU0M2FlOGRiNjE4Mzk3NWZmMWRhMmJlEpv5Fw==: 00:23:20.069 17:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2NhY2Y5MmZjZWUwNmEyODhhMWUzOTBjOTIyMDMzYTRiY2YwM2FiYTcxOWJkN2M46BUmGw==: ]] 00:23:20.069 17:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2NhY2Y5MmZjZWUwNmEyODhhMWUzOTBjOTIyMDMzYTRiY2YwM2FiYTcxOWJkN2M46BUmGw==: 00:23:20.069 17:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:23:20.069 17:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:20.069 17:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:20.069 17:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:20.069 17:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:20.069 17:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:20.069 17:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:20.069 17:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.069 17:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.069 17:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.069 17:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.069 17:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:20.069 17:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:20.069 17:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:20.069 17:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.069 17:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.069 17:12:19 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:20.069 17:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:20.069 17:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:20.069 17:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:20.069 17:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:20.069 17:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:20.069 17:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.069 17:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.634 nvme0n1 00:23:20.634 17:12:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.634 17:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.634 17:12:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.634 17:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.634 17:12:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.634 17:12:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.634 17:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.634 17:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.634 17:12:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.634 17:12:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.634 17:12:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.634 17:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:20.634 17:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:23:20.634 17:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:20.634 17:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:20.634 17:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:20.634 17:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:20.634 17:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA2M2MwZTNhODU4MTA3Y2VhY2JhZDA1OGE5OTEwMzlhdiIR: 00:23:20.634 17:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2JmYTQ3YmIwYmQ2Y2RmYTRhYWRmMDVkZTQwNzE0NWZwM8hh: 00:23:20.634 17:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:20.634 17:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:20.634 17:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA2M2MwZTNhODU4MTA3Y2VhY2JhZDA1OGE5OTEwMzlhdiIR: 00:23:20.634 17:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2JmYTQ3YmIwYmQ2Y2RmYTRhYWRmMDVkZTQwNzE0NWZwM8hh: ]] 00:23:20.634 17:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2JmYTQ3YmIwYmQ2Y2RmYTRhYWRmMDVkZTQwNzE0NWZwM8hh: 00:23:20.634 17:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:23:20.634 17:12:20 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:20.634 17:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:20.634 17:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:20.634 17:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:20.634 17:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:20.634 17:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:20.634 17:12:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.634 17:12:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.634 17:12:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.634 17:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.634 17:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:20.634 17:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:20.634 17:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:20.634 17:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.635 17:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.635 17:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:20.635 17:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:20.635 17:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:20.635 17:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:20.635 17:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:20.635 17:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:20.635 17:12:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.635 17:12:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.198 nvme0n1 00:23:21.198 17:12:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.198 17:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.198 17:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:21.198 17:12:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.198 17:12:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.198 17:12:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.198 17:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.198 17:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.198 17:12:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.198 17:12:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.198 17:12:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.198 17:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:21.198 
17:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:23:21.198 17:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.198 17:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:21.198 17:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:21.198 17:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:21.198 17:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTM5Yzk5ZTQ5OGIxZmZlZmQ4OGI2ZWUwYTM3NmZhYzRmNTAxOGFkMzFmZmZjZGJhpCNdxg==: 00:23:21.198 17:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODA4YTY4YWExYWYyMzNiZmZhNjMxZmMwY2RkMTE1YzaS0lGr: 00:23:21.198 17:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:21.198 17:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:21.198 17:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTM5Yzk5ZTQ5OGIxZmZlZmQ4OGI2ZWUwYTM3NmZhYzRmNTAxOGFkMzFmZmZjZGJhpCNdxg==: 00:23:21.198 17:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODA4YTY4YWExYWYyMzNiZmZhNjMxZmMwY2RkMTE1YzaS0lGr: ]] 00:23:21.198 17:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODA4YTY4YWExYWYyMzNiZmZhNjMxZmMwY2RkMTE1YzaS0lGr: 00:23:21.198 17:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:23:21.198 17:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.198 17:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:21.199 17:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:21.199 17:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:21.199 17:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.199 17:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:21.199 17:12:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.199 17:12:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.456 17:12:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.456 17:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.456 17:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:21.456 17:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:21.456 17:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:21.456 17:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.456 17:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.456 17:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:21.456 17:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:21.456 17:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:21.456 17:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:21.456 17:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:21.456 17:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:21.456 17:12:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.456 17:12:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.021 nvme0n1 00:23:22.021 17:12:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.021 17:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.021 17:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.021 17:12:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.021 17:12:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.021 17:12:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.021 17:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.021 17:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.021 17:12:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.021 17:12:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.021 17:12:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.021 17:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.021 17:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:23:22.021 17:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.021 17:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:22.021 17:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:22.021 17:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:22.021 17:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmVmNWM3NjUxODQzOTRmYjc3MDAzMDI2YTJmMGY3NzE3MzVjYjlmMDliODdiMTlkNTg5ZTVjMDJjYTAwMmRkZTW1rAU=: 00:23:22.021 17:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:22.021 17:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:22.021 17:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:22.021 17:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmVmNWM3NjUxODQzOTRmYjc3MDAzMDI2YTJmMGY3NzE3MzVjYjlmMDliODdiMTlkNTg5ZTVjMDJjYTAwMmRkZTW1rAU=: 00:23:22.021 17:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:22.021 17:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:23:22.021 17:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.021 17:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:22.021 17:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:22.021 17:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:22.021 17:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.021 17:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:22.021 17:12:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.021 17:12:21 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:22.021 17:12:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.021 17:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:22.021 17:12:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:22.021 17:12:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:22.021 17:12:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:22.021 17:12:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.021 17:12:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.021 17:12:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:22.021 17:12:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:22.021 17:12:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:22.021 17:12:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:22.021 17:12:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:22.021 17:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:22.021 17:12:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.021 17:12:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.586 nvme0n1 00:23:22.586 17:12:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.586 17:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.586 17:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.586 17:12:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.586 17:12:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.586 17:12:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.586 17:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.586 17:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.586 17:12:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.586 17:12:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.586 17:12:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.586 17:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:22.586 17:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.586 17:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:23:22.586 17:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.586 17:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:22.586 17:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:22.586 17:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:22.586 17:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODRlMDUwM2MyZTdjOWRjOTUxZTM1MzE2YjFjYjZhMzEZWmBk: 00:23:22.586 17:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NGMxOWRmNWFlOGRkMGU5ODU4ZGJjNjZjNmQwYTUzMjkyNTI1YmYzYmU5NTk0MTc1NGYwODQwNDAwMTQ5NGEyYSPY7SA=: 00:23:22.586 17:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:22.586 17:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:22.586 17:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODRlMDUwM2MyZTdjOWRjOTUxZTM1MzE2YjFjYjZhMzEZWmBk: 00:23:22.586 17:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGMxOWRmNWFlOGRkMGU5ODU4ZGJjNjZjNmQwYTUzMjkyNTI1YmYzYmU5NTk0MTc1NGYwODQwNDAwMTQ5NGEyYSPY7SA=: ]] 00:23:22.586 17:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGMxOWRmNWFlOGRkMGU5ODU4ZGJjNjZjNmQwYTUzMjkyNTI1YmYzYmU5NTk0MTc1NGYwODQwNDAwMTQ5NGEyYSPY7SA=: 00:23:22.586 17:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:23:22.586 17:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.586 17:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:22.586 17:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:22.586 17:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:22.586 17:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.586 17:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:22.586 17:12:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.586 17:12:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.586 17:12:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.586 17:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:22.586 17:12:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:22.586 17:12:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:22.586 17:12:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:22.586 17:12:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.586 17:12:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.586 17:12:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:22.586 17:12:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:22.586 17:12:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:22.586 17:12:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:22.586 17:12:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:22.586 17:12:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:22.586 17:12:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.586 17:12:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.515 nvme0n1 00:23:23.515 17:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.515 17:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:23.515 17:12:23 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.515 17:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.515 17:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:23.515 17:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.515 17:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.515 17:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:23.515 17:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.515 17:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.515 17:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.515 17:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:23.515 17:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:23:23.515 17:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:23.515 17:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:23.515 17:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:23.515 17:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:23.515 17:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGMzYTRiOGI4N2U0NGExNGQ5NjQxMjIxMzU0M2FlOGRiNjE4Mzk3NWZmMWRhMmJlEpv5Fw==: 00:23:23.515 17:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2NhY2Y5MmZjZWUwNmEyODhhMWUzOTBjOTIyMDMzYTRiY2YwM2FiYTcxOWJkN2M46BUmGw==: 00:23:23.515 17:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:23.515 17:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:23.515 17:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGMzYTRiOGI4N2U0NGExNGQ5NjQxMjIxMzU0M2FlOGRiNjE4Mzk3NWZmMWRhMmJlEpv5Fw==: 00:23:23.515 17:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2NhY2Y5MmZjZWUwNmEyODhhMWUzOTBjOTIyMDMzYTRiY2YwM2FiYTcxOWJkN2M46BUmGw==: ]] 00:23:23.515 17:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2NhY2Y5MmZjZWUwNmEyODhhMWUzOTBjOTIyMDMzYTRiY2YwM2FiYTcxOWJkN2M46BUmGw==: 00:23:23.515 17:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:23:23.515 17:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:23.515 17:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:23.515 17:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:23.515 17:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:23.515 17:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:23.515 17:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:23.515 17:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.515 17:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.515 17:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.515 17:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:23.515 17:12:23 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:23:23.515 17:12:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:23.515 17:12:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:23.515 17:12:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.515 17:12:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.515 17:12:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:23.515 17:12:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:23.515 17:12:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:23.515 17:12:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:23.515 17:12:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:23.515 17:12:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:23.515 17:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.515 17:12:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.884 nvme0n1 00:23:24.884 17:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.884 17:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:24.884 17:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.884 17:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:24.884 17:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.884 17:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.884 17:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.884 17:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:24.884 17:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.884 17:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.884 17:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.884 17:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:24.884 17:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:23:24.884 17:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:24.884 17:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:24.884 17:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:24.884 17:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:24.884 17:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA2M2MwZTNhODU4MTA3Y2VhY2JhZDA1OGE5OTEwMzlhdiIR: 00:23:24.884 17:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2JmYTQ3YmIwYmQ2Y2RmYTRhYWRmMDVkZTQwNzE0NWZwM8hh: 00:23:24.884 17:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:24.884 17:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:24.884 17:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YjA2M2MwZTNhODU4MTA3Y2VhY2JhZDA1OGE5OTEwMzlhdiIR: 00:23:24.884 17:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2JmYTQ3YmIwYmQ2Y2RmYTRhYWRmMDVkZTQwNzE0NWZwM8hh: ]] 00:23:24.884 17:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2JmYTQ3YmIwYmQ2Y2RmYTRhYWRmMDVkZTQwNzE0NWZwM8hh: 00:23:24.884 17:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:23:24.884 17:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:24.884 17:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:24.884 17:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:24.884 17:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:24.884 17:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:24.884 17:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:24.884 17:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.884 17:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.884 17:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.884 17:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:24.884 17:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:24.884 17:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:24.884 17:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:24.884 17:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:24.884 17:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:24.884 17:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:24.884 17:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:24.884 17:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:24.884 17:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:24.884 17:12:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:24.884 17:12:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:24.884 17:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.884 17:12:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.448 nvme0n1 00:23:25.448 17:12:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.448 17:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.448 17:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.448 17:12:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.448 17:12:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.448 17:12:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.705 17:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.705 
17:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.705 17:12:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.705 17:12:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.705 17:12:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.705 17:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.705 17:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:23:25.705 17:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.705 17:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:25.705 17:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:25.705 17:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:25.705 17:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTM5Yzk5ZTQ5OGIxZmZlZmQ4OGI2ZWUwYTM3NmZhYzRmNTAxOGFkMzFmZmZjZGJhpCNdxg==: 00:23:25.705 17:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODA4YTY4YWExYWYyMzNiZmZhNjMxZmMwY2RkMTE1YzaS0lGr: 00:23:25.705 17:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:25.705 17:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:25.705 17:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTM5Yzk5ZTQ5OGIxZmZlZmQ4OGI2ZWUwYTM3NmZhYzRmNTAxOGFkMzFmZmZjZGJhpCNdxg==: 00:23:25.705 17:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODA4YTY4YWExYWYyMzNiZmZhNjMxZmMwY2RkMTE1YzaS0lGr: ]] 00:23:25.705 17:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODA4YTY4YWExYWYyMzNiZmZhNjMxZmMwY2RkMTE1YzaS0lGr: 00:23:25.705 17:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:23:25.705 17:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.705 17:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:25.705 17:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:25.705 17:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:25.705 17:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.705 17:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:25.705 17:12:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.705 17:12:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.705 17:12:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.705 17:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.705 17:12:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:25.705 17:12:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:25.705 17:12:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:25.705 17:12:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.705 17:12:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.705 17:12:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
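The trace above keeps repeating one cycle per key slot (nvmet_auth_set_key at host/auth.sh@103, then connect_authenticate at @55-65). A minimal bash sketch of that cycle, reconstructed only from the xtrace: the helper names (nvmet_auth_set_key, rpc_cmd, get_main_ns_ip), the RPC names and every --dhchap-* flag are copied from the trace, while the placeholder key material and the exact wiring are assumptions rather than the literal test script.

# --- sketch (not part of the captured log) ---------------------------------
# One DH-HMAC-CHAP round, as the xtrace above shows it for sha256/ffdhe8192.
digest=sha256 dhgroup=ffdhe8192 keyid=2
# Placeholder secrets; the real DHHC-1 strings are visible in the trace itself.
keys[$keyid]='DHHC-1:01:<host key>' ckeys[$keyid]='DHHC-1:01:<controller key>'

# 1. Program the key pair on the target side (host/auth.sh@103).
nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

# 2. Restrict the SPDK initiator to the digest/dhgroup under test (host/auth.sh@60).
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# 3. Attach with the matching key names; 10.0.0.1 is what get_main_ns_ip resolves to here.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# 4. Success means the controller is listed; then detach before the next
#    combination (host/auth.sh@64-65).
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0
# ---------------------------------------------------------------------------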
00:23:25.705 17:12:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:25.705 17:12:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:25.705 17:12:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:25.705 17:12:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:25.705 17:12:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:25.705 17:12:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.705 17:12:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.659 nvme0n1 00:23:26.659 17:12:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.659 17:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.659 17:12:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.659 17:12:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.659 17:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.659 17:12:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.659 17:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.659 17:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.659 17:12:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.659 17:12:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.659 17:12:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.659 17:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.659 17:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:23:26.659 17:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.659 17:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:26.659 17:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:26.659 17:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:26.659 17:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmVmNWM3NjUxODQzOTRmYjc3MDAzMDI2YTJmMGY3NzE3MzVjYjlmMDliODdiMTlkNTg5ZTVjMDJjYTAwMmRkZTW1rAU=: 00:23:26.659 17:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:26.659 17:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:26.659 17:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:26.659 17:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmVmNWM3NjUxODQzOTRmYjc3MDAzMDI2YTJmMGY3NzE3MzVjYjlmMDliODdiMTlkNTg5ZTVjMDJjYTAwMmRkZTW1rAU=: 00:23:26.659 17:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:26.659 17:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:23:26.659 17:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.659 17:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:26.659 17:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:26.659 
17:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:26.659 17:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.659 17:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:26.659 17:12:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.659 17:12:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.659 17:12:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.659 17:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.659 17:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:26.659 17:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:26.659 17:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:26.659 17:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.659 17:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.659 17:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:26.659 17:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:26.659 17:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:26.659 17:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:26.659 17:12:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:26.659 17:12:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:26.659 17:12:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.659 17:12:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.591 nvme0n1 00:23:27.591 17:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.591 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.591 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.591 17:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.591 17:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.591 17:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.591 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.591 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.591 17:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.591 17:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.591 17:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.591 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:27.591 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:27.591 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.591 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:23:27.591 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.591 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:27.591 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:27.591 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:27.591 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODRlMDUwM2MyZTdjOWRjOTUxZTM1MzE2YjFjYjZhMzEZWmBk: 00:23:27.591 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGMxOWRmNWFlOGRkMGU5ODU4ZGJjNjZjNmQwYTUzMjkyNTI1YmYzYmU5NTk0MTc1NGYwODQwNDAwMTQ5NGEyYSPY7SA=: 00:23:27.591 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:27.591 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:27.591 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODRlMDUwM2MyZTdjOWRjOTUxZTM1MzE2YjFjYjZhMzEZWmBk: 00:23:27.591 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGMxOWRmNWFlOGRkMGU5ODU4ZGJjNjZjNmQwYTUzMjkyNTI1YmYzYmU5NTk0MTc1NGYwODQwNDAwMTQ5NGEyYSPY7SA=: ]] 00:23:27.591 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGMxOWRmNWFlOGRkMGU5ODU4ZGJjNjZjNmQwYTUzMjkyNTI1YmYzYmU5NTk0MTc1NGYwODQwNDAwMTQ5NGEyYSPY7SA=: 00:23:27.591 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:23:27.591 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.591 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:27.591 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:27.591 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:27.591 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.591 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:27.591 17:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.591 17:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.591 17:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.591 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.591 17:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:27.591 17:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:27.591 17:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:27.591 17:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.591 17:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.591 17:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:27.591 17:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.591 17:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:27.591 17:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:27.591 17:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:27.591 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:27.591 17:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.591 17:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.849 nvme0n1 00:23:27.849 17:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.849 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.849 17:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.849 17:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.849 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.849 17:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.849 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.849 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.849 17:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.849 17:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.849 17:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.849 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.849 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:23:27.849 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.849 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:27.849 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:27.849 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:27.849 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGMzYTRiOGI4N2U0NGExNGQ5NjQxMjIxMzU0M2FlOGRiNjE4Mzk3NWZmMWRhMmJlEpv5Fw==: 00:23:27.849 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2NhY2Y5MmZjZWUwNmEyODhhMWUzOTBjOTIyMDMzYTRiY2YwM2FiYTcxOWJkN2M46BUmGw==: 00:23:27.849 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:27.849 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:27.849 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGMzYTRiOGI4N2U0NGExNGQ5NjQxMjIxMzU0M2FlOGRiNjE4Mzk3NWZmMWRhMmJlEpv5Fw==: 00:23:27.849 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2NhY2Y5MmZjZWUwNmEyODhhMWUzOTBjOTIyMDMzYTRiY2YwM2FiYTcxOWJkN2M46BUmGw==: ]] 00:23:27.850 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2NhY2Y5MmZjZWUwNmEyODhhMWUzOTBjOTIyMDMzYTRiY2YwM2FiYTcxOWJkN2M46BUmGw==: 00:23:27.850 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:23:27.850 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.850 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:27.850 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:27.850 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:27.850 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
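The same get_main_ns_ip expansion (nvmf/common.sh@741-755) precedes every attach above. A sketch of what that helper appears to do, inferred from the trace alone; the $TEST_TRANSPORT variable name and the ${!ip} indirection are assumptions, since the xtrace only shows the chosen variable name and the resolved address 10.0.0.1.

# --- sketch (not part of the captured log) ---------------------------------
# Pick the address the initiator should dial, keyed by transport.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP    # RDMA runs use the first target IP
        [tcp]=NVMF_INITIATOR_IP        # TCP runs (this log) use the initiator-side IP
    )
    [[ -z $TEST_TRANSPORT ]] && return 1        # assumed name of the transport variable
    ip=${ip_candidates[$TEST_TRANSPORT]}        # -> NVMF_INITIATOR_IP in this run
    [[ -z ${!ip} ]] && return 1                 # indirect lookup of that variable
    echo "${!ip}"                               # -> 10.0.0.1 throughout this section
}
# ---------------------------------------------------------------------------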
00:23:27.850 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:27.850 17:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.850 17:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.850 17:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.850 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.850 17:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:27.850 17:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:27.850 17:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:27.850 17:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.850 17:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.850 17:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:27.850 17:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.850 17:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:27.850 17:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:27.850 17:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:27.850 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:27.850 17:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.850 17:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.109 nvme0n1 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA2M2MwZTNhODU4MTA3Y2VhY2JhZDA1OGE5OTEwMzlhdiIR: 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2JmYTQ3YmIwYmQ2Y2RmYTRhYWRmMDVkZTQwNzE0NWZwM8hh: 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA2M2MwZTNhODU4MTA3Y2VhY2JhZDA1OGE5OTEwMzlhdiIR: 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2JmYTQ3YmIwYmQ2Y2RmYTRhYWRmMDVkZTQwNzE0NWZwM8hh: ]] 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2JmYTQ3YmIwYmQ2Y2RmYTRhYWRmMDVkZTQwNzE0NWZwM8hh: 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.109 nvme0n1 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.109 17:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.367 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.367 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.367 17:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.367 17:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.367 17:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.367 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.367 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:23:28.367 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.367 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:28.367 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:28.367 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:28.367 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTM5Yzk5ZTQ5OGIxZmZlZmQ4OGI2ZWUwYTM3NmZhYzRmNTAxOGFkMzFmZmZjZGJhpCNdxg==: 00:23:28.367 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODA4YTY4YWExYWYyMzNiZmZhNjMxZmMwY2RkMTE1YzaS0lGr: 00:23:28.367 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:28.367 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:28.367 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTM5Yzk5ZTQ5OGIxZmZlZmQ4OGI2ZWUwYTM3NmZhYzRmNTAxOGFkMzFmZmZjZGJhpCNdxg==: 00:23:28.367 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODA4YTY4YWExYWYyMzNiZmZhNjMxZmMwY2RkMTE1YzaS0lGr: ]] 00:23:28.367 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODA4YTY4YWExYWYyMzNiZmZhNjMxZmMwY2RkMTE1YzaS0lGr: 00:23:28.367 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:23:28.367 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.367 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:28.367 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:28.367 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:28.367 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.367 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:28.367 17:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.367 17:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.367 17:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.367 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.367 17:12:27 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:23:28.367 17:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:28.367 17:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:28.367 17:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.367 17:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.367 17:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:28.367 17:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.367 17:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:28.367 17:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:28.367 17:12:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:28.367 17:12:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:28.367 17:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.367 17:12:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.367 nvme0n1 00:23:28.367 17:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.367 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.367 17:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.367 17:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.367 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.367 17:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.367 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.367 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.367 17:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.367 17:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.367 17:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.367 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.367 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:23:28.367 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.367 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:28.367 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:28.367 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:28.367 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmVmNWM3NjUxODQzOTRmYjc3MDAzMDI2YTJmMGY3NzE3MzVjYjlmMDliODdiMTlkNTg5ZTVjMDJjYTAwMmRkZTW1rAU=: 00:23:28.367 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:28.367 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:28.367 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:28.367 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZmVmNWM3NjUxODQzOTRmYjc3MDAzMDI2YTJmMGY3NzE3MzVjYjlmMDliODdiMTlkNTg5ZTVjMDJjYTAwMmRkZTW1rAU=: 00:23:28.367 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:28.367 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:23:28.367 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.367 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:28.367 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:28.367 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:28.367 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.367 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:28.367 17:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.367 17:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.625 nvme0n1 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODRlMDUwM2MyZTdjOWRjOTUxZTM1MzE2YjFjYjZhMzEZWmBk: 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGMxOWRmNWFlOGRkMGU5ODU4ZGJjNjZjNmQwYTUzMjkyNTI1YmYzYmU5NTk0MTc1NGYwODQwNDAwMTQ5NGEyYSPY7SA=: 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODRlMDUwM2MyZTdjOWRjOTUxZTM1MzE2YjFjYjZhMzEZWmBk: 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGMxOWRmNWFlOGRkMGU5ODU4ZGJjNjZjNmQwYTUzMjkyNTI1YmYzYmU5NTk0MTc1NGYwODQwNDAwMTQ5NGEyYSPY7SA=: ]] 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGMxOWRmNWFlOGRkMGU5ODU4ZGJjNjZjNmQwYTUzMjkyNTI1YmYzYmU5NTk0MTc1NGYwODQwNDAwMTQ5NGEyYSPY7SA=: 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
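At this point the log has moved from ffdhe2048 to ffdhe3072 while staying on sha384, which matches the outer structure visible at host/auth.sh@100-104: every digest is swept against every DH group and every key slot. The sketch below shows that shape; the array contents are illustrative (sha512 and ffdhe6144 are assumed from the usual DH-HMAC-CHAP set, and only the values actually visible in this section are certain).

# --- sketch (not part of the captured log) ---------------------------------
digests=(sha256 sha384 sha512)                                # sha512 assumed
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)  # ffdhe6144 assumed
for digest in "${digests[@]}"; do                             # host/auth.sh@100
    for dhgroup in "${dhgroups[@]}"; do                       # host/auth.sh@101
        for keyid in "${!keys[@]}"; do                        # keys[]/ckeys[] hold the DHHC-1 secrets
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"      # host/auth.sh@103
            connect_authenticate "$digest" "$dhgroup" "$keyid"    # host/auth.sh@104
        done
    done
done
# ---------------------------------------------------------------------------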
00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.625 17:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.882 nvme0n1 00:23:28.882 17:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.882 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.882 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.882 17:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.882 17:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.882 17:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.882 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.882 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.882 17:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.882 17:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.882 17:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.882 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.882 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:23:28.882 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.882 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:28.882 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:28.882 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:28.882 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGMzYTRiOGI4N2U0NGExNGQ5NjQxMjIxMzU0M2FlOGRiNjE4Mzk3NWZmMWRhMmJlEpv5Fw==: 00:23:28.882 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2NhY2Y5MmZjZWUwNmEyODhhMWUzOTBjOTIyMDMzYTRiY2YwM2FiYTcxOWJkN2M46BUmGw==: 00:23:28.882 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:28.882 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:28.882 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGMzYTRiOGI4N2U0NGExNGQ5NjQxMjIxMzU0M2FlOGRiNjE4Mzk3NWZmMWRhMmJlEpv5Fw==: 00:23:28.882 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2NhY2Y5MmZjZWUwNmEyODhhMWUzOTBjOTIyMDMzYTRiY2YwM2FiYTcxOWJkN2M46BUmGw==: ]] 00:23:28.882 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2NhY2Y5MmZjZWUwNmEyODhhMWUzOTBjOTIyMDMzYTRiY2YwM2FiYTcxOWJkN2M46BUmGw==: 00:23:28.882 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
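The target-side half of each round is the nvmet_auth_set_key expansion seen again just above (host/auth.sh@42-51): it stores the hash, DH group, key, and optional controller key for the host NQN. A sketch under explicit assumptions: the flattened xtrace hides the redirection targets, so the configfs paths below reflect how kernel nvmet authentication is normally configured, not something read from this log.

# --- sketch (not part of the captured log) ---------------------------------
# Assumed configfs layout for kernel nvmet DH-HMAC-CHAP; the real script's
# redirect targets are not visible in the trace above.
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo "hmac($digest)"       > "$host/dhchap_hash"       # host/auth.sh@48
    echo "$dhgroup"            > "$host/dhchap_dhgroup"    # host/auth.sh@49
    echo "${keys[$keyid]}"     > "$host/dhchap_key"        # host/auth.sh@50
    [[ -n ${ckeys[$keyid]} ]] &&
        echo "${ckeys[$keyid]}" > "$host/dhchap_ctrl_key"  # host/auth.sh@51
}
# ---------------------------------------------------------------------------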
00:23:28.882 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.882 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:28.882 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:28.882 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:28.882 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.882 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:29.140 17:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.140 17:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.140 17:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.140 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.140 17:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.140 17:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.140 17:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.140 17:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.140 17:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.140 17:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:29.140 17:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.140 17:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:29.140 17:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:29.140 17:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:29.140 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:29.140 17:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.140 17:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.140 nvme0n1 00:23:29.140 17:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.140 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.140 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.140 17:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.140 17:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.140 17:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.140 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.140 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.140 17:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.140 17:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.397 17:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.397 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:23:29.397 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:23:29.397 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.397 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:29.397 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:29.397 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:29.397 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA2M2MwZTNhODU4MTA3Y2VhY2JhZDA1OGE5OTEwMzlhdiIR: 00:23:29.397 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2JmYTQ3YmIwYmQ2Y2RmYTRhYWRmMDVkZTQwNzE0NWZwM8hh: 00:23:29.397 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:29.397 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:29.397 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA2M2MwZTNhODU4MTA3Y2VhY2JhZDA1OGE5OTEwMzlhdiIR: 00:23:29.397 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2JmYTQ3YmIwYmQ2Y2RmYTRhYWRmMDVkZTQwNzE0NWZwM8hh: ]] 00:23:29.397 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2JmYTQ3YmIwYmQ2Y2RmYTRhYWRmMDVkZTQwNzE0NWZwM8hh: 00:23:29.397 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:23:29.397 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.397 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:29.397 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:29.397 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:29.397 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.397 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:29.397 17:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.397 17:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.397 17:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.397 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.397 17:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.397 17:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.397 17:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.397 17:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.397 17:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.397 17:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:29.397 17:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.397 17:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:29.397 17:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:29.397 17:12:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:29.397 17:12:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:29.397 17:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.397 17:12:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.397 nvme0n1 00:23:29.397 17:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.397 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.397 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.397 17:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.397 17:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.397 17:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.655 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.655 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.655 17:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.655 17:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.655 17:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.655 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.655 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:23:29.655 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.655 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:29.655 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:29.655 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:29.655 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTM5Yzk5ZTQ5OGIxZmZlZmQ4OGI2ZWUwYTM3NmZhYzRmNTAxOGFkMzFmZmZjZGJhpCNdxg==: 00:23:29.655 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODA4YTY4YWExYWYyMzNiZmZhNjMxZmMwY2RkMTE1YzaS0lGr: 00:23:29.655 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:29.655 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:29.655 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTM5Yzk5ZTQ5OGIxZmZlZmQ4OGI2ZWUwYTM3NmZhYzRmNTAxOGFkMzFmZmZjZGJhpCNdxg==: 00:23:29.655 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODA4YTY4YWExYWYyMzNiZmZhNjMxZmMwY2RkMTE1YzaS0lGr: ]] 00:23:29.655 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODA4YTY4YWExYWYyMzNiZmZhNjMxZmMwY2RkMTE1YzaS0lGr: 00:23:29.655 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:23:29.655 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.655 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:29.655 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:29.655 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:29.655 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.655 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:29.655 17:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.655 17:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.655 17:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.655 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.655 17:12:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.655 17:12:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.655 17:12:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.655 17:12:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.655 17:12:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.655 17:12:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:29.655 17:12:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.655 17:12:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:29.655 17:12:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:29.655 17:12:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:29.655 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:29.655 17:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.655 17:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.655 nvme0n1 00:23:29.655 17:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.655 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.655 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.655 17:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.655 17:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.655 17:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.913 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.913 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.913 17:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.913 17:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.913 17:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.913 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.913 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:23:29.913 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.913 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:29.913 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:29.913 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:29.913 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZmVmNWM3NjUxODQzOTRmYjc3MDAzMDI2YTJmMGY3NzE3MzVjYjlmMDliODdiMTlkNTg5ZTVjMDJjYTAwMmRkZTW1rAU=: 00:23:29.913 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:29.913 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:29.913 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:29.913 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmVmNWM3NjUxODQzOTRmYjc3MDAzMDI2YTJmMGY3NzE3MzVjYjlmMDliODdiMTlkNTg5ZTVjMDJjYTAwMmRkZTW1rAU=: 00:23:29.913 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:29.913 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:23:29.913 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.913 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:29.913 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:29.913 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:29.913 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.913 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:29.913 17:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.913 17:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.913 17:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.913 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.913 17:12:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.913 17:12:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.913 17:12:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.913 17:12:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.913 17:12:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.913 17:12:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:29.913 17:12:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.913 17:12:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:29.913 17:12:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:29.913 17:12:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:29.913 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:29.913 17:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.913 17:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.182 nvme0n1 00:23:30.182 17:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.182 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.182 17:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.182 17:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.182 17:12:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.182 17:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.182 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.182 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.182 17:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.182 17:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.182 17:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.182 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:30.182 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.182 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:23:30.182 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.182 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:30.182 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:30.182 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:30.182 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODRlMDUwM2MyZTdjOWRjOTUxZTM1MzE2YjFjYjZhMzEZWmBk: 00:23:30.182 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGMxOWRmNWFlOGRkMGU5ODU4ZGJjNjZjNmQwYTUzMjkyNTI1YmYzYmU5NTk0MTc1NGYwODQwNDAwMTQ5NGEyYSPY7SA=: 00:23:30.182 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:30.182 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:30.182 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODRlMDUwM2MyZTdjOWRjOTUxZTM1MzE2YjFjYjZhMzEZWmBk: 00:23:30.182 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGMxOWRmNWFlOGRkMGU5ODU4ZGJjNjZjNmQwYTUzMjkyNTI1YmYzYmU5NTk0MTc1NGYwODQwNDAwMTQ5NGEyYSPY7SA=: ]] 00:23:30.182 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGMxOWRmNWFlOGRkMGU5ODU4ZGJjNjZjNmQwYTUzMjkyNTI1YmYzYmU5NTk0MTc1NGYwODQwNDAwMTQ5NGEyYSPY7SA=: 00:23:30.182 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:23:30.182 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.182 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:30.182 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:30.182 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:30.182 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.182 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:30.182 17:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.182 17:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.182 17:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.182 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.182 17:12:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:30.182 17:12:29 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.182 17:12:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.182 17:12:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.182 17:12:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.182 17:12:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:30.182 17:12:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.182 17:12:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:30.182 17:12:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:30.182 17:12:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:30.182 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:30.182 17:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.182 17:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.482 nvme0n1 00:23:30.482 17:12:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.482 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.482 17:12:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.482 17:12:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.482 17:12:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.482 17:12:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.482 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.482 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.482 17:12:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.482 17:12:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.482 17:12:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.482 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.482 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:23:30.482 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.482 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:30.482 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:30.482 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:30.482 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGMzYTRiOGI4N2U0NGExNGQ5NjQxMjIxMzU0M2FlOGRiNjE4Mzk3NWZmMWRhMmJlEpv5Fw==: 00:23:30.482 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2NhY2Y5MmZjZWUwNmEyODhhMWUzOTBjOTIyMDMzYTRiY2YwM2FiYTcxOWJkN2M46BUmGw==: 00:23:30.482 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:30.482 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:30.482 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZGMzYTRiOGI4N2U0NGExNGQ5NjQxMjIxMzU0M2FlOGRiNjE4Mzk3NWZmMWRhMmJlEpv5Fw==: 00:23:30.482 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2NhY2Y5MmZjZWUwNmEyODhhMWUzOTBjOTIyMDMzYTRiY2YwM2FiYTcxOWJkN2M46BUmGw==: ]] 00:23:30.482 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2NhY2Y5MmZjZWUwNmEyODhhMWUzOTBjOTIyMDMzYTRiY2YwM2FiYTcxOWJkN2M46BUmGw==: 00:23:30.482 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:23:30.482 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.482 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:30.482 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:30.482 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:30.482 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.482 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:30.482 17:12:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.482 17:12:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.482 17:12:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.482 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.482 17:12:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:30.482 17:12:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.482 17:12:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.482 17:12:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.482 17:12:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.482 17:12:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:30.482 17:12:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.482 17:12:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:30.482 17:12:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:30.482 17:12:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:30.482 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:30.482 17:12:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.482 17:12:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.741 nvme0n1 00:23:30.741 17:12:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.741 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.741 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.741 17:12:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.741 17:12:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.741 17:12:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.741 17:12:30 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.741 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.741 17:12:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.741 17:12:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.741 17:12:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.741 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.741 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:23:30.741 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.741 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:30.741 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:30.741 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:30.741 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA2M2MwZTNhODU4MTA3Y2VhY2JhZDA1OGE5OTEwMzlhdiIR: 00:23:30.741 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2JmYTQ3YmIwYmQ2Y2RmYTRhYWRmMDVkZTQwNzE0NWZwM8hh: 00:23:30.741 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:30.741 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:30.741 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA2M2MwZTNhODU4MTA3Y2VhY2JhZDA1OGE5OTEwMzlhdiIR: 00:23:30.741 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2JmYTQ3YmIwYmQ2Y2RmYTRhYWRmMDVkZTQwNzE0NWZwM8hh: ]] 00:23:30.741 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2JmYTQ3YmIwYmQ2Y2RmYTRhYWRmMDVkZTQwNzE0NWZwM8hh: 00:23:30.741 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:23:30.741 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.741 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:30.741 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:30.741 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:30.741 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.741 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:30.741 17:12:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.741 17:12:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.741 17:12:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.741 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.741 17:12:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:30.741 17:12:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.741 17:12:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.741 17:12:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.741 17:12:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.741 17:12:30 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:30.741 17:12:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.741 17:12:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:30.741 17:12:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:30.741 17:12:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:30.741 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:30.741 17:12:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.741 17:12:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.306 nvme0n1 00:23:31.306 17:12:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.306 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.306 17:12:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.306 17:12:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.306 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.306 17:12:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.306 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.306 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.306 17:12:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.306 17:12:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.306 17:12:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.306 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.306 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:23:31.306 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.306 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:31.306 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:31.306 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:31.306 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTM5Yzk5ZTQ5OGIxZmZlZmQ4OGI2ZWUwYTM3NmZhYzRmNTAxOGFkMzFmZmZjZGJhpCNdxg==: 00:23:31.306 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODA4YTY4YWExYWYyMzNiZmZhNjMxZmMwY2RkMTE1YzaS0lGr: 00:23:31.306 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:31.306 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:31.306 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTM5Yzk5ZTQ5OGIxZmZlZmQ4OGI2ZWUwYTM3NmZhYzRmNTAxOGFkMzFmZmZjZGJhpCNdxg==: 00:23:31.306 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODA4YTY4YWExYWYyMzNiZmZhNjMxZmMwY2RkMTE1YzaS0lGr: ]] 00:23:31.306 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODA4YTY4YWExYWYyMzNiZmZhNjMxZmMwY2RkMTE1YzaS0lGr: 00:23:31.306 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:23:31.306 17:12:30 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.306 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:31.306 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:31.306 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:31.306 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.306 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:31.306 17:12:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.306 17:12:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.306 17:12:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.306 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.306 17:12:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:31.306 17:12:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.306 17:12:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:31.306 17:12:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.306 17:12:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.306 17:12:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:31.306 17:12:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.306 17:12:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:31.306 17:12:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:31.306 17:12:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:31.306 17:12:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:31.306 17:12:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.306 17:12:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.564 nvme0n1 00:23:31.564 17:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.564 17:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.564 17:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.564 17:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.564 17:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.564 17:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.564 17:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.564 17:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.564 17:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.564 17:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.564 17:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.564 17:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:23:31.564 17:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:23:31.564 17:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.564 17:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:31.564 17:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:31.564 17:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:31.564 17:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmVmNWM3NjUxODQzOTRmYjc3MDAzMDI2YTJmMGY3NzE3MzVjYjlmMDliODdiMTlkNTg5ZTVjMDJjYTAwMmRkZTW1rAU=: 00:23:31.564 17:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:31.564 17:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:31.564 17:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:31.564 17:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmVmNWM3NjUxODQzOTRmYjc3MDAzMDI2YTJmMGY3NzE3MzVjYjlmMDliODdiMTlkNTg5ZTVjMDJjYTAwMmRkZTW1rAU=: 00:23:31.564 17:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:31.564 17:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:23:31.564 17:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.564 17:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:31.564 17:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:31.564 17:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:31.564 17:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.564 17:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:31.564 17:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.564 17:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.564 17:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.564 17:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.564 17:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:31.564 17:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.564 17:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:31.564 17:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.564 17:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.564 17:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:31.564 17:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.564 17:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:31.564 17:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:31.564 17:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:31.564 17:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:31.564 17:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:31.564 17:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.822 nvme0n1 00:23:31.822 17:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.822 17:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.822 17:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.822 17:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.822 17:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.822 17:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.822 17:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.822 17:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.822 17:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.822 17:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.822 17:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.822 17:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:31.822 17:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.822 17:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:23:31.822 17:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.822 17:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:31.822 17:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:31.822 17:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:31.822 17:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODRlMDUwM2MyZTdjOWRjOTUxZTM1MzE2YjFjYjZhMzEZWmBk: 00:23:31.822 17:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGMxOWRmNWFlOGRkMGU5ODU4ZGJjNjZjNmQwYTUzMjkyNTI1YmYzYmU5NTk0MTc1NGYwODQwNDAwMTQ5NGEyYSPY7SA=: 00:23:31.822 17:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:31.822 17:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:31.822 17:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODRlMDUwM2MyZTdjOWRjOTUxZTM1MzE2YjFjYjZhMzEZWmBk: 00:23:31.822 17:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGMxOWRmNWFlOGRkMGU5ODU4ZGJjNjZjNmQwYTUzMjkyNTI1YmYzYmU5NTk0MTc1NGYwODQwNDAwMTQ5NGEyYSPY7SA=: ]] 00:23:31.822 17:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGMxOWRmNWFlOGRkMGU5ODU4ZGJjNjZjNmQwYTUzMjkyNTI1YmYzYmU5NTk0MTc1NGYwODQwNDAwMTQ5NGEyYSPY7SA=: 00:23:31.822 17:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:23:31.822 17:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.822 17:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:31.822 17:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:31.822 17:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:31.822 17:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.822 17:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:23:31.822 17:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.822 17:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.822 17:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.822 17:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.822 17:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:31.822 17:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.822 17:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:31.822 17:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.822 17:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.822 17:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:31.822 17:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.822 17:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:31.822 17:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:31.822 17:12:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:31.822 17:12:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:31.822 17:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.822 17:12:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.387 nvme0n1 00:23:32.387 17:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.387 17:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.387 17:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.387 17:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.387 17:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.387 17:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.387 17:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.387 17:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.387 17:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.387 17:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.387 17:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.387 17:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.387 17:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:23:32.387 17:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.387 17:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:32.387 17:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:32.387 17:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:32.387 17:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZGMzYTRiOGI4N2U0NGExNGQ5NjQxMjIxMzU0M2FlOGRiNjE4Mzk3NWZmMWRhMmJlEpv5Fw==: 00:23:32.387 17:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2NhY2Y5MmZjZWUwNmEyODhhMWUzOTBjOTIyMDMzYTRiY2YwM2FiYTcxOWJkN2M46BUmGw==: 00:23:32.387 17:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:32.387 17:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:32.387 17:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGMzYTRiOGI4N2U0NGExNGQ5NjQxMjIxMzU0M2FlOGRiNjE4Mzk3NWZmMWRhMmJlEpv5Fw==: 00:23:32.387 17:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2NhY2Y5MmZjZWUwNmEyODhhMWUzOTBjOTIyMDMzYTRiY2YwM2FiYTcxOWJkN2M46BUmGw==: ]] 00:23:32.387 17:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2NhY2Y5MmZjZWUwNmEyODhhMWUzOTBjOTIyMDMzYTRiY2YwM2FiYTcxOWJkN2M46BUmGw==: 00:23:32.387 17:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:23:32.387 17:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.387 17:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:32.387 17:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:32.387 17:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:32.387 17:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.387 17:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:32.387 17:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.387 17:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.387 17:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.387 17:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.387 17:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:32.387 17:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:32.387 17:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:32.387 17:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.387 17:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.387 17:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:32.387 17:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.387 17:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:32.387 17:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:32.387 17:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:32.387 17:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:32.387 17:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.387 17:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.321 nvme0n1 00:23:33.321 17:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.321 17:12:32 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.321 17:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.321 17:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:33.321 17:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.321 17:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.321 17:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.321 17:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.321 17:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.321 17:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.321 17:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.321 17:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:33.321 17:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:23:33.321 17:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.321 17:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:33.321 17:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:33.321 17:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:33.321 17:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA2M2MwZTNhODU4MTA3Y2VhY2JhZDA1OGE5OTEwMzlhdiIR: 00:23:33.321 17:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2JmYTQ3YmIwYmQ2Y2RmYTRhYWRmMDVkZTQwNzE0NWZwM8hh: 00:23:33.321 17:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:33.321 17:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:33.321 17:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA2M2MwZTNhODU4MTA3Y2VhY2JhZDA1OGE5OTEwMzlhdiIR: 00:23:33.321 17:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2JmYTQ3YmIwYmQ2Y2RmYTRhYWRmMDVkZTQwNzE0NWZwM8hh: ]] 00:23:33.321 17:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2JmYTQ3YmIwYmQ2Y2RmYTRhYWRmMDVkZTQwNzE0NWZwM8hh: 00:23:33.321 17:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:23:33.321 17:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:33.321 17:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:33.321 17:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:33.321 17:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:33.321 17:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.321 17:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:33.321 17:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.321 17:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.321 17:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.321 17:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.321 17:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:23:33.321 17:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:33.321 17:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:33.321 17:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.321 17:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.321 17:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:33.321 17:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.321 17:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:33.321 17:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:33.321 17:12:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:33.321 17:12:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:33.321 17:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.321 17:12:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.885 nvme0n1 00:23:33.885 17:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.885 17:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.885 17:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.885 17:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.885 17:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:33.885 17:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.885 17:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.885 17:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.885 17:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.885 17:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.885 17:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.885 17:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:33.885 17:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:23:33.885 17:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.885 17:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:33.885 17:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:33.885 17:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:33.885 17:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTM5Yzk5ZTQ5OGIxZmZlZmQ4OGI2ZWUwYTM3NmZhYzRmNTAxOGFkMzFmZmZjZGJhpCNdxg==: 00:23:33.885 17:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODA4YTY4YWExYWYyMzNiZmZhNjMxZmMwY2RkMTE1YzaS0lGr: 00:23:33.885 17:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:33.885 17:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:33.885 17:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:OTM5Yzk5ZTQ5OGIxZmZlZmQ4OGI2ZWUwYTM3NmZhYzRmNTAxOGFkMzFmZmZjZGJhpCNdxg==: 00:23:33.885 17:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODA4YTY4YWExYWYyMzNiZmZhNjMxZmMwY2RkMTE1YzaS0lGr: ]] 00:23:33.885 17:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODA4YTY4YWExYWYyMzNiZmZhNjMxZmMwY2RkMTE1YzaS0lGr: 00:23:33.885 17:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:23:33.885 17:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:33.885 17:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:33.885 17:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:33.885 17:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:33.885 17:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.885 17:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:33.885 17:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.885 17:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.885 17:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.885 17:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.885 17:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:33.885 17:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:33.885 17:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:33.885 17:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.885 17:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.885 17:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:33.885 17:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.885 17:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:33.885 17:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:33.885 17:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:33.885 17:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:33.885 17:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.885 17:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.450 nvme0n1 00:23:34.450 17:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.450 17:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.450 17:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.450 17:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.450 17:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.450 17:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.450 17:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:23:34.450 17:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.450 17:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.450 17:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.450 17:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.450 17:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.450 17:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:23:34.450 17:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.450 17:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:34.450 17:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:34.450 17:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:34.450 17:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmVmNWM3NjUxODQzOTRmYjc3MDAzMDI2YTJmMGY3NzE3MzVjYjlmMDliODdiMTlkNTg5ZTVjMDJjYTAwMmRkZTW1rAU=: 00:23:34.450 17:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:34.450 17:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:34.450 17:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:34.450 17:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmVmNWM3NjUxODQzOTRmYjc3MDAzMDI2YTJmMGY3NzE3MzVjYjlmMDliODdiMTlkNTg5ZTVjMDJjYTAwMmRkZTW1rAU=: 00:23:34.450 17:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:34.450 17:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:23:34.450 17:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.450 17:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:34.450 17:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:34.450 17:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:34.450 17:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.450 17:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:34.450 17:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.450 17:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.450 17:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.450 17:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.450 17:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:34.450 17:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:34.450 17:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:34.450 17:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.450 17:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.450 17:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:34.450 17:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.450 17:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:23:34.450 17:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:34.450 17:12:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:34.450 17:12:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:34.450 17:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.450 17:12:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.015 nvme0n1 00:23:35.015 17:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.015 17:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.015 17:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.015 17:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.015 17:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.015 17:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.015 17:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.015 17:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.015 17:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.015 17:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.015 17:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.015 17:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:35.015 17:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:35.015 17:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:23:35.015 17:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.015 17:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:35.015 17:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:35.015 17:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:35.015 17:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODRlMDUwM2MyZTdjOWRjOTUxZTM1MzE2YjFjYjZhMzEZWmBk: 00:23:35.015 17:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGMxOWRmNWFlOGRkMGU5ODU4ZGJjNjZjNmQwYTUzMjkyNTI1YmYzYmU5NTk0MTc1NGYwODQwNDAwMTQ5NGEyYSPY7SA=: 00:23:35.015 17:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:35.015 17:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:35.015 17:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODRlMDUwM2MyZTdjOWRjOTUxZTM1MzE2YjFjYjZhMzEZWmBk: 00:23:35.015 17:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGMxOWRmNWFlOGRkMGU5ODU4ZGJjNjZjNmQwYTUzMjkyNTI1YmYzYmU5NTk0MTc1NGYwODQwNDAwMTQ5NGEyYSPY7SA=: ]] 00:23:35.016 17:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGMxOWRmNWFlOGRkMGU5ODU4ZGJjNjZjNmQwYTUzMjkyNTI1YmYzYmU5NTk0MTc1NGYwODQwNDAwMTQ5NGEyYSPY7SA=: 00:23:35.016 17:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:23:35.016 17:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
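A minimal sketch of the host-side RPC sequence this loop keeps repeating for each digest/dhgroup/keyid combination, reconstructed only from the commands visible in the trace (the rpc_cmd helper, the NQNs, the 10.0.0.1:4420 target and the keyN/ckeyN names are taken from this run; nothing else is assumed):

# target side: nvmet_auth_set_key sha384 ffdhe8192 0   (test helper from host/auth.sh; its body is not shown in this trace)

# host side: restrict the initiator to the digest/dhgroup under test
rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192

# attach with the matching DH-HMAC-CHAP key (and controller key, when one is defined for this keyid)
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# verify the authenticated controller came up, then tear it down before the next keyid
rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expected: nvme0
rpc_cmd bdev_nvme_detach_controller nvme0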
00:23:35.016 17:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:35.016 17:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:35.016 17:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:35.016 17:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.016 17:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:35.016 17:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.016 17:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.016 17:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.016 17:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.016 17:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:35.016 17:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:35.016 17:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:35.016 17:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.016 17:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.016 17:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:35.016 17:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.016 17:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:35.016 17:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:35.016 17:12:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:35.016 17:12:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:35.016 17:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.016 17:12:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.948 nvme0n1 00:23:35.948 17:12:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.948 17:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.948 17:12:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.948 17:12:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.948 17:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.948 17:12:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.948 17:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.948 17:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.948 17:12:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.948 17:12:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.948 17:12:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.948 17:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:35.948 17:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:23:35.948 17:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.948 17:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:35.948 17:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:35.948 17:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:35.948 17:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGMzYTRiOGI4N2U0NGExNGQ5NjQxMjIxMzU0M2FlOGRiNjE4Mzk3NWZmMWRhMmJlEpv5Fw==: 00:23:35.948 17:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2NhY2Y5MmZjZWUwNmEyODhhMWUzOTBjOTIyMDMzYTRiY2YwM2FiYTcxOWJkN2M46BUmGw==: 00:23:35.948 17:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:35.948 17:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:35.948 17:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGMzYTRiOGI4N2U0NGExNGQ5NjQxMjIxMzU0M2FlOGRiNjE4Mzk3NWZmMWRhMmJlEpv5Fw==: 00:23:35.948 17:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2NhY2Y5MmZjZWUwNmEyODhhMWUzOTBjOTIyMDMzYTRiY2YwM2FiYTcxOWJkN2M46BUmGw==: ]] 00:23:35.948 17:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2NhY2Y5MmZjZWUwNmEyODhhMWUzOTBjOTIyMDMzYTRiY2YwM2FiYTcxOWJkN2M46BUmGw==: 00:23:35.948 17:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:23:35.948 17:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:35.948 17:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:35.948 17:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:35.948 17:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:35.948 17:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.948 17:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:35.948 17:12:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.948 17:12:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.948 17:12:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.948 17:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.948 17:12:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:35.948 17:12:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:35.948 17:12:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:35.948 17:12:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.948 17:12:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.948 17:12:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:35.948 17:12:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.948 17:12:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:35.948 17:12:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:35.948 17:12:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:35.948 17:12:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:35.948 17:12:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.948 17:12:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.881 nvme0n1 00:23:36.881 17:12:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.881 17:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.881 17:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:36.881 17:12:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.881 17:12:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.881 17:12:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.881 17:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.881 17:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.881 17:12:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.881 17:12:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.881 17:12:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.881 17:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:36.881 17:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:23:36.881 17:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:36.881 17:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:36.881 17:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:36.881 17:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:36.881 17:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA2M2MwZTNhODU4MTA3Y2VhY2JhZDA1OGE5OTEwMzlhdiIR: 00:23:36.881 17:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2JmYTQ3YmIwYmQ2Y2RmYTRhYWRmMDVkZTQwNzE0NWZwM8hh: 00:23:36.881 17:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:36.881 17:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:36.881 17:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA2M2MwZTNhODU4MTA3Y2VhY2JhZDA1OGE5OTEwMzlhdiIR: 00:23:36.881 17:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2JmYTQ3YmIwYmQ2Y2RmYTRhYWRmMDVkZTQwNzE0NWZwM8hh: ]] 00:23:36.881 17:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2JmYTQ3YmIwYmQ2Y2RmYTRhYWRmMDVkZTQwNzE0NWZwM8hh: 00:23:36.881 17:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:23:36.881 17:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:36.881 17:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:36.881 17:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:36.881 17:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:36.881 17:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:36.881 17:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:23:36.881 17:12:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.881 17:12:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.881 17:12:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.881 17:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:36.881 17:12:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:36.881 17:12:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:36.881 17:12:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:36.881 17:12:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.881 17:12:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.881 17:12:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:36.881 17:12:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:36.881 17:12:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:36.881 17:12:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:36.881 17:12:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:36.881 17:12:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:36.881 17:12:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.881 17:12:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.814 nvme0n1 00:23:37.814 17:12:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.814 17:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.814 17:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.814 17:12:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.814 17:12:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.814 17:12:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.814 17:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.814 17:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.814 17:12:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.814 17:12:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.814 17:12:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.814 17:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:37.814 17:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:23:37.814 17:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.814 17:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:37.814 17:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:37.814 17:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:37.814 17:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OTM5Yzk5ZTQ5OGIxZmZlZmQ4OGI2ZWUwYTM3NmZhYzRmNTAxOGFkMzFmZmZjZGJhpCNdxg==: 00:23:37.814 17:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODA4YTY4YWExYWYyMzNiZmZhNjMxZmMwY2RkMTE1YzaS0lGr: 00:23:37.814 17:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:37.814 17:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:37.814 17:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTM5Yzk5ZTQ5OGIxZmZlZmQ4OGI2ZWUwYTM3NmZhYzRmNTAxOGFkMzFmZmZjZGJhpCNdxg==: 00:23:37.814 17:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODA4YTY4YWExYWYyMzNiZmZhNjMxZmMwY2RkMTE1YzaS0lGr: ]] 00:23:37.814 17:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODA4YTY4YWExYWYyMzNiZmZhNjMxZmMwY2RkMTE1YzaS0lGr: 00:23:37.814 17:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:23:37.814 17:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:37.814 17:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:37.814 17:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:37.814 17:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:37.814 17:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:37.814 17:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:37.814 17:12:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.814 17:12:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.814 17:12:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.814 17:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:37.814 17:12:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:37.814 17:12:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:37.814 17:12:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:37.814 17:12:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.814 17:12:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.814 17:12:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:37.814 17:12:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:37.814 17:12:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:37.814 17:12:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:37.814 17:12:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:37.814 17:12:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:37.814 17:12:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.814 17:12:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.747 nvme0n1 00:23:38.747 17:12:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.747 17:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:23:38.747 17:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:38.747 17:12:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.747 17:12:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.747 17:12:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.747 17:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.747 17:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.747 17:12:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.747 17:12:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.004 17:12:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.004 17:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:39.004 17:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:23:39.004 17:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.004 17:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:39.004 17:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:39.004 17:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:39.004 17:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmVmNWM3NjUxODQzOTRmYjc3MDAzMDI2YTJmMGY3NzE3MzVjYjlmMDliODdiMTlkNTg5ZTVjMDJjYTAwMmRkZTW1rAU=: 00:23:39.004 17:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:39.004 17:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:39.004 17:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:39.004 17:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmVmNWM3NjUxODQzOTRmYjc3MDAzMDI2YTJmMGY3NzE3MzVjYjlmMDliODdiMTlkNTg5ZTVjMDJjYTAwMmRkZTW1rAU=: 00:23:39.004 17:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:39.004 17:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:23:39.004 17:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:39.004 17:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:39.004 17:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:39.004 17:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:39.004 17:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.004 17:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:39.005 17:12:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.005 17:12:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.005 17:12:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.005 17:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:39.005 17:12:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:39.005 17:12:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:39.005 17:12:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:39.005 17:12:38 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.005 17:12:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.005 17:12:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:39.005 17:12:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.005 17:12:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:39.005 17:12:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:39.005 17:12:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:39.005 17:12:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:39.005 17:12:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.005 17:12:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.937 nvme0n1 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODRlMDUwM2MyZTdjOWRjOTUxZTM1MzE2YjFjYjZhMzEZWmBk: 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGMxOWRmNWFlOGRkMGU5ODU4ZGJjNjZjNmQwYTUzMjkyNTI1YmYzYmU5NTk0MTc1NGYwODQwNDAwMTQ5NGEyYSPY7SA=: 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ODRlMDUwM2MyZTdjOWRjOTUxZTM1MzE2YjFjYjZhMzEZWmBk: 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGMxOWRmNWFlOGRkMGU5ODU4ZGJjNjZjNmQwYTUzMjkyNTI1YmYzYmU5NTk0MTc1NGYwODQwNDAwMTQ5NGEyYSPY7SA=: ]] 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGMxOWRmNWFlOGRkMGU5ODU4ZGJjNjZjNmQwYTUzMjkyNTI1YmYzYmU5NTk0MTc1NGYwODQwNDAwMTQ5NGEyYSPY7SA=: 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.937 nvme0n1 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.937 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.195 17:12:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.195 17:12:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.195 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.195 17:12:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.195 17:12:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.195 17:12:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.195 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:40.195 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:23:40.195 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.195 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:40.195 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:40.195 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:40.195 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGMzYTRiOGI4N2U0NGExNGQ5NjQxMjIxMzU0M2FlOGRiNjE4Mzk3NWZmMWRhMmJlEpv5Fw==: 00:23:40.195 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2NhY2Y5MmZjZWUwNmEyODhhMWUzOTBjOTIyMDMzYTRiY2YwM2FiYTcxOWJkN2M46BUmGw==: 00:23:40.195 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:40.195 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:40.195 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGMzYTRiOGI4N2U0NGExNGQ5NjQxMjIxMzU0M2FlOGRiNjE4Mzk3NWZmMWRhMmJlEpv5Fw==: 00:23:40.195 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2NhY2Y5MmZjZWUwNmEyODhhMWUzOTBjOTIyMDMzYTRiY2YwM2FiYTcxOWJkN2M46BUmGw==: ]] 00:23:40.195 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2NhY2Y5MmZjZWUwNmEyODhhMWUzOTBjOTIyMDMzYTRiY2YwM2FiYTcxOWJkN2M46BUmGw==: 00:23:40.195 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:23:40.195 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:40.195 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:40.195 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:40.195 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:40.195 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.195 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:40.195 17:12:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.195 17:12:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.195 17:12:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.195 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:40.195 17:12:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:40.195 17:12:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:40.195 17:12:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:40.195 17:12:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.196 17:12:39 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.196 17:12:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:40.196 17:12:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.196 17:12:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:40.196 17:12:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:40.196 17:12:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:40.196 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:40.196 17:12:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.196 17:12:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.196 nvme0n1 00:23:40.196 17:12:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.196 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.196 17:12:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.196 17:12:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.196 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.196 17:12:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.454 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.454 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.454 17:12:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.454 17:12:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.454 17:12:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.454 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:40.454 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:23:40.454 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.454 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:40.454 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:40.454 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:40.454 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA2M2MwZTNhODU4MTA3Y2VhY2JhZDA1OGE5OTEwMzlhdiIR: 00:23:40.454 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2JmYTQ3YmIwYmQ2Y2RmYTRhYWRmMDVkZTQwNzE0NWZwM8hh: 00:23:40.454 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:40.454 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:40.454 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA2M2MwZTNhODU4MTA3Y2VhY2JhZDA1OGE5OTEwMzlhdiIR: 00:23:40.454 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2JmYTQ3YmIwYmQ2Y2RmYTRhYWRmMDVkZTQwNzE0NWZwM8hh: ]] 00:23:40.454 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2JmYTQ3YmIwYmQ2Y2RmYTRhYWRmMDVkZTQwNzE0NWZwM8hh: 00:23:40.454 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:23:40.454 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:40.454 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:40.454 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:40.454 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:40.454 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.454 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:40.454 17:12:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.454 17:12:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.454 17:12:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.454 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:40.454 17:12:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:40.454 17:12:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:40.454 17:12:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:40.454 17:12:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.454 17:12:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.454 17:12:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:40.454 17:12:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.454 17:12:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:40.454 17:12:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:40.454 17:12:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:40.454 17:12:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:40.454 17:12:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.454 17:12:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.454 nvme0n1 00:23:40.454 17:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.454 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.454 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.454 17:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.454 17:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.454 17:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.454 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.454 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.454 17:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.454 17:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.712 17:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.712 17:12:40 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:40.712 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:23:40.712 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.712 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:40.712 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:40.712 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:40.712 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTM5Yzk5ZTQ5OGIxZmZlZmQ4OGI2ZWUwYTM3NmZhYzRmNTAxOGFkMzFmZmZjZGJhpCNdxg==: 00:23:40.712 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODA4YTY4YWExYWYyMzNiZmZhNjMxZmMwY2RkMTE1YzaS0lGr: 00:23:40.712 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:40.712 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:40.712 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTM5Yzk5ZTQ5OGIxZmZlZmQ4OGI2ZWUwYTM3NmZhYzRmNTAxOGFkMzFmZmZjZGJhpCNdxg==: 00:23:40.712 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODA4YTY4YWExYWYyMzNiZmZhNjMxZmMwY2RkMTE1YzaS0lGr: ]] 00:23:40.712 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODA4YTY4YWExYWYyMzNiZmZhNjMxZmMwY2RkMTE1YzaS0lGr: 00:23:40.712 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:23:40.712 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:40.712 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:40.712 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:40.712 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:40.712 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.712 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:40.712 17:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.712 17:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.712 17:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.712 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:40.713 17:12:40 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.713 nvme0n1 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmVmNWM3NjUxODQzOTRmYjc3MDAzMDI2YTJmMGY3NzE3MzVjYjlmMDliODdiMTlkNTg5ZTVjMDJjYTAwMmRkZTW1rAU=: 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmVmNWM3NjUxODQzOTRmYjc3MDAzMDI2YTJmMGY3NzE3MzVjYjlmMDliODdiMTlkNTg5ZTVjMDJjYTAwMmRkZTW1rAU=: 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.713 17:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.971 nvme0n1 00:23:40.971 17:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.971 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.971 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.971 17:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.971 17:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.971 17:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.971 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.971 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.971 17:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.971 17:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.971 17:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.971 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:40.971 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:40.971 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:23:40.971 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.971 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:40.971 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:40.971 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:40.971 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ODRlMDUwM2MyZTdjOWRjOTUxZTM1MzE2YjFjYjZhMzEZWmBk: 00:23:40.971 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGMxOWRmNWFlOGRkMGU5ODU4ZGJjNjZjNmQwYTUzMjkyNTI1YmYzYmU5NTk0MTc1NGYwODQwNDAwMTQ5NGEyYSPY7SA=: 00:23:40.971 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:40.971 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:40.971 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODRlMDUwM2MyZTdjOWRjOTUxZTM1MzE2YjFjYjZhMzEZWmBk: 00:23:40.971 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGMxOWRmNWFlOGRkMGU5ODU4ZGJjNjZjNmQwYTUzMjkyNTI1YmYzYmU5NTk0MTc1NGYwODQwNDAwMTQ5NGEyYSPY7SA=: ]] 00:23:40.971 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGMxOWRmNWFlOGRkMGU5ODU4ZGJjNjZjNmQwYTUzMjkyNTI1YmYzYmU5NTk0MTc1NGYwODQwNDAwMTQ5NGEyYSPY7SA=: 00:23:40.971 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:23:40.971 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:40.971 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:40.971 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:40.971 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:40.971 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.971 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:40.971 17:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.971 17:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.971 17:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.971 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:40.971 17:12:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:40.971 17:12:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:40.971 17:12:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:40.971 17:12:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.971 17:12:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.971 17:12:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:40.971 17:12:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.971 17:12:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:40.971 17:12:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:40.971 17:12:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:40.971 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:40.971 17:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.971 17:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.230 nvme0n1 00:23:41.230 17:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.230 
17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:41.230 17:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.230 17:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.230 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:41.230 17:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.230 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.230 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.230 17:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.230 17:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.230 17:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.230 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:41.230 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:23:41.230 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:41.230 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:41.230 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:41.230 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:41.230 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGMzYTRiOGI4N2U0NGExNGQ5NjQxMjIxMzU0M2FlOGRiNjE4Mzk3NWZmMWRhMmJlEpv5Fw==: 00:23:41.230 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2NhY2Y5MmZjZWUwNmEyODhhMWUzOTBjOTIyMDMzYTRiY2YwM2FiYTcxOWJkN2M46BUmGw==: 00:23:41.230 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:41.230 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:41.230 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGMzYTRiOGI4N2U0NGExNGQ5NjQxMjIxMzU0M2FlOGRiNjE4Mzk3NWZmMWRhMmJlEpv5Fw==: 00:23:41.230 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2NhY2Y5MmZjZWUwNmEyODhhMWUzOTBjOTIyMDMzYTRiY2YwM2FiYTcxOWJkN2M46BUmGw==: ]] 00:23:41.230 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2NhY2Y5MmZjZWUwNmEyODhhMWUzOTBjOTIyMDMzYTRiY2YwM2FiYTcxOWJkN2M46BUmGw==: 00:23:41.230 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:23:41.230 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:41.230 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:41.230 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:41.230 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:41.230 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:41.230 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:41.230 17:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.230 17:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.230 17:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.230 17:12:40 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:41.230 17:12:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:41.230 17:12:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:41.230 17:12:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:41.230 17:12:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:41.230 17:12:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:41.230 17:12:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:41.230 17:12:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:41.230 17:12:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:41.230 17:12:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:41.230 17:12:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:41.230 17:12:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:41.230 17:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.230 17:12:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.488 nvme0n1 00:23:41.488 17:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.488 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:41.488 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:41.488 17:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.488 17:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.488 17:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.488 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.488 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.488 17:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.488 17:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.488 17:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.488 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:41.488 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:23:41.488 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:41.488 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:41.488 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:41.488 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:41.488 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA2M2MwZTNhODU4MTA3Y2VhY2JhZDA1OGE5OTEwMzlhdiIR: 00:23:41.488 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2JmYTQ3YmIwYmQ2Y2RmYTRhYWRmMDVkZTQwNzE0NWZwM8hh: 00:23:41.488 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:41.488 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:23:41.488 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA2M2MwZTNhODU4MTA3Y2VhY2JhZDA1OGE5OTEwMzlhdiIR: 00:23:41.488 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2JmYTQ3YmIwYmQ2Y2RmYTRhYWRmMDVkZTQwNzE0NWZwM8hh: ]] 00:23:41.488 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2JmYTQ3YmIwYmQ2Y2RmYTRhYWRmMDVkZTQwNzE0NWZwM8hh: 00:23:41.488 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:23:41.488 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:41.488 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:41.488 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:41.488 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:41.488 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:41.488 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:41.488 17:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.488 17:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.746 17:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.746 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:41.746 17:12:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:41.746 17:12:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:41.746 17:12:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:41.746 17:12:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:41.746 17:12:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:41.746 17:12:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:41.746 17:12:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:41.746 17:12:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:41.746 17:12:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:41.746 17:12:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:41.746 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:41.746 17:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.746 17:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.746 nvme0n1 00:23:41.746 17:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.746 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:41.746 17:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.746 17:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.746 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:41.746 17:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.746 17:12:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.746 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.746 17:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.746 17:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.004 17:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.004 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:42.004 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:23:42.004 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:42.004 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:42.004 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:42.004 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:42.004 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTM5Yzk5ZTQ5OGIxZmZlZmQ4OGI2ZWUwYTM3NmZhYzRmNTAxOGFkMzFmZmZjZGJhpCNdxg==: 00:23:42.004 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODA4YTY4YWExYWYyMzNiZmZhNjMxZmMwY2RkMTE1YzaS0lGr: 00:23:42.004 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:42.004 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:42.004 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTM5Yzk5ZTQ5OGIxZmZlZmQ4OGI2ZWUwYTM3NmZhYzRmNTAxOGFkMzFmZmZjZGJhpCNdxg==: 00:23:42.004 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODA4YTY4YWExYWYyMzNiZmZhNjMxZmMwY2RkMTE1YzaS0lGr: ]] 00:23:42.004 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODA4YTY4YWExYWYyMzNiZmZhNjMxZmMwY2RkMTE1YzaS0lGr: 00:23:42.004 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:23:42.004 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:42.004 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:42.004 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:42.004 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:42.004 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:42.004 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:42.004 17:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.004 17:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.004 17:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.004 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:42.004 17:12:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:42.004 17:12:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:42.004 17:12:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:42.004 17:12:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.004 17:12:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:23:42.004 17:12:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:42.004 17:12:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.004 17:12:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:42.004 17:12:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:42.004 17:12:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:42.004 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:42.004 17:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.004 17:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.005 nvme0n1 00:23:42.005 17:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.005 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:42.005 17:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.005 17:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.005 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:42.005 17:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.262 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.262 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:42.262 17:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.262 17:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.262 17:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.262 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:42.262 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:23:42.262 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:42.262 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:42.262 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:42.262 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:42.262 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmVmNWM3NjUxODQzOTRmYjc3MDAzMDI2YTJmMGY3NzE3MzVjYjlmMDliODdiMTlkNTg5ZTVjMDJjYTAwMmRkZTW1rAU=: 00:23:42.262 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:42.262 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:42.262 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:42.262 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmVmNWM3NjUxODQzOTRmYjc3MDAzMDI2YTJmMGY3NzE3MzVjYjlmMDliODdiMTlkNTg5ZTVjMDJjYTAwMmRkZTW1rAU=: 00:23:42.262 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:42.262 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:23:42.262 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:42.262 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:42.262 
17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:42.262 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:42.262 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:42.262 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:42.262 17:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.262 17:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.262 17:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.262 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:42.262 17:12:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:42.262 17:12:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:42.262 17:12:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:42.262 17:12:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.262 17:12:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.262 17:12:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:42.262 17:12:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.262 17:12:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:42.262 17:12:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:42.262 17:12:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:42.262 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:42.262 17:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.262 17:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.262 nvme0n1 00:23:42.262 17:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.262 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:42.262 17:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.262 17:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.262 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:42.262 17:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.519 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.519 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:42.519 17:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.519 17:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.519 17:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.519 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:42.519 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:42.519 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:23:42.519 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:42.519 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:42.519 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:42.519 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:42.519 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODRlMDUwM2MyZTdjOWRjOTUxZTM1MzE2YjFjYjZhMzEZWmBk: 00:23:42.519 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGMxOWRmNWFlOGRkMGU5ODU4ZGJjNjZjNmQwYTUzMjkyNTI1YmYzYmU5NTk0MTc1NGYwODQwNDAwMTQ5NGEyYSPY7SA=: 00:23:42.519 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:42.519 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:42.519 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODRlMDUwM2MyZTdjOWRjOTUxZTM1MzE2YjFjYjZhMzEZWmBk: 00:23:42.520 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGMxOWRmNWFlOGRkMGU5ODU4ZGJjNjZjNmQwYTUzMjkyNTI1YmYzYmU5NTk0MTc1NGYwODQwNDAwMTQ5NGEyYSPY7SA=: ]] 00:23:42.520 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGMxOWRmNWFlOGRkMGU5ODU4ZGJjNjZjNmQwYTUzMjkyNTI1YmYzYmU5NTk0MTc1NGYwODQwNDAwMTQ5NGEyYSPY7SA=: 00:23:42.520 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:23:42.520 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:42.520 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:42.520 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:42.520 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:42.520 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:42.520 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:42.520 17:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.520 17:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.520 17:12:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.520 17:12:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:42.520 17:12:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:42.520 17:12:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:42.520 17:12:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:42.520 17:12:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.520 17:12:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.520 17:12:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:42.520 17:12:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.520 17:12:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:42.520 17:12:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:42.520 17:12:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:42.520 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:42.520 17:12:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.520 17:12:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.777 nvme0n1 00:23:42.777 17:12:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.777 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:42.777 17:12:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.777 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:42.777 17:12:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.777 17:12:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.777 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.777 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:42.777 17:12:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.777 17:12:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.777 17:12:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.777 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:42.777 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:23:42.777 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:42.777 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:42.777 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:42.777 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:42.777 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGMzYTRiOGI4N2U0NGExNGQ5NjQxMjIxMzU0M2FlOGRiNjE4Mzk3NWZmMWRhMmJlEpv5Fw==: 00:23:42.777 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2NhY2Y5MmZjZWUwNmEyODhhMWUzOTBjOTIyMDMzYTRiY2YwM2FiYTcxOWJkN2M46BUmGw==: 00:23:42.777 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:42.777 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:42.777 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGMzYTRiOGI4N2U0NGExNGQ5NjQxMjIxMzU0M2FlOGRiNjE4Mzk3NWZmMWRhMmJlEpv5Fw==: 00:23:42.777 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2NhY2Y5MmZjZWUwNmEyODhhMWUzOTBjOTIyMDMzYTRiY2YwM2FiYTcxOWJkN2M46BUmGw==: ]] 00:23:42.777 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2NhY2Y5MmZjZWUwNmEyODhhMWUzOTBjOTIyMDMzYTRiY2YwM2FiYTcxOWJkN2M46BUmGw==: 00:23:42.777 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:23:42.777 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:42.777 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:42.777 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:42.777 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:42.777 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:42.777 17:12:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:42.777 17:12:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.777 17:12:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.777 17:12:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.777 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:42.777 17:12:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:42.777 17:12:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:42.777 17:12:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:42.777 17:12:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.777 17:12:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.777 17:12:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:42.777 17:12:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.777 17:12:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:42.777 17:12:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:42.777 17:12:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:42.777 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:42.777 17:12:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.777 17:12:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.035 nvme0n1 00:23:43.035 17:12:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.035 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:43.035 17:12:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.035 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:43.035 17:12:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.035 17:12:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.035 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.035 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.035 17:12:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.035 17:12:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.035 17:12:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.035 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:43.035 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:23:43.035 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:43.035 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:43.035 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:43.035 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
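
Each nvmet_auth_set_key invocation in this trace echoes the digest string ('hmac(sha512)'), the FFDHE group and the DHHC-1 secrets for one keyid before the connect attempt. The DHHC-1 strings follow the nvme-cli key format, DHHC-1:<id>:<base64 secret>:, and the echoes presumably land in the kernel nvmet configfs host entry; the paths below are an assumption for illustration only and are not shown anywhere in this trace:

    # assumed configfs layout of the Linux nvmet target driven by nvmet_auth_set_key
    host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path
    echo 'hmac(sha512)' > "$host_dir/dhchap_hash"       # digest under test
    echo ffdhe4096      > "$host_dir/dhchap_dhgroup"    # FFDHE group under test
    echo 'DHHC-1:01:YjA2M2MwZTNhODU4MTA3Y2VhY2JhZDA1OGE5OTEwMzlhdiIR:' > "$host_dir/dhchap_key"
    echo 'DHHC-1:01:N2JmYTQ3YmIwYmQ2Y2RmYTRhYWRmMDVkZTQwNzE0NWZwM8hh:' > "$host_dir/dhchap_ctrl_key"
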
00:23:43.035 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA2M2MwZTNhODU4MTA3Y2VhY2JhZDA1OGE5OTEwMzlhdiIR: 00:23:43.035 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2JmYTQ3YmIwYmQ2Y2RmYTRhYWRmMDVkZTQwNzE0NWZwM8hh: 00:23:43.035 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:43.035 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:43.035 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA2M2MwZTNhODU4MTA3Y2VhY2JhZDA1OGE5OTEwMzlhdiIR: 00:23:43.035 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2JmYTQ3YmIwYmQ2Y2RmYTRhYWRmMDVkZTQwNzE0NWZwM8hh: ]] 00:23:43.035 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2JmYTQ3YmIwYmQ2Y2RmYTRhYWRmMDVkZTQwNzE0NWZwM8hh: 00:23:43.035 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:23:43.035 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:43.035 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:43.035 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:43.035 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:43.035 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:43.035 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:43.035 17:12:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.035 17:12:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.035 17:12:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.035 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:43.035 17:12:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:43.035 17:12:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:43.036 17:12:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:43.036 17:12:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.036 17:12:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.036 17:12:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:43.036 17:12:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:43.036 17:12:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:43.036 17:12:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:43.036 17:12:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:43.036 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:43.036 17:12:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.036 17:12:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.293 nvme0n1 00:23:43.293 17:12:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.551 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:23:43.551 17:12:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.551 17:12:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.551 17:12:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:43.551 17:12:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.551 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.551 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.551 17:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.551 17:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.551 17:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.551 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:43.551 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:23:43.551 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:43.551 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:43.551 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:43.551 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:43.551 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTM5Yzk5ZTQ5OGIxZmZlZmQ4OGI2ZWUwYTM3NmZhYzRmNTAxOGFkMzFmZmZjZGJhpCNdxg==: 00:23:43.551 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODA4YTY4YWExYWYyMzNiZmZhNjMxZmMwY2RkMTE1YzaS0lGr: 00:23:43.551 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:43.551 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:43.551 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTM5Yzk5ZTQ5OGIxZmZlZmQ4OGI2ZWUwYTM3NmZhYzRmNTAxOGFkMzFmZmZjZGJhpCNdxg==: 00:23:43.551 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODA4YTY4YWExYWYyMzNiZmZhNjMxZmMwY2RkMTE1YzaS0lGr: ]] 00:23:43.551 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODA4YTY4YWExYWYyMzNiZmZhNjMxZmMwY2RkMTE1YzaS0lGr: 00:23:43.551 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:23:43.551 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:43.551 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:43.551 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:43.551 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:43.551 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:43.551 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:43.551 17:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.551 17:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.551 17:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.551 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:43.551 17:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:23:43.551 17:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:43.551 17:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:43.551 17:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.551 17:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.551 17:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:43.551 17:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:43.551 17:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:43.551 17:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:43.551 17:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:43.551 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:43.551 17:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.551 17:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.809 nvme0n1 00:23:43.809 17:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.809 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:43.809 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:43.809 17:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.809 17:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.809 17:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.809 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.809 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.809 17:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.809 17:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.809 17:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.809 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:43.809 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:23:43.809 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:43.809 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:43.809 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:43.809 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:43.809 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmVmNWM3NjUxODQzOTRmYjc3MDAzMDI2YTJmMGY3NzE3MzVjYjlmMDliODdiMTlkNTg5ZTVjMDJjYTAwMmRkZTW1rAU=: 00:23:43.809 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:43.809 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:43.809 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:43.809 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZmVmNWM3NjUxODQzOTRmYjc3MDAzMDI2YTJmMGY3NzE3MzVjYjlmMDliODdiMTlkNTg5ZTVjMDJjYTAwMmRkZTW1rAU=: 00:23:43.809 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:43.809 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:23:43.809 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:43.809 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:43.809 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:43.809 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:43.809 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:43.809 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:43.809 17:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.809 17:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.809 17:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.809 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:43.809 17:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:43.809 17:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:43.809 17:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:43.809 17:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.809 17:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.809 17:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:43.809 17:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:43.809 17:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:43.809 17:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:43.809 17:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:43.809 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:43.809 17:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.809 17:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.374 nvme0n1 00:23:44.374 17:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.374 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:44.374 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:44.374 17:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.374 17:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.374 17:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.374 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.374 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:44.374 17:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:23:44.374 17:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.374 17:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.374 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:44.374 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:44.374 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:23:44.374 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:44.374 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:44.374 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:44.374 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:44.374 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODRlMDUwM2MyZTdjOWRjOTUxZTM1MzE2YjFjYjZhMzEZWmBk: 00:23:44.374 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGMxOWRmNWFlOGRkMGU5ODU4ZGJjNjZjNmQwYTUzMjkyNTI1YmYzYmU5NTk0MTc1NGYwODQwNDAwMTQ5NGEyYSPY7SA=: 00:23:44.374 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:44.374 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:44.374 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODRlMDUwM2MyZTdjOWRjOTUxZTM1MzE2YjFjYjZhMzEZWmBk: 00:23:44.374 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGMxOWRmNWFlOGRkMGU5ODU4ZGJjNjZjNmQwYTUzMjkyNTI1YmYzYmU5NTk0MTc1NGYwODQwNDAwMTQ5NGEyYSPY7SA=: ]] 00:23:44.374 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGMxOWRmNWFlOGRkMGU5ODU4ZGJjNjZjNmQwYTUzMjkyNTI1YmYzYmU5NTk0MTc1NGYwODQwNDAwMTQ5NGEyYSPY7SA=: 00:23:44.374 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:23:44.374 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:44.374 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:44.374 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:44.374 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:44.374 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:44.374 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:44.374 17:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.374 17:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.374 17:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.374 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:44.374 17:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:44.374 17:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:44.374 17:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:44.374 17:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:44.374 17:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:44.374 17:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:23:44.374 17:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:44.374 17:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:44.374 17:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:44.375 17:12:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:44.375 17:12:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:44.375 17:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.375 17:12:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.940 nvme0n1 00:23:44.940 17:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.940 17:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:44.940 17:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:44.940 17:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.940 17:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.940 17:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.940 17:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.940 17:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:44.940 17:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.940 17:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.940 17:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.940 17:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:44.940 17:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:23:44.940 17:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:44.940 17:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:44.940 17:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:44.940 17:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:44.940 17:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGMzYTRiOGI4N2U0NGExNGQ5NjQxMjIxMzU0M2FlOGRiNjE4Mzk3NWZmMWRhMmJlEpv5Fw==: 00:23:44.940 17:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2NhY2Y5MmZjZWUwNmEyODhhMWUzOTBjOTIyMDMzYTRiY2YwM2FiYTcxOWJkN2M46BUmGw==: 00:23:44.940 17:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:44.940 17:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:44.940 17:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGMzYTRiOGI4N2U0NGExNGQ5NjQxMjIxMzU0M2FlOGRiNjE4Mzk3NWZmMWRhMmJlEpv5Fw==: 00:23:44.940 17:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2NhY2Y5MmZjZWUwNmEyODhhMWUzOTBjOTIyMDMzYTRiY2YwM2FiYTcxOWJkN2M46BUmGw==: ]] 00:23:44.940 17:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2NhY2Y5MmZjZWUwNmEyODhhMWUzOTBjOTIyMDMzYTRiY2YwM2FiYTcxOWJkN2M46BUmGw==: 00:23:44.940 17:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
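
Before every attach, get_main_ns_ip resolves which address the initiator should dial: an associative array maps the transport to the name of an environment variable (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp), and the value behind that name, 10.0.0.1 in this run, is echoed back. A sketch reconstructed from the nvmf/common.sh xtrace lines above (the transport variable name is an assumption; the trace only shows it already expanded to "tcp"):

    get_main_ns_ip() {    # reconstruction for illustration, not the library source
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # variable *name*, e.g. NVMF_INITIATOR_IP
        ip=${!ip}                              # indirect expansion -> 10.0.0.1
        [[ -z $ip ]] && return 1
        echo "$ip"
    }
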
00:23:44.940 17:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:44.940 17:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:44.940 17:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:44.940 17:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:44.940 17:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:44.940 17:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:44.940 17:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.940 17:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.940 17:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.940 17:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:44.940 17:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:44.940 17:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:44.940 17:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:44.940 17:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:44.940 17:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:44.940 17:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:44.940 17:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:44.940 17:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:44.940 17:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:44.940 17:12:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:44.940 17:12:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:44.940 17:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.940 17:12:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.505 nvme0n1 00:23:45.506 17:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.506 17:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:45.506 17:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.506 17:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.506 17:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:45.506 17:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.506 17:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:45.506 17:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:45.506 17:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.506 17:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.506 17:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.506 17:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:23:45.506 17:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:23:45.506 17:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:45.506 17:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:45.506 17:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:45.506 17:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:45.506 17:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA2M2MwZTNhODU4MTA3Y2VhY2JhZDA1OGE5OTEwMzlhdiIR: 00:23:45.506 17:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2JmYTQ3YmIwYmQ2Y2RmYTRhYWRmMDVkZTQwNzE0NWZwM8hh: 00:23:45.506 17:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:45.506 17:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:45.506 17:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA2M2MwZTNhODU4MTA3Y2VhY2JhZDA1OGE5OTEwMzlhdiIR: 00:23:45.506 17:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2JmYTQ3YmIwYmQ2Y2RmYTRhYWRmMDVkZTQwNzE0NWZwM8hh: ]] 00:23:45.506 17:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2JmYTQ3YmIwYmQ2Y2RmYTRhYWRmMDVkZTQwNzE0NWZwM8hh: 00:23:45.506 17:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:23:45.506 17:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:45.506 17:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:45.506 17:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:45.506 17:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:45.506 17:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:45.506 17:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:45.506 17:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.506 17:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.506 17:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.506 17:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:45.506 17:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:45.506 17:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:45.506 17:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:45.506 17:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:45.506 17:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:45.506 17:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:45.506 17:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:45.506 17:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:45.506 17:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:45.506 17:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:45.506 17:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:45.506 17:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.506 17:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.071 nvme0n1 00:23:46.071 17:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.071 17:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:46.071 17:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:46.071 17:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.071 17:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.071 17:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.327 17:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.327 17:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:46.327 17:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.327 17:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.327 17:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.327 17:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:46.327 17:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:23:46.327 17:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:46.327 17:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:46.327 17:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:46.327 17:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:46.327 17:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTM5Yzk5ZTQ5OGIxZmZlZmQ4OGI2ZWUwYTM3NmZhYzRmNTAxOGFkMzFmZmZjZGJhpCNdxg==: 00:23:46.327 17:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODA4YTY4YWExYWYyMzNiZmZhNjMxZmMwY2RkMTE1YzaS0lGr: 00:23:46.327 17:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:46.327 17:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:46.327 17:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTM5Yzk5ZTQ5OGIxZmZlZmQ4OGI2ZWUwYTM3NmZhYzRmNTAxOGFkMzFmZmZjZGJhpCNdxg==: 00:23:46.327 17:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODA4YTY4YWExYWYyMzNiZmZhNjMxZmMwY2RkMTE1YzaS0lGr: ]] 00:23:46.327 17:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODA4YTY4YWExYWYyMzNiZmZhNjMxZmMwY2RkMTE1YzaS0lGr: 00:23:46.327 17:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:23:46.327 17:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:46.327 17:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:46.327 17:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:46.327 17:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:46.327 17:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:46.327 17:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:46.327 17:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.327 17:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.327 17:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.327 17:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:46.327 17:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:46.327 17:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:46.327 17:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:46.327 17:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:46.327 17:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:46.327 17:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:46.327 17:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:46.327 17:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:46.327 17:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:46.327 17:12:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:46.327 17:12:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:46.327 17:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.327 17:12:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.889 nvme0n1 00:23:46.889 17:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.889 17:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:46.889 17:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:46.889 17:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.889 17:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.889 17:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.889 17:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.889 17:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:46.889 17:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.889 17:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.889 17:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.889 17:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:46.889 17:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:23:46.889 17:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:46.889 17:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:46.890 17:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:46.890 17:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:46.890 17:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZmVmNWM3NjUxODQzOTRmYjc3MDAzMDI2YTJmMGY3NzE3MzVjYjlmMDliODdiMTlkNTg5ZTVjMDJjYTAwMmRkZTW1rAU=: 00:23:46.890 17:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:46.890 17:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:46.890 17:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:46.890 17:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmVmNWM3NjUxODQzOTRmYjc3MDAzMDI2YTJmMGY3NzE3MzVjYjlmMDliODdiMTlkNTg5ZTVjMDJjYTAwMmRkZTW1rAU=: 00:23:46.890 17:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:46.890 17:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:23:46.890 17:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:46.890 17:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:46.890 17:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:46.890 17:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:46.890 17:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:46.890 17:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:46.890 17:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.890 17:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.890 17:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.890 17:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:46.890 17:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:46.890 17:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:46.890 17:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:46.890 17:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:46.890 17:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:46.890 17:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:46.890 17:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:46.890 17:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:46.890 17:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:46.890 17:12:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:46.890 17:12:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:46.890 17:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.890 17:12:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.452 nvme0n1 00:23:47.452 17:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.452 17:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:47.452 17:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:47.452 17:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.452 17:12:47 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.452 17:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.452 17:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.452 17:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:47.452 17:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.452 17:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.452 17:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.452 17:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:47.452 17:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:47.452 17:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:23:47.452 17:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:47.452 17:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:47.452 17:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:47.452 17:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:47.452 17:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODRlMDUwM2MyZTdjOWRjOTUxZTM1MzE2YjFjYjZhMzEZWmBk: 00:23:47.452 17:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGMxOWRmNWFlOGRkMGU5ODU4ZGJjNjZjNmQwYTUzMjkyNTI1YmYzYmU5NTk0MTc1NGYwODQwNDAwMTQ5NGEyYSPY7SA=: 00:23:47.452 17:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:47.452 17:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:47.452 17:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODRlMDUwM2MyZTdjOWRjOTUxZTM1MzE2YjFjYjZhMzEZWmBk: 00:23:47.452 17:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGMxOWRmNWFlOGRkMGU5ODU4ZGJjNjZjNmQwYTUzMjkyNTI1YmYzYmU5NTk0MTc1NGYwODQwNDAwMTQ5NGEyYSPY7SA=: ]] 00:23:47.452 17:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGMxOWRmNWFlOGRkMGU5ODU4ZGJjNjZjNmQwYTUzMjkyNTI1YmYzYmU5NTk0MTc1NGYwODQwNDAwMTQ5NGEyYSPY7SA=: 00:23:47.452 17:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:23:47.452 17:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:47.452 17:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:47.452 17:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:47.452 17:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:47.452 17:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:47.452 17:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:47.452 17:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.452 17:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.452 17:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.452 17:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:47.452 17:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:47.452 17:12:47 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:23:47.452 17:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:47.452 17:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:47.452 17:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:47.452 17:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:47.452 17:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:47.452 17:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:47.452 17:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:47.452 17:12:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:47.452 17:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:47.452 17:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.452 17:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.380 nvme0n1 00:23:48.380 17:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.380 17:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:48.380 17:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.380 17:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.380 17:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:48.380 17:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.380 17:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.380 17:12:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:48.380 17:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.380 17:12:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.380 17:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.380 17:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:48.380 17:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:23:48.380 17:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:48.380 17:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:48.380 17:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:48.380 17:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:48.380 17:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGMzYTRiOGI4N2U0NGExNGQ5NjQxMjIxMzU0M2FlOGRiNjE4Mzk3NWZmMWRhMmJlEpv5Fw==: 00:23:48.380 17:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2NhY2Y5MmZjZWUwNmEyODhhMWUzOTBjOTIyMDMzYTRiY2YwM2FiYTcxOWJkN2M46BUmGw==: 00:23:48.380 17:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:48.380 17:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:48.380 17:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZGMzYTRiOGI4N2U0NGExNGQ5NjQxMjIxMzU0M2FlOGRiNjE4Mzk3NWZmMWRhMmJlEpv5Fw==: 00:23:48.380 17:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2NhY2Y5MmZjZWUwNmEyODhhMWUzOTBjOTIyMDMzYTRiY2YwM2FiYTcxOWJkN2M46BUmGw==: ]] 00:23:48.380 17:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2NhY2Y5MmZjZWUwNmEyODhhMWUzOTBjOTIyMDMzYTRiY2YwM2FiYTcxOWJkN2M46BUmGw==: 00:23:48.380 17:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:23:48.380 17:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:48.380 17:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:48.380 17:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:48.380 17:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:48.381 17:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:48.381 17:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:48.381 17:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.381 17:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.381 17:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.381 17:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:48.381 17:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:48.381 17:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:48.381 17:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:48.381 17:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:48.381 17:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:48.381 17:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:48.381 17:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:48.381 17:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:48.381 17:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:48.381 17:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:48.381 17:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:48.381 17:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.381 17:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.311 nvme0n1 00:23:49.311 17:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.311 17:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.311 17:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:49.311 17:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.311 17:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.311 17:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.311 17:12:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.311 17:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.311 17:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.311 17:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.311 17:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.311 17:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:49.311 17:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:23:49.311 17:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:49.311 17:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:49.311 17:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:49.311 17:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:49.311 17:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA2M2MwZTNhODU4MTA3Y2VhY2JhZDA1OGE5OTEwMzlhdiIR: 00:23:49.311 17:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2JmYTQ3YmIwYmQ2Y2RmYTRhYWRmMDVkZTQwNzE0NWZwM8hh: 00:23:49.311 17:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:49.311 17:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:49.311 17:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA2M2MwZTNhODU4MTA3Y2VhY2JhZDA1OGE5OTEwMzlhdiIR: 00:23:49.311 17:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2JmYTQ3YmIwYmQ2Y2RmYTRhYWRmMDVkZTQwNzE0NWZwM8hh: ]] 00:23:49.311 17:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2JmYTQ3YmIwYmQ2Y2RmYTRhYWRmMDVkZTQwNzE0NWZwM8hh: 00:23:49.311 17:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:23:49.311 17:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:49.311 17:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:49.311 17:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:49.311 17:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:49.311 17:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:49.311 17:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:49.312 17:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.312 17:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.312 17:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.312 17:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:49.312 17:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:49.312 17:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:49.312 17:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:49.312 17:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.312 17:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.312 17:12:48 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:49.312 17:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.312 17:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:49.312 17:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:49.312 17:12:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:49.312 17:12:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:49.312 17:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.312 17:12:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.245 nvme0n1 00:23:50.245 17:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.245 17:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:50.245 17:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:50.245 17:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.245 17:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.245 17:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.503 17:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.503 17:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:50.503 17:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.503 17:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.503 17:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.503 17:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:50.503 17:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:23:50.503 17:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:50.503 17:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:50.503 17:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:50.503 17:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:50.503 17:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTM5Yzk5ZTQ5OGIxZmZlZmQ4OGI2ZWUwYTM3NmZhYzRmNTAxOGFkMzFmZmZjZGJhpCNdxg==: 00:23:50.503 17:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODA4YTY4YWExYWYyMzNiZmZhNjMxZmMwY2RkMTE1YzaS0lGr: 00:23:50.503 17:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:50.503 17:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:50.503 17:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTM5Yzk5ZTQ5OGIxZmZlZmQ4OGI2ZWUwYTM3NmZhYzRmNTAxOGFkMzFmZmZjZGJhpCNdxg==: 00:23:50.503 17:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODA4YTY4YWExYWYyMzNiZmZhNjMxZmMwY2RkMTE1YzaS0lGr: ]] 00:23:50.503 17:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODA4YTY4YWExYWYyMzNiZmZhNjMxZmMwY2RkMTE1YzaS0lGr: 00:23:50.503 17:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:23:50.503 17:12:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:50.503 17:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:50.503 17:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:50.503 17:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:50.503 17:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:50.503 17:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:50.503 17:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.503 17:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.503 17:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.503 17:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:50.503 17:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:50.503 17:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:50.503 17:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:50.503 17:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.503 17:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.503 17:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:50.503 17:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.503 17:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:50.503 17:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:50.503 17:12:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:50.503 17:12:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:50.503 17:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.503 17:12:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.435 nvme0n1 00:23:51.435 17:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.435 17:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.435 17:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.435 17:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.435 17:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:51.435 17:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.435 17:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.435 17:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.435 17:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.435 17:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.435 17:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.435 17:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:23:51.435 17:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:23:51.435 17:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:51.435 17:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:51.435 17:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:51.435 17:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:51.435 17:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmVmNWM3NjUxODQzOTRmYjc3MDAzMDI2YTJmMGY3NzE3MzVjYjlmMDliODdiMTlkNTg5ZTVjMDJjYTAwMmRkZTW1rAU=: 00:23:51.435 17:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:51.435 17:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:51.435 17:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:51.435 17:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmVmNWM3NjUxODQzOTRmYjc3MDAzMDI2YTJmMGY3NzE3MzVjYjlmMDliODdiMTlkNTg5ZTVjMDJjYTAwMmRkZTW1rAU=: 00:23:51.435 17:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:51.435 17:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:23:51.435 17:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:51.435 17:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:51.435 17:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:51.435 17:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:51.435 17:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:51.435 17:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:51.435 17:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.435 17:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.435 17:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.435 17:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:51.435 17:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:51.435 17:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:51.435 17:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:51.435 17:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.435 17:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.435 17:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:51.435 17:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.435 17:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:51.435 17:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:51.435 17:12:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:51.435 17:12:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:51.435 17:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:51.435 17:12:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.368 nvme0n1 00:23:52.368 17:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.368 17:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.368 17:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.368 17:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.368 17:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:52.368 17:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.368 17:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.368 17:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.368 17:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.368 17:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.368 17:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.368 17:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:52.368 17:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:52.368 17:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:52.368 17:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:52.368 17:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:52.368 17:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGMzYTRiOGI4N2U0NGExNGQ5NjQxMjIxMzU0M2FlOGRiNjE4Mzk3NWZmMWRhMmJlEpv5Fw==: 00:23:52.368 17:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2NhY2Y5MmZjZWUwNmEyODhhMWUzOTBjOTIyMDMzYTRiY2YwM2FiYTcxOWJkN2M46BUmGw==: 00:23:52.368 17:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:52.368 17:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:52.368 17:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGMzYTRiOGI4N2U0NGExNGQ5NjQxMjIxMzU0M2FlOGRiNjE4Mzk3NWZmMWRhMmJlEpv5Fw==: 00:23:52.368 17:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2NhY2Y5MmZjZWUwNmEyODhhMWUzOTBjOTIyMDMzYTRiY2YwM2FiYTcxOWJkN2M46BUmGw==: ]] 00:23:52.368 17:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2NhY2Y5MmZjZWUwNmEyODhhMWUzOTBjOTIyMDMzYTRiY2YwM2FiYTcxOWJkN2M46BUmGw==: 00:23:52.368 17:12:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:52.368 17:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.368 17:12:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.368 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.368 17:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:23:52.368 17:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:52.368 17:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:52.368 17:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:52.368 17:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.368 
17:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.368 17:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:52.368 17:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.368 17:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:52.368 17:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:52.368 17:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:52.368 17:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:52.368 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:23:52.368 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:52.368 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:52.368 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:52.368 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:52.368 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:52.368 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:52.368 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.368 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.626 request: 00:23:52.626 { 00:23:52.626 "name": "nvme0", 00:23:52.626 "trtype": "tcp", 00:23:52.626 "traddr": "10.0.0.1", 00:23:52.626 "adrfam": "ipv4", 00:23:52.626 "trsvcid": "4420", 00:23:52.626 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:52.626 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:52.626 "prchk_reftag": false, 00:23:52.626 "prchk_guard": false, 00:23:52.626 "hdgst": false, 00:23:52.626 "ddgst": false, 00:23:52.626 "method": "bdev_nvme_attach_controller", 00:23:52.626 "req_id": 1 00:23:52.626 } 00:23:52.626 Got JSON-RPC error response 00:23:52.626 response: 00:23:52.626 { 00:23:52.626 "code": -5, 00:23:52.626 "message": "Input/output error" 00:23:52.626 } 00:23:52.626 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:52.626 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:23:52.626 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:52.626 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:52.626 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:52.626 17:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.626 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.626 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.626 17:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:23:52.626 17:12:52 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.626 17:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:23:52.626 17:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:23:52.626 17:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:52.626 17:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:52.626 17:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:52.627 17:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.627 17:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.627 17:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:52.627 17:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.627 17:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:52.627 17:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:52.627 17:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:52.627 17:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:52.627 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:23:52.627 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:52.627 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:52.627 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:52.627 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:52.627 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:52.627 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:52.627 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.627 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.627 request: 00:23:52.627 { 00:23:52.627 "name": "nvme0", 00:23:52.627 "trtype": "tcp", 00:23:52.627 "traddr": "10.0.0.1", 00:23:52.627 "adrfam": "ipv4", 00:23:52.627 "trsvcid": "4420", 00:23:52.627 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:52.627 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:52.627 "prchk_reftag": false, 00:23:52.627 "prchk_guard": false, 00:23:52.627 "hdgst": false, 00:23:52.627 "ddgst": false, 00:23:52.627 "dhchap_key": "key2", 00:23:52.627 "method": "bdev_nvme_attach_controller", 00:23:52.627 "req_id": 1 00:23:52.627 } 00:23:52.627 Got JSON-RPC error response 00:23:52.627 response: 00:23:52.627 { 00:23:52.627 "code": -5, 00:23:52.627 "message": "Input/output error" 00:23:52.627 } 00:23:52.627 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:52.627 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:23:52.627 17:12:52 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:52.627 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:52.627 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:52.627 17:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.627 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.627 17:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:23:52.627 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.627 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.627 17:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:23:52.627 17:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:23:52.627 17:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:52.627 17:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:52.627 17:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:52.627 17:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.627 17:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.627 17:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:52.627 17:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.627 17:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:52.627 17:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:52.627 17:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:52.627 17:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:52.627 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:23:52.627 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:52.627 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:52.627 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:52.627 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:52.627 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:52.627 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:52.627 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.627 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.885 request: 00:23:52.885 { 00:23:52.885 "name": "nvme0", 00:23:52.885 "trtype": "tcp", 00:23:52.885 "traddr": "10.0.0.1", 00:23:52.885 "adrfam": "ipv4", 
00:23:52.885 "trsvcid": "4420", 00:23:52.885 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:52.885 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:52.885 "prchk_reftag": false, 00:23:52.885 "prchk_guard": false, 00:23:52.885 "hdgst": false, 00:23:52.885 "ddgst": false, 00:23:52.885 "dhchap_key": "key1", 00:23:52.885 "dhchap_ctrlr_key": "ckey2", 00:23:52.885 "method": "bdev_nvme_attach_controller", 00:23:52.885 "req_id": 1 00:23:52.885 } 00:23:52.885 Got JSON-RPC error response 00:23:52.885 response: 00:23:52.885 { 00:23:52.885 "code": -5, 00:23:52.885 "message": "Input/output error" 00:23:52.885 } 00:23:52.885 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:52.885 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:23:52.885 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:52.885 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:52.885 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:52.885 17:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:23:52.885 17:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:23:52.885 17:12:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:23:52.885 17:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:52.885 17:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:23:52.885 17:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:52.885 17:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:23:52.885 17:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:52.885 17:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:52.885 rmmod nvme_tcp 00:23:52.885 rmmod nvme_fabrics 00:23:52.885 17:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:52.885 17:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:23:52.885 17:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:23:52.885 17:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1209742 ']' 00:23:52.885 17:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1209742 00:23:52.885 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 1209742 ']' 00:23:52.885 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 1209742 00:23:52.885 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:23:52.885 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:52.885 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1209742 00:23:52.885 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:52.885 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:52.885 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1209742' 00:23:52.885 killing process with pid 1209742 00:23:52.885 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 1209742 00:23:52.885 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 1209742 00:23:53.147 17:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:23:53.147 17:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:53.147 17:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:53.147 17:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:53.147 17:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:53.147 17:12:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:53.147 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:53.147 17:12:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.079 17:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:55.079 17:12:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:55.079 17:12:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:55.079 17:12:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:23:55.079 17:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:23:55.079 17:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:23:55.079 17:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:55.079 17:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:55.079 17:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:55.079 17:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:55.079 17:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:55.079 17:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:23:55.079 17:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:56.450 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:56.450 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:56.450 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:56.450 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:56.450 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:56.450 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:56.450 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:56.450 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:56.450 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:56.450 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:56.450 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:56.450 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:56.450 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:56.450 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:56.450 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:56.450 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:57.384 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:23:57.642 17:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.x3D /tmp/spdk.key-null.ABa /tmp/spdk.key-sha256.y0t /tmp/spdk.key-sha384.NVB /tmp/spdk.key-sha512.4MD 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:23:57.642 17:12:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:59.016 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:23:59.016 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:23:59.016 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:23:59.016 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:23:59.016 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:23:59.016 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:23:59.016 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:23:59.016 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:23:59.016 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:23:59.016 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:23:59.016 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:23:59.016 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:23:59.016 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:23:59.016 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:23:59.016 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:23:59.016 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:23:59.016 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:23:59.016 00:23:59.016 real 0m50.051s 00:23:59.016 user 0m47.718s 00:23:59.016 sys 0m5.718s 00:23:59.016 17:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:59.016 17:12:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.016 ************************************ 00:23:59.016 END TEST nvmf_auth_host 00:23:59.016 ************************************ 00:23:59.016 17:12:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:59.016 17:12:58 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:23:59.016 17:12:58 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:59.016 17:12:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:59.016 17:12:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:59.016 17:12:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:59.016 ************************************ 00:23:59.016 START TEST nvmf_digest 00:23:59.016 ************************************ 00:23:59.016 17:12:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:59.016 * Looking for test storage... 
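Note on the nvmf_auth_host suite that finishes above: it provisions DH-CHAP secrets on the kernel nvmet target (nvmet_auth_set_key) for each digest/dhgroup/keyid combination and then re-attaches the controller with the matching --dhchap-key/--dhchap-ctrlr-key pair, while the attach attempts made without a key or with a mismatched key are wrapped in NOT and are expected to fail with the JSON-RPC "Input/output error" (code -5) responses captured above. A minimal sketch of the host-side RPC sequence for one such combination, issued through scripts/rpc.py from an SPDK checkout; it only mirrors the rpc_cmd calls visible in this log and assumes key0/ckey0 are already registered the way the test sets them up earlier in the run:

    # Sketch only: digest=sha512, dhgroup=ffdhe8192, keyid=0, as exercised above.
    # key0/ckey0 are assumed to exist already; 10.0.0.1:4420 is the address the
    # test attaches to in this log.
    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    ./scripts/rpc.py bdev_nvme_get_controllers      # expect nvme0 in the list
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0

The same attach issued without --dhchap-key, or with a key that does not match the target side, is what produces the code -5 request/response dumps above, which the test counts as a pass through the NOT helper.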
00:23:59.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:59.016 17:12:58 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:59.016 17:12:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:23:59.016 17:12:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:59.016 17:12:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:59.016 17:12:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:59.016 17:12:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:59.016 17:12:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:59.016 17:12:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:59.016 17:12:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:59.016 17:12:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:59.016 17:12:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:59.016 17:12:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:59.016 17:12:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:59.016 17:12:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:59.016 17:12:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:59.016 17:12:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:59.016 17:12:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:59.016 17:12:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:59.016 17:12:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:59.016 17:12:58 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:59.016 17:12:58 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:59.016 17:12:58 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:59.016 17:12:58 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.016 17:12:58 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.016 17:12:58 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.016 17:12:58 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:23:59.016 17:12:58 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.016 17:12:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:23:59.016 17:12:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:59.016 17:12:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:59.016 17:12:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:59.016 17:12:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:59.016 17:12:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:59.016 17:12:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:59.016 17:12:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:59.016 17:12:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:59.016 17:12:58 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:23:59.016 17:12:58 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:23:59.016 17:12:58 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:23:59.016 17:12:58 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:23:59.016 17:12:58 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:23:59.016 17:12:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:59.016 17:12:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:59.016 17:12:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:59.016 17:12:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:59.016 17:12:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:59.016 17:12:58 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.016 17:12:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:59.016 17:12:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:59.016 17:12:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:59.016 17:12:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:59.016 17:12:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:23:59.016 17:12:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:01.549 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:01.549 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:24:01.549 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:01.550 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:01.550 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:24:01.550 Found net devices under 0000:84:00.0: cvl_0_0 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:01.550 Found net devices under 0000:84:00.1: cvl_0_1 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:01.550 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:01.550 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:24:01.550 00:24:01.550 --- 10.0.0.2 ping statistics --- 00:24:01.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.550 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:01.550 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:01.550 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:24:01.550 00:24:01.550 --- 10.0.0.1 ping statistics --- 00:24:01.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.550 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:01.550 ************************************ 00:24:01.550 START TEST nvmf_digest_clean 00:24:01.550 ************************************ 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1219262 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1219262 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1219262 ']' 00:24:01.550 17:13:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:01.550 
17:13:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:01.551 17:13:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:01.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:01.551 17:13:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:01.551 17:13:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:01.551 [2024-07-12 17:13:00.878563] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:24:01.551 [2024-07-12 17:13:00.878650] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:01.551 EAL: No free 2048 kB hugepages reported on node 1 00:24:01.551 [2024-07-12 17:13:00.942643] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.551 [2024-07-12 17:13:01.048893] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:01.551 [2024-07-12 17:13:01.048946] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:01.551 [2024-07-12 17:13:01.048974] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:01.551 [2024-07-12 17:13:01.048986] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:01.551 [2024-07-12 17:13:01.048996] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:01.551 [2024-07-12 17:13:01.049042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:01.551 17:13:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:01.551 17:13:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:01.551 17:13:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:01.551 17:13:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:01.551 17:13:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:01.551 17:13:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:01.551 17:13:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:24:01.551 17:13:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:24:01.551 17:13:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:24:01.551 17:13:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.551 17:13:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:01.551 null0 00:24:01.551 [2024-07-12 17:13:01.203496] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:01.551 [2024-07-12 17:13:01.227683] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:01.551 17:13:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.551 17:13:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:24:01.551 17:13:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:01.551 17:13:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:01.551 17:13:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:01.551 17:13:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:01.551 17:13:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:01.551 17:13:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:01.551 17:13:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1219284 00:24:01.551 17:13:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:01.551 17:13:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1219284 /var/tmp/bperf.sock 00:24:01.551 17:13:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1219284 ']' 00:24:01.551 17:13:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:01.551 17:13:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:01.551 17:13:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:24:01.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:01.551 17:13:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:01.551 17:13:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:01.809 [2024-07-12 17:13:01.272998] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:24:01.809 [2024-07-12 17:13:01.273074] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1219284 ] 00:24:01.809 EAL: No free 2048 kB hugepages reported on node 1 00:24:01.809 [2024-07-12 17:13:01.330137] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.809 [2024-07-12 17:13:01.438521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:01.809 17:13:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:01.809 17:13:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:01.809 17:13:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:01.809 17:13:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:01.809 17:13:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:02.376 17:13:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:02.376 17:13:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:02.941 nvme0n1 00:24:02.942 17:13:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:02.942 17:13:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:02.942 Running I/O for 2 seconds... 
00:24:04.840 00:24:04.840 Latency(us) 00:24:04.840 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:04.840 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:04.840 nvme0n1 : 2.00 20360.76 79.53 0.00 0.00 6280.96 2730.67 15534.46 00:24:04.840 =================================================================================================================== 00:24:04.840 Total : 20360.76 79.53 0.00 0.00 6280.96 2730.67 15534.46 00:24:04.840 0 00:24:04.840 17:13:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:04.840 17:13:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:04.840 17:13:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:04.840 17:13:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:04.840 | select(.opcode=="crc32c") 00:24:04.840 | "\(.module_name) \(.executed)"' 00:24:04.840 17:13:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:05.098 17:13:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:05.098 17:13:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:05.098 17:13:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:05.098 17:13:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:05.098 17:13:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1219284 00:24:05.098 17:13:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1219284 ']' 00:24:05.098 17:13:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1219284 00:24:05.098 17:13:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:05.098 17:13:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:05.098 17:13:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1219284 00:24:05.098 17:13:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:05.098 17:13:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:05.098 17:13:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1219284' 00:24:05.098 killing process with pid 1219284 00:24:05.098 17:13:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1219284 00:24:05.098 Received shutdown signal, test time was about 2.000000 seconds 00:24:05.098 00:24:05.099 Latency(us) 00:24:05.099 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:05.099 =================================================================================================================== 00:24:05.099 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:05.099 17:13:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1219284 00:24:05.356 17:13:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:24:05.356 17:13:05 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:05.356 17:13:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:05.356 17:13:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:05.356 17:13:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:05.356 17:13:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:05.356 17:13:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:05.356 17:13:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1219734 00:24:05.356 17:13:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:05.356 17:13:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1219734 /var/tmp/bperf.sock 00:24:05.356 17:13:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1219734 ']' 00:24:05.356 17:13:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:05.356 17:13:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:05.356 17:13:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:05.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:05.356 17:13:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:05.356 17:13:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:05.356 [2024-07-12 17:13:05.049245] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:24:05.356 [2024-07-12 17:13:05.049324] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1219734 ] 00:24:05.356 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:05.356 Zero copy mechanism will not be used. 
00:24:05.613 EAL: No free 2048 kB hugepages reported on node 1 00:24:05.613 [2024-07-12 17:13:05.112290] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.613 [2024-07-12 17:13:05.219655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:05.613 17:13:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:05.613 17:13:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:05.613 17:13:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:05.613 17:13:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:05.613 17:13:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:06.178 17:13:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:06.178 17:13:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:06.435 nvme0n1 00:24:06.435 17:13:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:06.435 17:13:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:06.694 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:06.694 Zero copy mechanism will not be used. 00:24:06.694 Running I/O for 2 seconds... 
00:24:08.592 00:24:08.592 Latency(us) 00:24:08.592 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:08.592 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:08.592 nvme0n1 : 2.00 5171.99 646.50 0.00 0.00 3089.76 564.34 10048.85 00:24:08.592 =================================================================================================================== 00:24:08.592 Total : 5171.99 646.50 0.00 0.00 3089.76 564.34 10048.85 00:24:08.592 0 00:24:08.592 17:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:08.592 17:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:08.592 17:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:08.592 17:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:08.592 | select(.opcode=="crc32c") 00:24:08.592 | "\(.module_name) \(.executed)"' 00:24:08.592 17:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:08.850 17:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:08.850 17:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:08.850 17:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:08.850 17:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:08.850 17:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1219734 00:24:08.850 17:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1219734 ']' 00:24:08.850 17:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1219734 00:24:08.850 17:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:08.850 17:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:08.850 17:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1219734 00:24:08.850 17:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:08.850 17:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:08.850 17:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1219734' 00:24:08.850 killing process with pid 1219734 00:24:08.850 17:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1219734 00:24:08.850 Received shutdown signal, test time was about 2.000000 seconds 00:24:08.850 00:24:08.850 Latency(us) 00:24:08.850 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:08.850 =================================================================================================================== 00:24:08.850 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:08.850 17:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1219734 00:24:09.108 17:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:24:09.108 17:13:08 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:09.108 17:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:09.108 17:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:09.108 17:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:09.108 17:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:09.108 17:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:09.108 17:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1220239 00:24:09.108 17:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1220239 /var/tmp/bperf.sock 00:24:09.108 17:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:09.108 17:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1220239 ']' 00:24:09.108 17:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:09.108 17:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:09.108 17:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:09.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:09.108 17:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:09.108 17:13:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:09.366 [2024-07-12 17:13:08.808662] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
00:24:09.366 [2024-07-12 17:13:08.808761] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1220239 ] 00:24:09.366 EAL: No free 2048 kB hugepages reported on node 1 00:24:09.366 [2024-07-12 17:13:08.866475] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:09.366 [2024-07-12 17:13:08.976260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:09.366 17:13:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:09.366 17:13:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:09.366 17:13:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:09.366 17:13:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:09.366 17:13:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:09.949 17:13:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:09.949 17:13:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:10.206 nvme0n1 00:24:10.206 17:13:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:10.206 17:13:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:10.206 Running I/O for 2 seconds... 
00:24:12.730 00:24:12.730 Latency(us) 00:24:12.730 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:12.730 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:12.730 nvme0n1 : 2.01 23836.89 93.11 0.00 0.00 5361.97 2233.08 15534.46 00:24:12.730 =================================================================================================================== 00:24:12.730 Total : 23836.89 93.11 0.00 0.00 5361.97 2233.08 15534.46 00:24:12.730 0 00:24:12.730 17:13:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:12.730 17:13:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:12.730 17:13:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:12.730 17:13:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:12.730 17:13:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:12.730 | select(.opcode=="crc32c") 00:24:12.730 | "\(.module_name) \(.executed)"' 00:24:12.730 17:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:12.730 17:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:12.730 17:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:12.730 17:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:12.730 17:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1220239 00:24:12.730 17:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1220239 ']' 00:24:12.730 17:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1220239 00:24:12.730 17:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:12.730 17:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:12.730 17:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1220239 00:24:12.730 17:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:12.730 17:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:12.730 17:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1220239' 00:24:12.730 killing process with pid 1220239 00:24:12.730 17:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1220239 00:24:12.730 Received shutdown signal, test time was about 2.000000 seconds 00:24:12.730 00:24:12.730 Latency(us) 00:24:12.730 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:12.730 =================================================================================================================== 00:24:12.730 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:12.730 17:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1220239 00:24:12.988 17:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:24:12.988 17:13:12 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:12.988 17:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:12.988 17:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:12.989 17:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:12.989 17:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:12.989 17:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:12.989 17:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1220644 00:24:12.989 17:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:12.989 17:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1220644 /var/tmp/bperf.sock 00:24:12.989 17:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1220644 ']' 00:24:12.989 17:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:12.989 17:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:12.989 17:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:12.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:12.989 17:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:12.989 17:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:12.989 [2024-07-12 17:13:12.480797] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:24:12.989 [2024-07-12 17:13:12.480889] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1220644 ] 00:24:12.989 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:12.989 Zero copy mechanism will not be used. 
00:24:12.989 EAL: No free 2048 kB hugepages reported on node 1 00:24:12.989 [2024-07-12 17:13:12.539217] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.989 [2024-07-12 17:13:12.644706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:12.989 17:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:12.989 17:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:12.989 17:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:12.989 17:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:12.989 17:13:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:13.553 17:13:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:13.554 17:13:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:13.823 nvme0n1 00:24:13.823 17:13:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:13.823 17:13:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:13.823 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:13.823 Zero copy mechanism will not be used. 00:24:13.823 Running I/O for 2 seconds... 
00:24:16.354 00:24:16.354 Latency(us) 00:24:16.354 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:16.354 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:16.354 nvme0n1 : 2.00 4680.18 585.02 0.00 0.00 3411.41 2621.44 10582.85 00:24:16.354 =================================================================================================================== 00:24:16.354 Total : 4680.18 585.02 0.00 0.00 3411.41 2621.44 10582.85 00:24:16.354 0 00:24:16.354 17:13:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:16.354 17:13:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:16.354 17:13:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:16.354 17:13:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:16.354 17:13:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:16.354 | select(.opcode=="crc32c") 00:24:16.354 | "\(.module_name) \(.executed)"' 00:24:16.354 17:13:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:16.354 17:13:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:16.354 17:13:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:16.354 17:13:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:16.354 17:13:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1220644 00:24:16.354 17:13:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1220644 ']' 00:24:16.354 17:13:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1220644 00:24:16.354 17:13:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:16.354 17:13:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:16.354 17:13:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1220644 00:24:16.354 17:13:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:16.354 17:13:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:16.354 17:13:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1220644' 00:24:16.354 killing process with pid 1220644 00:24:16.354 17:13:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1220644 00:24:16.354 Received shutdown signal, test time was about 2.000000 seconds 00:24:16.354 00:24:16.354 Latency(us) 00:24:16.354 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:16.354 =================================================================================================================== 00:24:16.354 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:16.354 17:13:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1220644 00:24:16.354 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1219262 00:24:16.354 17:13:16 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1219262 ']' 00:24:16.354 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1219262 00:24:16.354 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:16.354 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:16.354 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1219262 00:24:16.354 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:16.354 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:16.354 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1219262' 00:24:16.354 killing process with pid 1219262 00:24:16.354 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1219262 00:24:16.354 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1219262 00:24:16.919 00:24:16.919 real 0m15.484s 00:24:16.919 user 0m29.852s 00:24:16.919 sys 0m5.127s 00:24:16.919 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:16.919 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:16.919 ************************************ 00:24:16.919 END TEST nvmf_digest_clean 00:24:16.919 ************************************ 00:24:16.919 17:13:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:24:16.919 17:13:16 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:24:16.919 17:13:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:16.919 17:13:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:16.919 17:13:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:16.919 ************************************ 00:24:16.919 START TEST nvmf_digest_error 00:24:16.919 ************************************ 00:24:16.919 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:24:16.919 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:24:16.919 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:16.919 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:16.919 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:16.919 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1221157 00:24:16.919 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:16.919 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1221157 00:24:16.919 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1221157 ']' 00:24:16.919 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:24:16.919 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:16.919 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:16.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:16.919 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:16.919 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:16.919 [2024-07-12 17:13:16.403347] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:24:16.919 [2024-07-12 17:13:16.403421] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:16.919 EAL: No free 2048 kB hugepages reported on node 1 00:24:16.919 [2024-07-12 17:13:16.466991] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.919 [2024-07-12 17:13:16.574406] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:16.919 [2024-07-12 17:13:16.574456] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:16.919 [2024-07-12 17:13:16.574485] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:16.919 [2024-07-12 17:13:16.574496] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:16.919 [2024-07-12 17:13:16.574506] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:16.919 [2024-07-12 17:13:16.574531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:16.919 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:16.919 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:16.919 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:16.919 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:16.919 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:17.177 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:17.177 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:24:17.177 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.177 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:17.177 [2024-07-12 17:13:16.643081] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:24:17.177 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.177 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:24:17.177 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:24:17.177 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.177 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:17.177 null0 00:24:17.177 [2024-07-12 17:13:16.761886] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:17.177 [2024-07-12 17:13:16.786103] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:17.177 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.177 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:24:17.177 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:17.177 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:17.177 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:24:17.177 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:24:17.177 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1221228 00:24:17.177 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:24:17.177 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1221228 /var/tmp/bperf.sock 00:24:17.177 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1221228 ']' 00:24:17.177 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:17.177 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:24:17.177 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:17.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:17.177 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:17.177 17:13:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:17.177 [2024-07-12 17:13:16.829858] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:24:17.177 [2024-07-12 17:13:16.829940] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1221228 ] 00:24:17.177 EAL: No free 2048 kB hugepages reported on node 1 00:24:17.434 [2024-07-12 17:13:16.888111] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.434 [2024-07-12 17:13:16.993608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:17.434 17:13:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:17.434 17:13:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:17.434 17:13:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:17.434 17:13:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:18.027 17:13:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:18.027 17:13:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.028 17:13:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:18.028 17:13:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.028 17:13:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:18.028 17:13:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:18.028 nvme0n1 00:24:18.285 17:13:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:24:18.285 17:13:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.285 17:13:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:18.285 17:13:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.285 17:13:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:18.285 17:13:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:18.285 Running I/O for 2 seconds... 00:24:18.285 [2024-07-12 17:13:17.870174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.285 [2024-07-12 17:13:17.870243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:4300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.285 [2024-07-12 17:13:17.870263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.285 [2024-07-12 17:13:17.882299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.285 [2024-07-12 17:13:17.882327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.285 [2024-07-12 17:13:17.882358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.285 [2024-07-12 17:13:17.893571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.285 [2024-07-12 17:13:17.893600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.285 [2024-07-12 17:13:17.893630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.285 [2024-07-12 17:13:17.906632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.285 [2024-07-12 17:13:17.906667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:7005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.285 [2024-07-12 17:13:17.906697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.285 [2024-07-12 17:13:17.917183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.285 [2024-07-12 17:13:17.917210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.285 [2024-07-12 17:13:17.917242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.285 [2024-07-12 17:13:17.929501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.285 [2024-07-12 17:13:17.929528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.285 [2024-07-12 17:13:17.929560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.285 [2024-07-12 17:13:17.939647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.285 [2024-07-12 17:13:17.939674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:633 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:18.285 [2024-07-12 17:13:17.939704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.285 [2024-07-12 17:13:17.953836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.285 [2024-07-12 17:13:17.953864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:8432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.286 [2024-07-12 17:13:17.953895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.286 [2024-07-12 17:13:17.965223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.286 [2024-07-12 17:13:17.965261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.286 [2024-07-12 17:13:17.965293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.286 [2024-07-12 17:13:17.979385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.286 [2024-07-12 17:13:17.979415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:14581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.286 [2024-07-12 17:13:17.979433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.578 [2024-07-12 17:13:17.993448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.578 [2024-07-12 17:13:17.993481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:21102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.578 [2024-07-12 17:13:17.993499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.578 [2024-07-12 17:13:18.004703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.578 [2024-07-12 17:13:18.004754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.578 [2024-07-12 17:13:18.004772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.578 [2024-07-12 17:13:18.020291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.578 [2024-07-12 17:13:18.020319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.578 [2024-07-12 17:13:18.020351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.578 [2024-07-12 17:13:18.030662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.578 [2024-07-12 17:13:18.030689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 
nsid:1 lba:5090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.578 [2024-07-12 17:13:18.030719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.578 [2024-07-12 17:13:18.042859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.578 [2024-07-12 17:13:18.042887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.578 [2024-07-12 17:13:18.042918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.578 [2024-07-12 17:13:18.054524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.578 [2024-07-12 17:13:18.054551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.578 [2024-07-12 17:13:18.054583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.578 [2024-07-12 17:13:18.066255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.578 [2024-07-12 17:13:18.066292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.578 [2024-07-12 17:13:18.066322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.578 [2024-07-12 17:13:18.076195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.578 [2024-07-12 17:13:18.076222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.578 [2024-07-12 17:13:18.076253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.578 [2024-07-12 17:13:18.088171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.578 [2024-07-12 17:13:18.088198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:10545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.578 [2024-07-12 17:13:18.088229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.578 [2024-07-12 17:13:18.101414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.578 [2024-07-12 17:13:18.101441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:9353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.578 [2024-07-12 17:13:18.101477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.578 [2024-07-12 17:13:18.114321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.578 [2024-07-12 17:13:18.114347] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:21124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.578 [2024-07-12 17:13:18.114378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.578 [2024-07-12 17:13:18.124622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.578 [2024-07-12 17:13:18.124659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.578 [2024-07-12 17:13:18.124695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.578 [2024-07-12 17:13:18.140164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.578 [2024-07-12 17:13:18.140201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:20801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.578 [2024-07-12 17:13:18.140231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.578 [2024-07-12 17:13:18.155058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.578 [2024-07-12 17:13:18.155085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:14542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.578 [2024-07-12 17:13:18.155114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.578 [2024-07-12 17:13:18.169915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.578 [2024-07-12 17:13:18.169944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.578 [2024-07-12 17:13:18.169975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.578 [2024-07-12 17:13:18.181603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.578 [2024-07-12 17:13:18.181629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:21237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.578 [2024-07-12 17:13:18.181665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.578 [2024-07-12 17:13:18.191199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.578 [2024-07-12 17:13:18.191226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.578 [2024-07-12 17:13:18.191256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.578 [2024-07-12 17:13:18.202829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.578 
[2024-07-12 17:13:18.202857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.578 [2024-07-12 17:13:18.202889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.578 [2024-07-12 17:13:18.215184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.578 [2024-07-12 17:13:18.215211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.578 [2024-07-12 17:13:18.215241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.578 [2024-07-12 17:13:18.225559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.578 [2024-07-12 17:13:18.225586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.578 [2024-07-12 17:13:18.225617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.578 [2024-07-12 17:13:18.239881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.578 [2024-07-12 17:13:18.239912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.578 [2024-07-12 17:13:18.239929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.862 [2024-07-12 17:13:18.252889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.862 [2024-07-12 17:13:18.252920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.862 [2024-07-12 17:13:18.252937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.862 [2024-07-12 17:13:18.264229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.862 [2024-07-12 17:13:18.264256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:18972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.862 [2024-07-12 17:13:18.264286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.862 [2024-07-12 17:13:18.277131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.862 [2024-07-12 17:13:18.277158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:14916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.862 [2024-07-12 17:13:18.277188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.862 [2024-07-12 17:13:18.288237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x9c2280) 00:24:18.862 [2024-07-12 17:13:18.288268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.862 [2024-07-12 17:13:18.288299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.862 [2024-07-12 17:13:18.300942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.862 [2024-07-12 17:13:18.300969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:8388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.862 [2024-07-12 17:13:18.301000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.862 [2024-07-12 17:13:18.312115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.862 [2024-07-12 17:13:18.312142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.862 [2024-07-12 17:13:18.312173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.862 [2024-07-12 17:13:18.322574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.862 [2024-07-12 17:13:18.322604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:2083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.862 [2024-07-12 17:13:18.322635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.862 [2024-07-12 17:13:18.336309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.862 [2024-07-12 17:13:18.336340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:22322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.862 [2024-07-12 17:13:18.336370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.862 [2024-07-12 17:13:18.346294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.862 [2024-07-12 17:13:18.346320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.862 [2024-07-12 17:13:18.346349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.862 [2024-07-12 17:13:18.359638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.862 [2024-07-12 17:13:18.359664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.862 [2024-07-12 17:13:18.359695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.862 [2024-07-12 17:13:18.369518] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.862 [2024-07-12 17:13:18.369552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.862 [2024-07-12 17:13:18.369583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.862 [2024-07-12 17:13:18.384254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.862 [2024-07-12 17:13:18.384291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:3586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.862 [2024-07-12 17:13:18.384321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.862 [2024-07-12 17:13:18.399184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.862 [2024-07-12 17:13:18.399211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:4087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.862 [2024-07-12 17:13:18.399241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.862 [2024-07-12 17:13:18.410929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.862 [2024-07-12 17:13:18.410957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.862 [2024-07-12 17:13:18.410988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.862 [2024-07-12 17:13:18.421250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.862 [2024-07-12 17:13:18.421278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:10183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.862 [2024-07-12 17:13:18.421308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.862 [2024-07-12 17:13:18.435794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.862 [2024-07-12 17:13:18.435821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.862 [2024-07-12 17:13:18.435852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.862 [2024-07-12 17:13:18.445645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.862 [2024-07-12 17:13:18.445672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.862 [2024-07-12 17:13:18.445702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:24:18.862 [2024-07-12 17:13:18.459568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.862 [2024-07-12 17:13:18.459597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:18939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.862 [2024-07-12 17:13:18.459628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.862 [2024-07-12 17:13:18.472990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.862 [2024-07-12 17:13:18.473028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:18134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.862 [2024-07-12 17:13:18.473046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.862 [2024-07-12 17:13:18.484010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.863 [2024-07-12 17:13:18.484037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:24735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.863 [2024-07-12 17:13:18.484053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.863 [2024-07-12 17:13:18.497696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.863 [2024-07-12 17:13:18.497723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:25456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.863 [2024-07-12 17:13:18.497768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.863 [2024-07-12 17:13:18.509832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.863 [2024-07-12 17:13:18.509859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.863 [2024-07-12 17:13:18.509890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.863 [2024-07-12 17:13:18.520607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.863 [2024-07-12 17:13:18.520634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:14501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.863 [2024-07-12 17:13:18.520666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.863 [2024-07-12 17:13:18.533643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.863 [2024-07-12 17:13:18.533671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.863 [2024-07-12 17:13:18.533701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.863 [2024-07-12 17:13:18.544196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:18.863 [2024-07-12 17:13:18.544223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:17188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.863 [2024-07-12 17:13:18.544254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.120 [2024-07-12 17:13:18.557124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.120 [2024-07-12 17:13:18.557152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.120 [2024-07-12 17:13:18.557182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.120 [2024-07-12 17:13:18.569913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.120 [2024-07-12 17:13:18.569944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.120 [2024-07-12 17:13:18.569961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.120 [2024-07-12 17:13:18.580335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.120 [2024-07-12 17:13:18.580362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.120 [2024-07-12 17:13:18.580391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.120 [2024-07-12 17:13:18.595672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.120 [2024-07-12 17:13:18.595714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:19358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.120 [2024-07-12 17:13:18.595731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.120 [2024-07-12 17:13:18.607423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.120 [2024-07-12 17:13:18.607452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.120 [2024-07-12 17:13:18.607484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.120 [2024-07-12 17:13:18.618671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.120 [2024-07-12 17:13:18.618698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:13035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.120 [2024-07-12 17:13:18.618746] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.120 [2024-07-12 17:13:18.632655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.120 [2024-07-12 17:13:18.632682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.120 [2024-07-12 17:13:18.632714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.120 [2024-07-12 17:13:18.643085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.120 [2024-07-12 17:13:18.643126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:6276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.120 [2024-07-12 17:13:18.643142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.120 [2024-07-12 17:13:18.658176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.120 [2024-07-12 17:13:18.658205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:11393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.120 [2024-07-12 17:13:18.658235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.120 [2024-07-12 17:13:18.670156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.121 [2024-07-12 17:13:18.670184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.121 [2024-07-12 17:13:18.670222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.121 [2024-07-12 17:13:18.682152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.121 [2024-07-12 17:13:18.682179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:6944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.121 [2024-07-12 17:13:18.682210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.121 [2024-07-12 17:13:18.693928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.121 [2024-07-12 17:13:18.693957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:12268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.121 [2024-07-12 17:13:18.693989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.121 [2024-07-12 17:13:18.705351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.121 [2024-07-12 17:13:18.705378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.121 [2024-07-12 17:13:18.705414] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.121 [2024-07-12 17:13:18.717699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.121 [2024-07-12 17:13:18.717749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.121 [2024-07-12 17:13:18.717768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.121 [2024-07-12 17:13:18.729362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.121 [2024-07-12 17:13:18.729389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.121 [2024-07-12 17:13:18.729420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.121 [2024-07-12 17:13:18.741323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.121 [2024-07-12 17:13:18.741350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.121 [2024-07-12 17:13:18.741380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.121 [2024-07-12 17:13:18.754629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.121 [2024-07-12 17:13:18.754655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.121 [2024-07-12 17:13:18.754686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.121 [2024-07-12 17:13:18.766038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.121 [2024-07-12 17:13:18.766065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:23214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.121 [2024-07-12 17:13:18.766081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.121 [2024-07-12 17:13:18.777589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.121 [2024-07-12 17:13:18.777616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:18974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.121 [2024-07-12 17:13:18.777647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.121 [2024-07-12 17:13:18.790437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.121 [2024-07-12 17:13:18.790464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:19.121 [2024-07-12 17:13:18.790494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.121 [2024-07-12 17:13:18.800348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.121 [2024-07-12 17:13:18.800374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.121 [2024-07-12 17:13:18.800405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.121 [2024-07-12 17:13:18.812463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.121 [2024-07-12 17:13:18.812497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:12122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.121 [2024-07-12 17:13:18.812515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.378 [2024-07-12 17:13:18.824581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.378 [2024-07-12 17:13:18.824608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.378 [2024-07-12 17:13:18.824638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.378 [2024-07-12 17:13:18.834828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.378 [2024-07-12 17:13:18.834855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.378 [2024-07-12 17:13:18.834887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.378 [2024-07-12 17:13:18.846839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.378 [2024-07-12 17:13:18.846866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:24589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.378 [2024-07-12 17:13:18.846897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.378 [2024-07-12 17:13:18.857398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.378 [2024-07-12 17:13:18.857425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:14246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.378 [2024-07-12 17:13:18.857455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.379 [2024-07-12 17:13:18.869779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.379 [2024-07-12 17:13:18.869806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 
lba:10579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.379 [2024-07-12 17:13:18.869838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.379 [2024-07-12 17:13:18.882099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.379 [2024-07-12 17:13:18.882125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.379 [2024-07-12 17:13:18.882155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.379 [2024-07-12 17:13:18.894703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.379 [2024-07-12 17:13:18.894754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.379 [2024-07-12 17:13:18.894771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.379 [2024-07-12 17:13:18.905709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.379 [2024-07-12 17:13:18.905757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:21725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.379 [2024-07-12 17:13:18.905773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.379 [2024-07-12 17:13:18.921027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.379 [2024-07-12 17:13:18.921069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.379 [2024-07-12 17:13:18.921085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.379 [2024-07-12 17:13:18.934935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.379 [2024-07-12 17:13:18.934962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.379 [2024-07-12 17:13:18.934993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.379 [2024-07-12 17:13:18.945264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.379 [2024-07-12 17:13:18.945290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:24403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.379 [2024-07-12 17:13:18.945321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.379 [2024-07-12 17:13:18.958965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.379 [2024-07-12 17:13:18.958993] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:5053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.379 [2024-07-12 17:13:18.959024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.379 [2024-07-12 17:13:18.970234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.379 [2024-07-12 17:13:18.970262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.379 [2024-07-12 17:13:18.970292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.379 [2024-07-12 17:13:18.980791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.379 [2024-07-12 17:13:18.980819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.379 [2024-07-12 17:13:18.980851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.379 [2024-07-12 17:13:18.995987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.379 [2024-07-12 17:13:18.996029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.379 [2024-07-12 17:13:18.996045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.379 [2024-07-12 17:13:19.010201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.379 [2024-07-12 17:13:19.010229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:18038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.379 [2024-07-12 17:13:19.010261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.379 [2024-07-12 17:13:19.021010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.379 [2024-07-12 17:13:19.021038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.379 [2024-07-12 17:13:19.021059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.379 [2024-07-12 17:13:19.034585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.379 [2024-07-12 17:13:19.034612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:7384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.379 [2024-07-12 17:13:19.034642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.379 [2024-07-12 17:13:19.049511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.379 
[2024-07-12 17:13:19.049538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.379 [2024-07-12 17:13:19.049569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.379 [2024-07-12 17:13:19.062908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.379 [2024-07-12 17:13:19.062949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:9892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.379 [2024-07-12 17:13:19.062966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.637 [2024-07-12 17:13:19.074335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.637 [2024-07-12 17:13:19.074361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.637 [2024-07-12 17:13:19.074391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.637 [2024-07-12 17:13:19.089632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.637 [2024-07-12 17:13:19.089659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.637 [2024-07-12 17:13:19.089689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.637 [2024-07-12 17:13:19.099585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.637 [2024-07-12 17:13:19.099611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:24237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.637 [2024-07-12 17:13:19.099642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.637 [2024-07-12 17:13:19.113455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.637 [2024-07-12 17:13:19.113481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.637 [2024-07-12 17:13:19.113512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.637 [2024-07-12 17:13:19.128257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.637 [2024-07-12 17:13:19.128284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.637 [2024-07-12 17:13:19.128314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.637 [2024-07-12 17:13:19.142195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x9c2280) 00:24:19.637 [2024-07-12 17:13:19.142226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.637 [2024-07-12 17:13:19.142257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.637 [2024-07-12 17:13:19.152694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.637 [2024-07-12 17:13:19.152735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.637 [2024-07-12 17:13:19.152761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.637 [2024-07-12 17:13:19.168693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.637 [2024-07-12 17:13:19.168734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:17153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.637 [2024-07-12 17:13:19.168758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.637 [2024-07-12 17:13:19.183515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.637 [2024-07-12 17:13:19.183543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:18879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.637 [2024-07-12 17:13:19.183573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.637 [2024-07-12 17:13:19.198411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.637 [2024-07-12 17:13:19.198438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.637 [2024-07-12 17:13:19.198469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.637 [2024-07-12 17:13:19.210325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.637 [2024-07-12 17:13:19.210352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.637 [2024-07-12 17:13:19.210382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.637 [2024-07-12 17:13:19.222143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.637 [2024-07-12 17:13:19.222183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:20893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.637 [2024-07-12 17:13:19.222198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.637 [2024-07-12 17:13:19.232146] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.637 [2024-07-12 17:13:19.232173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.637 [2024-07-12 17:13:19.232204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.637 [2024-07-12 17:13:19.244173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.637 [2024-07-12 17:13:19.244200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.637 [2024-07-12 17:13:19.244230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.637 [2024-07-12 17:13:19.257169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.637 [2024-07-12 17:13:19.257196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:22454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.637 [2024-07-12 17:13:19.257226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.637 [2024-07-12 17:13:19.266951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.637 [2024-07-12 17:13:19.266978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:22149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.637 [2024-07-12 17:13:19.267009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.637 [2024-07-12 17:13:19.281325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.637 [2024-07-12 17:13:19.281351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:10831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.637 [2024-07-12 17:13:19.281382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.637 [2024-07-12 17:13:19.296294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.637 [2024-07-12 17:13:19.296320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.637 [2024-07-12 17:13:19.296351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.637 [2024-07-12 17:13:19.311488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.637 [2024-07-12 17:13:19.311515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:16436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.637 [2024-07-12 17:13:19.311546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:24:19.637 [2024-07-12 17:13:19.326400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.637 [2024-07-12 17:13:19.326426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:13956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.637 [2024-07-12 17:13:19.326456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.895 [2024-07-12 17:13:19.340714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.895 [2024-07-12 17:13:19.340763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.895 [2024-07-12 17:13:19.340779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.895 [2024-07-12 17:13:19.357100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.895 [2024-07-12 17:13:19.357127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:10411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.895 [2024-07-12 17:13:19.357157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.895 [2024-07-12 17:13:19.371781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.895 [2024-07-12 17:13:19.371809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.895 [2024-07-12 17:13:19.371845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.895 [2024-07-12 17:13:19.385022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.895 [2024-07-12 17:13:19.385064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:14809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.895 [2024-07-12 17:13:19.385079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.895 [2024-07-12 17:13:19.395886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.895 [2024-07-12 17:13:19.395912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.895 [2024-07-12 17:13:19.395943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.895 [2024-07-12 17:13:19.411563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.895 [2024-07-12 17:13:19.411589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:17145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.895 [2024-07-12 17:13:19.411620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.895 [2024-07-12 17:13:19.421282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.895 [2024-07-12 17:13:19.421309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.895 [2024-07-12 17:13:19.421340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.895 [2024-07-12 17:13:19.435355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.895 [2024-07-12 17:13:19.435382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.895 [2024-07-12 17:13:19.435413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.895 [2024-07-12 17:13:19.450783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.895 [2024-07-12 17:13:19.450811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:14926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.895 [2024-07-12 17:13:19.450843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.895 [2024-07-12 17:13:19.465843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.895 [2024-07-12 17:13:19.465870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.895 [2024-07-12 17:13:19.465901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.895 [2024-07-12 17:13:19.480490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.895 [2024-07-12 17:13:19.480517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:16531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.895 [2024-07-12 17:13:19.480547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.895 [2024-07-12 17:13:19.490179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.895 [2024-07-12 17:13:19.490206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.895 [2024-07-12 17:13:19.490235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.895 [2024-07-12 17:13:19.504595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.895 [2024-07-12 17:13:19.504622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.895 [2024-07-12 17:13:19.504652] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.895 [2024-07-12 17:13:19.517056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.895 [2024-07-12 17:13:19.517083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.895 [2024-07-12 17:13:19.517097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.895 [2024-07-12 17:13:19.526772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.895 [2024-07-12 17:13:19.526799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:19193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.895 [2024-07-12 17:13:19.526830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.895 [2024-07-12 17:13:19.539415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.895 [2024-07-12 17:13:19.539442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.895 [2024-07-12 17:13:19.539472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.895 [2024-07-12 17:13:19.550064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.895 [2024-07-12 17:13:19.550090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.895 [2024-07-12 17:13:19.550120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.895 [2024-07-12 17:13:19.562708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.895 [2024-07-12 17:13:19.562758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.895 [2024-07-12 17:13:19.562775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.895 [2024-07-12 17:13:19.576047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.895 [2024-07-12 17:13:19.576090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:21933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.895 [2024-07-12 17:13:19.576106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.895 [2024-07-12 17:13:19.587146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:19.895 [2024-07-12 17:13:19.587175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:23971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.895 [2024-07-12 17:13:19.587216] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.153 [2024-07-12 17:13:19.601572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:20.153 [2024-07-12 17:13:19.601599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.153 [2024-07-12 17:13:19.601631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.153 [2024-07-12 17:13:19.611917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:20.153 [2024-07-12 17:13:19.611944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:13791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.153 [2024-07-12 17:13:19.611975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.153 [2024-07-12 17:13:19.625684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:20.153 [2024-07-12 17:13:19.625711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:3249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.153 [2024-07-12 17:13:19.625750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.153 [2024-07-12 17:13:19.640992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:20.153 [2024-07-12 17:13:19.641020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:15759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.153 [2024-07-12 17:13:19.641035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.153 [2024-07-12 17:13:19.651465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:20.153 [2024-07-12 17:13:19.651491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:10227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.153 [2024-07-12 17:13:19.651522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.153 [2024-07-12 17:13:19.666117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:20.153 [2024-07-12 17:13:19.666145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:19640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.153 [2024-07-12 17:13:19.666176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.153 [2024-07-12 17:13:19.680126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:20.153 [2024-07-12 17:13:19.680154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:16374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:20.153 [2024-07-12 17:13:19.680184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.153 [2024-07-12 17:13:19.690364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:20.153 [2024-07-12 17:13:19.690391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.153 [2024-07-12 17:13:19.690423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.153 [2024-07-12 17:13:19.705597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:20.153 [2024-07-12 17:13:19.705630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:4753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.153 [2024-07-12 17:13:19.705661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.153 [2024-07-12 17:13:19.715750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:20.153 [2024-07-12 17:13:19.715778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:11168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.153 [2024-07-12 17:13:19.715795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.153 [2024-07-12 17:13:19.730366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:20.153 [2024-07-12 17:13:19.730393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:24925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.153 [2024-07-12 17:13:19.730422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.153 [2024-07-12 17:13:19.740454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:20.153 [2024-07-12 17:13:19.740480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.153 [2024-07-12 17:13:19.740510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.154 [2024-07-12 17:13:19.754070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:20.154 [2024-07-12 17:13:19.754098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.154 [2024-07-12 17:13:19.754129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.154 [2024-07-12 17:13:19.766845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:20.154 [2024-07-12 17:13:19.766874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 
lba:10717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.154 [2024-07-12 17:13:19.766891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.154 [2024-07-12 17:13:19.777779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:20.154 [2024-07-12 17:13:19.777807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.154 [2024-07-12 17:13:19.777824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.154 [2024-07-12 17:13:19.792470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:20.154 [2024-07-12 17:13:19.792497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.154 [2024-07-12 17:13:19.792529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.154 [2024-07-12 17:13:19.806542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:20.154 [2024-07-12 17:13:19.806569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.154 [2024-07-12 17:13:19.806600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.154 [2024-07-12 17:13:19.821460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:20.154 [2024-07-12 17:13:19.821488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.154 [2024-07-12 17:13:19.821519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.154 [2024-07-12 17:13:19.836988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:20.154 [2024-07-12 17:13:19.837016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.154 [2024-07-12 17:13:19.837058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.154 [2024-07-12 17:13:19.847094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9c2280) 00:24:20.154 [2024-07-12 17:13:19.847123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:6425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.154 [2024-07-12 17:13:19.847140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.411 00:24:20.411 Latency(us) 00:24:20.411 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.411 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:20.411 nvme0n1 : 
2.01 20216.45 78.97 0.00 0.00 6322.28 3276.80 21359.88
00:24:20.411 ===================================================================================================================
00:24:20.411 Total : 20216.45 78.97 0.00 0.00 6322.28 3276.80 21359.88
00:24:20.411 0
00:24:20.411 17:13:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:24:20.411 17:13:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:24:20.411 17:13:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:24:20.411 17:13:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:20.411 | .driver_specific
00:24:20.411 | .nvme_error
00:24:20.411 | .status_code
00:24:20.411 | .command_transient_transport_error'
00:24:20.669 17:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 158 > 0 ))
00:24:20.669 17:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1221228
00:24:20.669 17:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1221228 ']'
00:24:20.669 17:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1221228
00:24:20.669 17:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:24:20.669 17:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:20.669 17:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1221228
00:24:20.669 17:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:24:20.669 17:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:24:20.669 17:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1221228'
00:24:20.669 killing process with pid 1221228
00:24:20.669 17:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1221228
00:24:20.669 Received shutdown signal, test time was about 2.000000 seconds
00:24:20.669
00:24:20.669 Latency(us)
00:24:20.669 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:20.669 ===================================================================================================================
00:24:20.669 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:20.669 17:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1221228
00:24:20.927 17:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:24:20.927 17:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:24:20.927 17:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:24:20.927 17:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:24:20.927 17:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:24:20.927 17:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1221634
00:24:20.927 17:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1221634 /var/tmp/bperf.sock 00:24:20.927
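The trace above is the pass/fail check for the digest-error run that just finished: bdev_get_iostat is issued over the bperf RPC socket and the jq filter pulls out the per-status-code NVMe error counter, which reports 158 reads completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22); the step passes because that count is non-zero. A minimal sketch of the same check, assuming an SPDK checkout as the working directory and the bperf socket still listening at /var/tmp/bperf.sock; the function name below is illustrative, not the get_transient_errcount helper from host/digest.sh.

    # Hedged stand-in for the get_transient_errcount check shown in the trace above.
    transient_errcount() {
        local bdev=$1
        # bdev_get_iostat exposes driver-specific NVMe error counters; the jq path
        # matches the filter used by the test above.
        ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
    }

    # Any non-zero count means the corrupted data digests were surfaced to the
    # initiator as transient transport errors; this run counted 158 of them.
    (( $(transient_errcount nvme0n1) > 0 )) || echo 'no transient transport errors recorded for nvme0n1'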
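The trace that follows sets up the next stage of the test (run_bperf_err randread 131072 16): a fresh bdevperf is started with -z on /var/tmp/bperf.sock and driven later via bdevperf.py perform_tests, NVMe error-status accounting is enabled with --bdev-retry-count set to -1, the controller is attached with --ddgst so received payloads are CRC32C-verified, and crc32c corruption is injected (accel_error_inject_error -t corrupt -i 32) before the 2-second workload runs. A condensed sketch of that sequence, restating only the calls visible in the trace; rpc_cmd below is a stand-in for the harness helper, whose RPC socket the trace does not show, and paths assume an SPDK checkout as the working directory.

    bperf='./scripts/rpc.py -s /var/tmp/bperf.sock'
    rpc_cmd() { ./scripts/rpc.py "$@"; }   # stand-in; the real helper picks the app's RPC socket

    # Launch bdevperf idle (-z); the harness waits for the socket (waitforlisten) before issuing RPCs
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &

    # Count completions per NVMe status code; --bdev-retry-count -1 as in the trace
    $bperf bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Keep crc32c intact while the controller is attached and the namespace is probed
    rpc_cmd accel_error_inject_error -o crc32c -t disable
    # Attach with data digest enabled so every received payload is CRC32C-checked
    $bperf bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Re-enable crc32c error injection (-t corrupt -i 32), then start the randread run
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests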
17:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:24:20.927 17:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1221634 ']' 00:24:20.927 17:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:20.927 17:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:20.927 17:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:20.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:20.927 17:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:20.927 17:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:20.927 [2024-07-12 17:13:20.429685] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:24:20.927 [2024-07-12 17:13:20.429779] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1221634 ] 00:24:20.927 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:20.927 Zero copy mechanism will not be used. 00:24:20.927 EAL: No free 2048 kB hugepages reported on node 1 00:24:20.927 [2024-07-12 17:13:20.486918] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.927 [2024-07-12 17:13:20.593118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:21.184 17:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:21.184 17:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:21.184 17:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:21.184 17:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:21.441 17:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:21.441 17:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.441 17:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:21.441 17:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.441 17:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:21.441 17:13:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:22.006 nvme0n1 00:24:22.006 17:13:21 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:24:22.006 17:13:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.006 17:13:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:22.006 17:13:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.006 17:13:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:22.006 17:13:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:22.006 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:22.006 Zero copy mechanism will not be used. 00:24:22.006 Running I/O for 2 seconds... 00:24:22.006 [2024-07-12 17:13:21.557080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.006 [2024-07-12 17:13:21.557140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.006 [2024-07-12 17:13:21.557174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.007 [2024-07-12 17:13:21.562284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.007 [2024-07-12 17:13:21.562312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.007 [2024-07-12 17:13:21.562343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.007 [2024-07-12 17:13:21.567517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.007 [2024-07-12 17:13:21.567558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.007 [2024-07-12 17:13:21.567575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.007 [2024-07-12 17:13:21.573338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.007 [2024-07-12 17:13:21.573366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.007 [2024-07-12 17:13:21.573397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.007 [2024-07-12 17:13:21.578613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.007 [2024-07-12 17:13:21.578640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.007 [2024-07-12 17:13:21.578671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.007 [2024-07-12 
17:13:21.583955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.007 [2024-07-12 17:13:21.583986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.007 [2024-07-12 17:13:21.584020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.007 [2024-07-12 17:13:21.589354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.007 [2024-07-12 17:13:21.589401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.007 [2024-07-12 17:13:21.589416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.007 [2024-07-12 17:13:21.594884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.007 [2024-07-12 17:13:21.594913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.007 [2024-07-12 17:13:21.594945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.007 [2024-07-12 17:13:21.600300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.007 [2024-07-12 17:13:21.600327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.007 [2024-07-12 17:13:21.600357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.007 [2024-07-12 17:13:21.605516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.007 [2024-07-12 17:13:21.605556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.007 [2024-07-12 17:13:21.605572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.007 [2024-07-12 17:13:21.610703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.007 [2024-07-12 17:13:21.610757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.007 [2024-07-12 17:13:21.610775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.007 [2024-07-12 17:13:21.616045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.007 [2024-07-12 17:13:21.616072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.007 [2024-07-12 17:13:21.616105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.007 [2024-07-12 17:13:21.622036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.007 [2024-07-12 17:13:21.622064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.007 [2024-07-12 17:13:21.622079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.007 [2024-07-12 17:13:21.626648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.007 [2024-07-12 17:13:21.626674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.007 [2024-07-12 17:13:21.626704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.007 [2024-07-12 17:13:21.632013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.007 [2024-07-12 17:13:21.632055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.007 [2024-07-12 17:13:21.632071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.007 [2024-07-12 17:13:21.637307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.007 [2024-07-12 17:13:21.637334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.007 [2024-07-12 17:13:21.637364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.007 [2024-07-12 17:13:21.642867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.007 [2024-07-12 17:13:21.642895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.007 [2024-07-12 17:13:21.642926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.007 [2024-07-12 17:13:21.648086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.007 [2024-07-12 17:13:21.648112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.007 [2024-07-12 17:13:21.648143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.007 [2024-07-12 17:13:21.653294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.007 [2024-07-12 17:13:21.653320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.007 [2024-07-12 17:13:21.653350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.007 [2024-07-12 17:13:21.658456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.007 [2024-07-12 17:13:21.658482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.007 [2024-07-12 17:13:21.658512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.007 [2024-07-12 17:13:21.663734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.007 [2024-07-12 17:13:21.663769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.007 [2024-07-12 17:13:21.663800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.007 [2024-07-12 17:13:21.669072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.007 [2024-07-12 17:13:21.669115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.007 [2024-07-12 17:13:21.669132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.007 [2024-07-12 17:13:21.674456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.007 [2024-07-12 17:13:21.674482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.007 [2024-07-12 17:13:21.674512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.007 [2024-07-12 17:13:21.679864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.007 [2024-07-12 17:13:21.679892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.007 [2024-07-12 17:13:21.679928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.007 [2024-07-12 17:13:21.685241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.007 [2024-07-12 17:13:21.685268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.007 [2024-07-12 17:13:21.685298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.007 [2024-07-12 17:13:21.690390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.007 [2024-07-12 17:13:21.690416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.007 [2024-07-12 17:13:21.690447] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.007 [2024-07-12 17:13:21.695530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.007 [2024-07-12 17:13:21.695557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.007 [2024-07-12 17:13:21.695587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.266 [2024-07-12 17:13:21.702013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.266 [2024-07-12 17:13:21.702054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.266 [2024-07-12 17:13:21.702069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.266 [2024-07-12 17:13:21.707083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.266 [2024-07-12 17:13:21.707124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.266 [2024-07-12 17:13:21.707139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.266 [2024-07-12 17:13:21.712444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.266 [2024-07-12 17:13:21.712470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.266 [2024-07-12 17:13:21.712500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.266 [2024-07-12 17:13:21.717800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.266 [2024-07-12 17:13:21.717838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.266 [2024-07-12 17:13:21.717870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.266 [2024-07-12 17:13:21.723108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.266 [2024-07-12 17:13:21.723134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.266 [2024-07-12 17:13:21.723174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.266 [2024-07-12 17:13:21.728447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.266 [2024-07-12 17:13:21.728477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:22.266 [2024-07-12 17:13:21.728508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.266 [2024-07-12 17:13:21.733804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.266 [2024-07-12 17:13:21.733831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.266 [2024-07-12 17:13:21.733861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.266 [2024-07-12 17:13:21.739016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.266 [2024-07-12 17:13:21.739049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.266 [2024-07-12 17:13:21.739078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.266 [2024-07-12 17:13:21.744352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.266 [2024-07-12 17:13:21.744377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.266 [2024-07-12 17:13:21.744407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.266 [2024-07-12 17:13:21.749522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.266 [2024-07-12 17:13:21.749550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.266 [2024-07-12 17:13:21.749580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.266 [2024-07-12 17:13:21.754734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.266 [2024-07-12 17:13:21.754777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.266 [2024-07-12 17:13:21.754809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.266 [2024-07-12 17:13:21.760151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.266 [2024-07-12 17:13:21.760177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.266 [2024-07-12 17:13:21.760207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.266 [2024-07-12 17:13:21.765391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.266 [2024-07-12 17:13:21.765417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.266 [2024-07-12 17:13:21.765457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.266 [2024-07-12 17:13:21.770766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.266 [2024-07-12 17:13:21.770794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.266 [2024-07-12 17:13:21.770829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.266 [2024-07-12 17:13:21.776180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.266 [2024-07-12 17:13:21.776213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.266 [2024-07-12 17:13:21.776243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.266 [2024-07-12 17:13:21.781584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.266 [2024-07-12 17:13:21.781609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.266 [2024-07-12 17:13:21.781640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.266 [2024-07-12 17:13:21.786866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.266 [2024-07-12 17:13:21.786893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.266 [2024-07-12 17:13:21.786924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.266 [2024-07-12 17:13:21.792127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.266 [2024-07-12 17:13:21.792153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.266 [2024-07-12 17:13:21.792183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.266 [2024-07-12 17:13:21.797455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.266 [2024-07-12 17:13:21.797482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.266 [2024-07-12 17:13:21.797513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.266 [2024-07-12 17:13:21.802841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.266 [2024-07-12 17:13:21.802868] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.266 [2024-07-12 17:13:21.802898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.266 [2024-07-12 17:13:21.808089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.266 [2024-07-12 17:13:21.808115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.266 [2024-07-12 17:13:21.808145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.266 [2024-07-12 17:13:21.813558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.266 [2024-07-12 17:13:21.813585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.266 [2024-07-12 17:13:21.813616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.266 [2024-07-12 17:13:21.818756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.266 [2024-07-12 17:13:21.818787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.266 [2024-07-12 17:13:21.818828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.266 [2024-07-12 17:13:21.824081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.266 [2024-07-12 17:13:21.824107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.266 [2024-07-12 17:13:21.824136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.267 [2024-07-12 17:13:21.829251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.267 [2024-07-12 17:13:21.829286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.267 [2024-07-12 17:13:21.829317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.267 [2024-07-12 17:13:21.835305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.267 [2024-07-12 17:13:21.835331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.267 [2024-07-12 17:13:21.835361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.267 [2024-07-12 17:13:21.839934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 
00:24:22.267 [2024-07-12 17:13:21.839961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.267 [2024-07-12 17:13:21.839992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.267 [2024-07-12 17:13:21.845472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.267 [2024-07-12 17:13:21.845498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.267 [2024-07-12 17:13:21.845529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.267 [2024-07-12 17:13:21.850824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.267 [2024-07-12 17:13:21.850860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.267 [2024-07-12 17:13:21.850892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.267 [2024-07-12 17:13:21.856265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.267 [2024-07-12 17:13:21.856302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.267 [2024-07-12 17:13:21.856331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.267 [2024-07-12 17:13:21.861610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.267 [2024-07-12 17:13:21.861636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.267 [2024-07-12 17:13:21.861667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.267 [2024-07-12 17:13:21.866851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.267 [2024-07-12 17:13:21.866878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.267 [2024-07-12 17:13:21.866916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.267 [2024-07-12 17:13:21.872070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.267 [2024-07-12 17:13:21.872100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.267 [2024-07-12 17:13:21.872131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.267 [2024-07-12 17:13:21.877348] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.267 [2024-07-12 17:13:21.877373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.267 [2024-07-12 17:13:21.877403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.267 [2024-07-12 17:13:21.882702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.267 [2024-07-12 17:13:21.882730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.267 [2024-07-12 17:13:21.882767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.267 [2024-07-12 17:13:21.888097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.267 [2024-07-12 17:13:21.888123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.267 [2024-07-12 17:13:21.888154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.267 [2024-07-12 17:13:21.893309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.267 [2024-07-12 17:13:21.893335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.267 [2024-07-12 17:13:21.893365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.267 [2024-07-12 17:13:21.899435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.267 [2024-07-12 17:13:21.899462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.267 [2024-07-12 17:13:21.899493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.267 [2024-07-12 17:13:21.904029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.267 [2024-07-12 17:13:21.904070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.267 [2024-07-12 17:13:21.904094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.267 [2024-07-12 17:13:21.909362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.267 [2024-07-12 17:13:21.909388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.267 [2024-07-12 17:13:21.909428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:24:22.267 [2024-07-12 17:13:21.914541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.267 [2024-07-12 17:13:21.914568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.267 [2024-07-12 17:13:21.914598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.267 [2024-07-12 17:13:21.919733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.267 [2024-07-12 17:13:21.919766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.267 [2024-07-12 17:13:21.919781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.267 [2024-07-12 17:13:21.924902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.267 [2024-07-12 17:13:21.924930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.267 [2024-07-12 17:13:21.924961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.267 [2024-07-12 17:13:21.930173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.267 [2024-07-12 17:13:21.930201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.267 [2024-07-12 17:13:21.930230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.267 [2024-07-12 17:13:21.935672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.267 [2024-07-12 17:13:21.935699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.267 [2024-07-12 17:13:21.935730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.267 [2024-07-12 17:13:21.941261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.267 [2024-07-12 17:13:21.941287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.267 [2024-07-12 17:13:21.941317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.267 [2024-07-12 17:13:21.946549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.267 [2024-07-12 17:13:21.946575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.267 [2024-07-12 17:13:21.946605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.267 [2024-07-12 17:13:21.952568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.267 [2024-07-12 17:13:21.952594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.267 [2024-07-12 17:13:21.952625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.267 [2024-07-12 17:13:21.957354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.267 [2024-07-12 17:13:21.957384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.267 [2024-07-12 17:13:21.957401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.525 [2024-07-12 17:13:21.962847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.525 [2024-07-12 17:13:21.962889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.525 [2024-07-12 17:13:21.962905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.525 [2024-07-12 17:13:21.968256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.525 [2024-07-12 17:13:21.968288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.525 [2024-07-12 17:13:21.968318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.525 [2024-07-12 17:13:21.974776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.525 [2024-07-12 17:13:21.974807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.525 [2024-07-12 17:13:21.974825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.525 [2024-07-12 17:13:21.982822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.525 [2024-07-12 17:13:21.982855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.525 [2024-07-12 17:13:21.982873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.525 [2024-07-12 17:13:21.990758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.525 [2024-07-12 17:13:21.990790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.525 [2024-07-12 17:13:21.990807] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.525 [2024-07-12 17:13:21.999472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.525 [2024-07-12 17:13:21.999500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.525 [2024-07-12 17:13:21.999531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.525 [2024-07-12 17:13:22.006808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.525 [2024-07-12 17:13:22.006839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.525 [2024-07-12 17:13:22.006871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.525 [2024-07-12 17:13:22.015437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.525 [2024-07-12 17:13:22.015466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.525 [2024-07-12 17:13:22.015502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.525 [2024-07-12 17:13:22.023458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.525 [2024-07-12 17:13:22.023487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.526 [2024-07-12 17:13:22.023518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.526 [2024-07-12 17:13:22.031884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.526 [2024-07-12 17:13:22.031914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.526 [2024-07-12 17:13:22.031945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.526 [2024-07-12 17:13:22.040121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.526 [2024-07-12 17:13:22.040159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.526 [2024-07-12 17:13:22.040190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.526 [2024-07-12 17:13:22.049131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.526 [2024-07-12 17:13:22.049165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:22.526 [2024-07-12 17:13:22.049197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.526 [2024-07-12 17:13:22.056813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.526 [2024-07-12 17:13:22.056842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.526 [2024-07-12 17:13:22.056873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.526 [2024-07-12 17:13:22.065180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.526 [2024-07-12 17:13:22.065209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.526 [2024-07-12 17:13:22.065240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.526 [2024-07-12 17:13:22.074154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.526 [2024-07-12 17:13:22.074183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.526 [2024-07-12 17:13:22.074214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.526 [2024-07-12 17:13:22.082846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.526 [2024-07-12 17:13:22.082875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.526 [2024-07-12 17:13:22.082906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.526 [2024-07-12 17:13:22.090430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.526 [2024-07-12 17:13:22.090466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.526 [2024-07-12 17:13:22.090501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.526 [2024-07-12 17:13:22.097088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.526 [2024-07-12 17:13:22.097116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.526 [2024-07-12 17:13:22.097155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.526 [2024-07-12 17:13:22.103246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.526 [2024-07-12 17:13:22.103284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20160 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.526 [2024-07-12 17:13:22.103315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.526 [2024-07-12 17:13:22.109989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.526 [2024-07-12 17:13:22.110018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.526 [2024-07-12 17:13:22.110051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.526 [2024-07-12 17:13:22.116452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.526 [2024-07-12 17:13:22.116480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.526 [2024-07-12 17:13:22.116511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.526 [2024-07-12 17:13:22.123505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.526 [2024-07-12 17:13:22.123533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.526 [2024-07-12 17:13:22.123564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.526 [2024-07-12 17:13:22.130253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.526 [2024-07-12 17:13:22.130281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.526 [2024-07-12 17:13:22.130315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.526 [2024-07-12 17:13:22.137137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.526 [2024-07-12 17:13:22.137175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.526 [2024-07-12 17:13:22.137206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.526 [2024-07-12 17:13:22.143704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.526 [2024-07-12 17:13:22.143753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.526 [2024-07-12 17:13:22.143795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.526 [2024-07-12 17:13:22.149883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.526 [2024-07-12 17:13:22.149926] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.526 [2024-07-12 17:13:22.149943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.526 [2024-07-12 17:13:22.155172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.526 [2024-07-12 17:13:22.155200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.526 [2024-07-12 17:13:22.155230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.526 [2024-07-12 17:13:22.161264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.526 [2024-07-12 17:13:22.161292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.526 [2024-07-12 17:13:22.161322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.526 [2024-07-12 17:13:22.168131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.526 [2024-07-12 17:13:22.168160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.526 [2024-07-12 17:13:22.168189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.526 [2024-07-12 17:13:22.176045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.526 [2024-07-12 17:13:22.176089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.526 [2024-07-12 17:13:22.176105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.526 [2024-07-12 17:13:22.180384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.526 [2024-07-12 17:13:22.180411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.526 [2024-07-12 17:13:22.180442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.526 [2024-07-12 17:13:22.188193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.526 [2024-07-12 17:13:22.188220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.526 [2024-07-12 17:13:22.188261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.526 [2024-07-12 17:13:22.196191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.526 [2024-07-12 17:13:22.196218] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.526 [2024-07-12 17:13:22.196249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.526 [2024-07-12 17:13:22.203796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.526 [2024-07-12 17:13:22.203826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.526 [2024-07-12 17:13:22.203864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.526 [2024-07-12 17:13:22.212240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.526 [2024-07-12 17:13:22.212271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.526 [2024-07-12 17:13:22.212303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.784 [2024-07-12 17:13:22.220281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.784 [2024-07-12 17:13:22.220309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.785 [2024-07-12 17:13:22.220334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.785 [2024-07-12 17:13:22.228127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.785 [2024-07-12 17:13:22.228155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.785 [2024-07-12 17:13:22.228186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.785 [2024-07-12 17:13:22.235704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.785 [2024-07-12 17:13:22.235760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.785 [2024-07-12 17:13:22.235777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.785 [2024-07-12 17:13:22.243428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.785 [2024-07-12 17:13:22.243456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.785 [2024-07-12 17:13:22.243486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.785 [2024-07-12 17:13:22.251245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1bd1d60) 00:24:22.785 [2024-07-12 17:13:22.251278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.785 [2024-07-12 17:13:22.251308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.785 [2024-07-12 17:13:22.258966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.785 [2024-07-12 17:13:22.258994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.785 [2024-07-12 17:13:22.259036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.785 [2024-07-12 17:13:22.266398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.785 [2024-07-12 17:13:22.266425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.785 [2024-07-12 17:13:22.266456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.785 [2024-07-12 17:13:22.273777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.785 [2024-07-12 17:13:22.273823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.785 [2024-07-12 17:13:22.273854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.785 [2024-07-12 17:13:22.281323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.785 [2024-07-12 17:13:22.281350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.785 [2024-07-12 17:13:22.281380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.785 [2024-07-12 17:13:22.287467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.785 [2024-07-12 17:13:22.287493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.785 [2024-07-12 17:13:22.287524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.785 [2024-07-12 17:13:22.292857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.785 [2024-07-12 17:13:22.292884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.785 [2024-07-12 17:13:22.292915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.785 [2024-07-12 17:13:22.298258] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.785 [2024-07-12 17:13:22.298284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.785 [2024-07-12 17:13:22.298314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.785 [2024-07-12 17:13:22.304131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.785 [2024-07-12 17:13:22.304162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.785 [2024-07-12 17:13:22.304191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.785 [2024-07-12 17:13:22.309659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.785 [2024-07-12 17:13:22.309686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.785 [2024-07-12 17:13:22.309715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.785 [2024-07-12 17:13:22.315442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.785 [2024-07-12 17:13:22.315468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.785 [2024-07-12 17:13:22.315498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.785 [2024-07-12 17:13:22.321180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.785 [2024-07-12 17:13:22.321207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.785 [2024-07-12 17:13:22.321237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.785 [2024-07-12 17:13:22.326963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.785 [2024-07-12 17:13:22.326991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.785 [2024-07-12 17:13:22.327031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.785 [2024-07-12 17:13:22.332732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.785 [2024-07-12 17:13:22.332780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.785 [2024-07-12 17:13:22.332812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:24:22.785 [2024-07-12 17:13:22.338385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.785 [2024-07-12 17:13:22.338412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.785 [2024-07-12 17:13:22.338442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.785 [2024-07-12 17:13:22.343662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.785 [2024-07-12 17:13:22.343688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.785 [2024-07-12 17:13:22.343718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.785 [2024-07-12 17:13:22.349301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.785 [2024-07-12 17:13:22.349327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.785 [2024-07-12 17:13:22.349356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.785 [2024-07-12 17:13:22.354493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.785 [2024-07-12 17:13:22.354519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.785 [2024-07-12 17:13:22.354550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.785 [2024-07-12 17:13:22.360341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.785 [2024-07-12 17:13:22.360368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.785 [2024-07-12 17:13:22.360399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.785 [2024-07-12 17:13:22.365873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.785 [2024-07-12 17:13:22.365900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.785 [2024-07-12 17:13:22.365931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.785 [2024-07-12 17:13:22.370990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.785 [2024-07-12 17:13:22.371029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.785 [2024-07-12 17:13:22.371060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.785 [2024-07-12 17:13:22.376345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.785 [2024-07-12 17:13:22.376372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.785 [2024-07-12 17:13:22.376402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.785 [2024-07-12 17:13:22.381871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.785 [2024-07-12 17:13:22.381897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.785 [2024-07-12 17:13:22.381928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.785 [2024-07-12 17:13:22.387410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.785 [2024-07-12 17:13:22.387445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.785 [2024-07-12 17:13:22.387475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.785 [2024-07-12 17:13:22.392982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.785 [2024-07-12 17:13:22.393009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.785 [2024-07-12 17:13:22.393040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.785 [2024-07-12 17:13:22.398400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.785 [2024-07-12 17:13:22.398426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.786 [2024-07-12 17:13:22.398456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.786 [2024-07-12 17:13:22.404154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.786 [2024-07-12 17:13:22.404180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.786 [2024-07-12 17:13:22.404210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.786 [2024-07-12 17:13:22.410084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.786 [2024-07-12 17:13:22.410110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.786 [2024-07-12 17:13:22.410140] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.786 [2024-07-12 17:13:22.415304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.786 [2024-07-12 17:13:22.415335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.786 [2024-07-12 17:13:22.415365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.786 [2024-07-12 17:13:22.420661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.786 [2024-07-12 17:13:22.420687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.786 [2024-07-12 17:13:22.420716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.786 [2024-07-12 17:13:22.425861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.786 [2024-07-12 17:13:22.425888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.786 [2024-07-12 17:13:22.425919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.786 [2024-07-12 17:13:22.431454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.786 [2024-07-12 17:13:22.431487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.786 [2024-07-12 17:13:22.431518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.786 [2024-07-12 17:13:22.436555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.786 [2024-07-12 17:13:22.436586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.786 [2024-07-12 17:13:22.436616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.786 [2024-07-12 17:13:22.441983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.786 [2024-07-12 17:13:22.442009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.786 [2024-07-12 17:13:22.442040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.786 [2024-07-12 17:13:22.447661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.786 [2024-07-12 17:13:22.447689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:22.786 [2024-07-12 17:13:22.447718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.786 [2024-07-12 17:13:22.453324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.786 [2024-07-12 17:13:22.453350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.786 [2024-07-12 17:13:22.453380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.786 [2024-07-12 17:13:22.458956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.786 [2024-07-12 17:13:22.458982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.786 [2024-07-12 17:13:22.459014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.786 [2024-07-12 17:13:22.464989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.786 [2024-07-12 17:13:22.465041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.786 [2024-07-12 17:13:22.465063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.786 [2024-07-12 17:13:22.471114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.786 [2024-07-12 17:13:22.471149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.786 [2024-07-12 17:13:22.471180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.786 [2024-07-12 17:13:22.477434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:22.786 [2024-07-12 17:13:22.477461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.786 [2024-07-12 17:13:22.477491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.045 [2024-07-12 17:13:22.483747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.045 [2024-07-12 17:13:22.483788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.045 [2024-07-12 17:13:22.483804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.045 [2024-07-12 17:13:22.489624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.045 [2024-07-12 17:13:22.489650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.045 [2024-07-12 17:13:22.489679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.045 [2024-07-12 17:13:22.495745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.045 [2024-07-12 17:13:22.495782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.045 [2024-07-12 17:13:22.495813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.045 [2024-07-12 17:13:22.502733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.045 [2024-07-12 17:13:22.502787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.045 [2024-07-12 17:13:22.502802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.045 [2024-07-12 17:13:22.509493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.045 [2024-07-12 17:13:22.509519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.045 [2024-07-12 17:13:22.509548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.045 [2024-07-12 17:13:22.516318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.045 [2024-07-12 17:13:22.516344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.045 [2024-07-12 17:13:22.516373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.045 [2024-07-12 17:13:22.523476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.045 [2024-07-12 17:13:22.523507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.045 [2024-07-12 17:13:22.523538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.045 [2024-07-12 17:13:22.530479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.045 [2024-07-12 17:13:22.530505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.045 [2024-07-12 17:13:22.530535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.045 [2024-07-12 17:13:22.537677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.045 [2024-07-12 17:13:22.537702] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.045 [2024-07-12 17:13:22.537731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.045 [2024-07-12 17:13:22.545029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.045 [2024-07-12 17:13:22.545070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.045 [2024-07-12 17:13:22.545085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.045 [2024-07-12 17:13:22.552545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.045 [2024-07-12 17:13:22.552571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.045 [2024-07-12 17:13:22.552601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.045 [2024-07-12 17:13:22.560350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.045 [2024-07-12 17:13:22.560376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.045 [2024-07-12 17:13:22.560405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.045 [2024-07-12 17:13:22.568562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.045 [2024-07-12 17:13:22.568588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.045 [2024-07-12 17:13:22.568619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.045 [2024-07-12 17:13:22.576538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.045 [2024-07-12 17:13:22.576565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.045 [2024-07-12 17:13:22.576595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.045 [2024-07-12 17:13:22.584346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.045 [2024-07-12 17:13:22.584372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.045 [2024-07-12 17:13:22.584402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.045 [2024-07-12 17:13:22.592217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 
00:24:23.045 [2024-07-12 17:13:22.592243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.045 [2024-07-12 17:13:22.592273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.045 [2024-07-12 17:13:22.599543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.045 [2024-07-12 17:13:22.599570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.045 [2024-07-12 17:13:22.599600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.045 [2024-07-12 17:13:22.605306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.045 [2024-07-12 17:13:22.605334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.045 [2024-07-12 17:13:22.605365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.045 [2024-07-12 17:13:22.610678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.045 [2024-07-12 17:13:22.610705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.045 [2024-07-12 17:13:22.610734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.045 [2024-07-12 17:13:22.616006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.045 [2024-07-12 17:13:22.616047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.045 [2024-07-12 17:13:22.616063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.045 [2024-07-12 17:13:22.621330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.045 [2024-07-12 17:13:22.621355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.045 [2024-07-12 17:13:22.621385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.045 [2024-07-12 17:13:22.626666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.045 [2024-07-12 17:13:22.626692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.045 [2024-07-12 17:13:22.626721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.045 [2024-07-12 17:13:22.632693] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.045 [2024-07-12 17:13:22.632752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.045 [2024-07-12 17:13:22.632770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.045 [2024-07-12 17:13:22.639112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.045 [2024-07-12 17:13:22.639136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.045 [2024-07-12 17:13:22.639171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.045 [2024-07-12 17:13:22.645764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.045 [2024-07-12 17:13:22.645792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.046 [2024-07-12 17:13:22.645823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.046 [2024-07-12 17:13:22.651904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.046 [2024-07-12 17:13:22.651931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.046 [2024-07-12 17:13:22.651961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.046 [2024-07-12 17:13:22.657959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.046 [2024-07-12 17:13:22.657987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.046 [2024-07-12 17:13:22.658018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.046 [2024-07-12 17:13:22.664492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.046 [2024-07-12 17:13:22.664518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.046 [2024-07-12 17:13:22.664548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.046 [2024-07-12 17:13:22.671892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.046 [2024-07-12 17:13:22.671919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.046 [2024-07-12 17:13:22.671950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:24:23.046 [2024-07-12 17:13:22.679416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.046 [2024-07-12 17:13:22.679442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.046 [2024-07-12 17:13:22.679472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.046 [2024-07-12 17:13:22.686960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.046 [2024-07-12 17:13:22.686987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.046 [2024-07-12 17:13:22.687017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.046 [2024-07-12 17:13:22.694839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.046 [2024-07-12 17:13:22.694865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.046 [2024-07-12 17:13:22.694896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.046 [2024-07-12 17:13:22.702563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.046 [2024-07-12 17:13:22.702589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.046 [2024-07-12 17:13:22.702619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.046 [2024-07-12 17:13:22.710315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.046 [2024-07-12 17:13:22.710341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.046 [2024-07-12 17:13:22.710371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.046 [2024-07-12 17:13:22.718197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.046 [2024-07-12 17:13:22.718223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.046 [2024-07-12 17:13:22.718253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.046 [2024-07-12 17:13:22.725998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.046 [2024-07-12 17:13:22.726024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.046 [2024-07-12 17:13:22.726039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.046 [2024-07-12 17:13:22.733655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.046 [2024-07-12 17:13:22.733691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.046 [2024-07-12 17:13:22.733722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.316 [2024-07-12 17:13:22.742325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.316 [2024-07-12 17:13:22.742354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.316 [2024-07-12 17:13:22.742370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.316 [2024-07-12 17:13:22.750258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.316 [2024-07-12 17:13:22.750283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.316 [2024-07-12 17:13:22.750313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.316 [2024-07-12 17:13:22.758157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.316 [2024-07-12 17:13:22.758184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.316 [2024-07-12 17:13:22.758213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.316 [2024-07-12 17:13:22.766197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.316 [2024-07-12 17:13:22.766224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.316 [2024-07-12 17:13:22.766262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.316 [2024-07-12 17:13:22.774182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.316 [2024-07-12 17:13:22.774207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.316 [2024-07-12 17:13:22.774237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.316 [2024-07-12 17:13:22.781528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.316 [2024-07-12 17:13:22.781554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.316 [2024-07-12 17:13:22.781584] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.316 [2024-07-12 17:13:22.789100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.316 [2024-07-12 17:13:22.789126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.316 [2024-07-12 17:13:22.789155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.316 [2024-07-12 17:13:22.796814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.316 [2024-07-12 17:13:22.796841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.316 [2024-07-12 17:13:22.796871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.316 [2024-07-12 17:13:22.804779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.316 [2024-07-12 17:13:22.804806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.316 [2024-07-12 17:13:22.804837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.316 [2024-07-12 17:13:22.812633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.316 [2024-07-12 17:13:22.812659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.316 [2024-07-12 17:13:22.812688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.316 [2024-07-12 17:13:22.820395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.316 [2024-07-12 17:13:22.820420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.316 [2024-07-12 17:13:22.820450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.316 [2024-07-12 17:13:22.828554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.316 [2024-07-12 17:13:22.828579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.316 [2024-07-12 17:13:22.828609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.316 [2024-07-12 17:13:22.836608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.316 [2024-07-12 17:13:22.836639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.316 
[2024-07-12 17:13:22.836670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.316 [2024-07-12 17:13:22.844577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.316 [2024-07-12 17:13:22.844603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.316 [2024-07-12 17:13:22.844633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.316 [2024-07-12 17:13:22.852702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.316 [2024-07-12 17:13:22.852752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.316 [2024-07-12 17:13:22.852769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.316 [2024-07-12 17:13:22.860561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.316 [2024-07-12 17:13:22.860587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.316 [2024-07-12 17:13:22.860617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.316 [2024-07-12 17:13:22.868512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.316 [2024-07-12 17:13:22.868539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.316 [2024-07-12 17:13:22.868569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.316 [2024-07-12 17:13:22.876331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.316 [2024-07-12 17:13:22.876358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.316 [2024-07-12 17:13:22.876389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.316 [2024-07-12 17:13:22.884134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.316 [2024-07-12 17:13:22.884159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.316 [2024-07-12 17:13:22.884188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.316 [2024-07-12 17:13:22.891901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.316 [2024-07-12 17:13:22.891927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9344 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.316 [2024-07-12 17:13:22.891958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.316 [2024-07-12 17:13:22.899681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.316 [2024-07-12 17:13:22.899707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.316 [2024-07-12 17:13:22.899743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.316 [2024-07-12 17:13:22.907614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.316 [2024-07-12 17:13:22.907640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.316 [2024-07-12 17:13:22.907670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.316 [2024-07-12 17:13:22.915642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.316 [2024-07-12 17:13:22.915667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.316 [2024-07-12 17:13:22.915697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.316 [2024-07-12 17:13:22.923595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.317 [2024-07-12 17:13:22.923621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.317 [2024-07-12 17:13:22.923652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.317 [2024-07-12 17:13:22.931413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.317 [2024-07-12 17:13:22.931438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.317 [2024-07-12 17:13:22.931468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.317 [2024-07-12 17:13:22.939982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.317 [2024-07-12 17:13:22.940008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.317 [2024-07-12 17:13:22.940039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.317 [2024-07-12 17:13:22.948579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.317 [2024-07-12 17:13:22.948606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:4 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.317 [2024-07-12 17:13:22.948636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.317 [2024-07-12 17:13:22.956553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.317 [2024-07-12 17:13:22.956579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.317 [2024-07-12 17:13:22.956609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.317 [2024-07-12 17:13:22.964377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.317 [2024-07-12 17:13:22.964403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.317 [2024-07-12 17:13:22.964432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.317 [2024-07-12 17:13:22.972271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.317 [2024-07-12 17:13:22.972296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.317 [2024-07-12 17:13:22.972331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.317 [2024-07-12 17:13:22.980119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.317 [2024-07-12 17:13:22.980146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.317 [2024-07-12 17:13:22.980175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.317 [2024-07-12 17:13:22.988082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.317 [2024-07-12 17:13:22.988108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.317 [2024-07-12 17:13:22.988137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.317 [2024-07-12 17:13:22.996083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.317 [2024-07-12 17:13:22.996111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.317 [2024-07-12 17:13:22.996143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.317 [2024-07-12 17:13:23.004276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.317 [2024-07-12 17:13:23.004302] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.317 [2024-07-12 17:13:23.004332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.580 [2024-07-12 17:13:23.012728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.580 [2024-07-12 17:13:23.012766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.580 [2024-07-12 17:13:23.012799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.580 [2024-07-12 17:13:23.021083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.580 [2024-07-12 17:13:23.021109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.580 [2024-07-12 17:13:23.021139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.580 [2024-07-12 17:13:23.026983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.580 [2024-07-12 17:13:23.027010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.580 [2024-07-12 17:13:23.027040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.580 [2024-07-12 17:13:23.032476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.580 [2024-07-12 17:13:23.032501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.580 [2024-07-12 17:13:23.032531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.580 [2024-07-12 17:13:23.038005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.581 [2024-07-12 17:13:23.038037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.581 [2024-07-12 17:13:23.038053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.581 [2024-07-12 17:13:23.043369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.581 [2024-07-12 17:13:23.043397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.581 [2024-07-12 17:13:23.043429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.581 [2024-07-12 17:13:23.048780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1bd1d60) 00:24:23.581 [2024-07-12 17:13:23.048808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.581 [2024-07-12 17:13:23.048838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.581 [2024-07-12 17:13:23.054159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.581 [2024-07-12 17:13:23.054187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.581 [2024-07-12 17:13:23.054219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.581 [2024-07-12 17:13:23.059552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.581 [2024-07-12 17:13:23.059578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.581 [2024-07-12 17:13:23.059609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.581 [2024-07-12 17:13:23.065383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.581 [2024-07-12 17:13:23.065410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.581 [2024-07-12 17:13:23.065439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.581 [2024-07-12 17:13:23.070910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.581 [2024-07-12 17:13:23.070936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.581 [2024-07-12 17:13:23.070967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.581 [2024-07-12 17:13:23.076806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.581 [2024-07-12 17:13:23.076832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.581 [2024-07-12 17:13:23.076862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.581 [2024-07-12 17:13:23.082393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.581 [2024-07-12 17:13:23.082419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.581 [2024-07-12 17:13:23.082449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.581 [2024-07-12 17:13:23.087896] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.581 [2024-07-12 17:13:23.087924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.581 [2024-07-12 17:13:23.087954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.581 [2024-07-12 17:13:23.093274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.581 [2024-07-12 17:13:23.093300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.581 [2024-07-12 17:13:23.093330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.581 [2024-07-12 17:13:23.098731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.581 [2024-07-12 17:13:23.098778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.581 [2024-07-12 17:13:23.098794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.581 [2024-07-12 17:13:23.104486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.581 [2024-07-12 17:13:23.104511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.581 [2024-07-12 17:13:23.104541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.581 [2024-07-12 17:13:23.110188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.581 [2024-07-12 17:13:23.110215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.581 [2024-07-12 17:13:23.110244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.581 [2024-07-12 17:13:23.116016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.581 [2024-07-12 17:13:23.116058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.581 [2024-07-12 17:13:23.116075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.581 [2024-07-12 17:13:23.121936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.581 [2024-07-12 17:13:23.121964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.581 [2024-07-12 17:13:23.121997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:24:23.581 [2024-07-12 17:13:23.127685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.581 [2024-07-12 17:13:23.127711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.581 [2024-07-12 17:13:23.127747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.581 [2024-07-12 17:13:23.133341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.581 [2024-07-12 17:13:23.133373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.581 [2024-07-12 17:13:23.133405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.581 [2024-07-12 17:13:23.139180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.581 [2024-07-12 17:13:23.139206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.581 [2024-07-12 17:13:23.139236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.581 [2024-07-12 17:13:23.144982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.581 [2024-07-12 17:13:23.145009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.581 [2024-07-12 17:13:23.145039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.581 [2024-07-12 17:13:23.150693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.581 [2024-07-12 17:13:23.150733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.581 [2024-07-12 17:13:23.150757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.581 [2024-07-12 17:13:23.156470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.581 [2024-07-12 17:13:23.156496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.581 [2024-07-12 17:13:23.156527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.581 [2024-07-12 17:13:23.162282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.581 [2024-07-12 17:13:23.162308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.581 [2024-07-12 17:13:23.162338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.581 [2024-07-12 17:13:23.168090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.581 [2024-07-12 17:13:23.168118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.581 [2024-07-12 17:13:23.168156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.581 [2024-07-12 17:13:23.173640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.581 [2024-07-12 17:13:23.173667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.581 [2024-07-12 17:13:23.173697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.581 [2024-07-12 17:13:23.179250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.581 [2024-07-12 17:13:23.179278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.581 [2024-07-12 17:13:23.179309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.581 [2024-07-12 17:13:23.184816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.581 [2024-07-12 17:13:23.184843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.581 [2024-07-12 17:13:23.184874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.581 [2024-07-12 17:13:23.190129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.581 [2024-07-12 17:13:23.190155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.581 [2024-07-12 17:13:23.190186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.581 [2024-07-12 17:13:23.195492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.581 [2024-07-12 17:13:23.195518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.581 [2024-07-12 17:13:23.195548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.582 [2024-07-12 17:13:23.200795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.582 [2024-07-12 17:13:23.200821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.582 [2024-07-12 17:13:23.200852] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.582 [2024-07-12 17:13:23.206459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.582 [2024-07-12 17:13:23.206484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.582 [2024-07-12 17:13:23.206515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.582 [2024-07-12 17:13:23.213051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.582 [2024-07-12 17:13:23.213079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.582 [2024-07-12 17:13:23.213109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.582 [2024-07-12 17:13:23.220103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.582 [2024-07-12 17:13:23.220130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.582 [2024-07-12 17:13:23.220160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.582 [2024-07-12 17:13:23.225345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.582 [2024-07-12 17:13:23.225372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.582 [2024-07-12 17:13:23.225404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.582 [2024-07-12 17:13:23.229375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.582 [2024-07-12 17:13:23.229401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.582 [2024-07-12 17:13:23.229436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.582 [2024-07-12 17:13:23.234735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.582 [2024-07-12 17:13:23.234770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.582 [2024-07-12 17:13:23.234806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.582 [2024-07-12 17:13:23.240899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.582 [2024-07-12 17:13:23.240926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.582 [2024-07-12 17:13:23.240957] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.582 [2024-07-12 17:13:23.248222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.582 [2024-07-12 17:13:23.248249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.582 [2024-07-12 17:13:23.248278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.582 [2024-07-12 17:13:23.255423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.582 [2024-07-12 17:13:23.255451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.582 [2024-07-12 17:13:23.255482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.582 [2024-07-12 17:13:23.263813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.582 [2024-07-12 17:13:23.263842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.582 [2024-07-12 17:13:23.263873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.582 [2024-07-12 17:13:23.269804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.582 [2024-07-12 17:13:23.269833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.582 [2024-07-12 17:13:23.269864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.840 [2024-07-12 17:13:23.276237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.840 [2024-07-12 17:13:23.276264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.840 [2024-07-12 17:13:23.276294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.840 [2024-07-12 17:13:23.282186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.840 [2024-07-12 17:13:23.282212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.840 [2024-07-12 17:13:23.282243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.840 [2024-07-12 17:13:23.287559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.840 [2024-07-12 17:13:23.287590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:23.840 [2024-07-12 17:13:23.287622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.840 [2024-07-12 17:13:23.293014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.840 [2024-07-12 17:13:23.293040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.840 [2024-07-12 17:13:23.293069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.840 [2024-07-12 17:13:23.298451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.840 [2024-07-12 17:13:23.298477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.840 [2024-07-12 17:13:23.298508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.840 [2024-07-12 17:13:23.303880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.840 [2024-07-12 17:13:23.303907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.840 [2024-07-12 17:13:23.303939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.840 [2024-07-12 17:13:23.309529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.840 [2024-07-12 17:13:23.309555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.840 [2024-07-12 17:13:23.309585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.840 [2024-07-12 17:13:23.315288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.840 [2024-07-12 17:13:23.315313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.840 [2024-07-12 17:13:23.315343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.840 [2024-07-12 17:13:23.321774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.840 [2024-07-12 17:13:23.321800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.840 [2024-07-12 17:13:23.321831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.841 [2024-07-12 17:13:23.327841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.841 [2024-07-12 17:13:23.327867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7008 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.841 [2024-07-12 17:13:23.327882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.841 [2024-07-12 17:13:23.333757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.841 [2024-07-12 17:13:23.333811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.841 [2024-07-12 17:13:23.333827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.841 [2024-07-12 17:13:23.339782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.841 [2024-07-12 17:13:23.339809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.841 [2024-07-12 17:13:23.339840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.841 [2024-07-12 17:13:23.346974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.841 [2024-07-12 17:13:23.347001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.841 [2024-07-12 17:13:23.347035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.841 [2024-07-12 17:13:23.354771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.841 [2024-07-12 17:13:23.354802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.841 [2024-07-12 17:13:23.354833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.841 [2024-07-12 17:13:23.361542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.841 [2024-07-12 17:13:23.361568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.841 [2024-07-12 17:13:23.361599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.841 [2024-07-12 17:13:23.368282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.841 [2024-07-12 17:13:23.368309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.841 [2024-07-12 17:13:23.368339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.841 [2024-07-12 17:13:23.374495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.841 [2024-07-12 17:13:23.374522] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.841 [2024-07-12 17:13:23.374552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.841 [2024-07-12 17:13:23.380308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.841 [2024-07-12 17:13:23.380334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.841 [2024-07-12 17:13:23.380364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.841 [2024-07-12 17:13:23.386265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.841 [2024-07-12 17:13:23.386291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.841 [2024-07-12 17:13:23.386320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.841 [2024-07-12 17:13:23.392461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.841 [2024-07-12 17:13:23.392487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.841 [2024-07-12 17:13:23.392522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.841 [2024-07-12 17:13:23.398219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.841 [2024-07-12 17:13:23.398246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.841 [2024-07-12 17:13:23.398275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.841 [2024-07-12 17:13:23.404536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.841 [2024-07-12 17:13:23.404562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.841 [2024-07-12 17:13:23.404592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.841 [2024-07-12 17:13:23.411324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.841 [2024-07-12 17:13:23.411351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.841 [2024-07-12 17:13:23.411382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.841 [2024-07-12 17:13:23.418374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.841 [2024-07-12 17:13:23.418401] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.841 [2024-07-12 17:13:23.418431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.841 [2024-07-12 17:13:23.425395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.841 [2024-07-12 17:13:23.425422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.841 [2024-07-12 17:13:23.425453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.841 [2024-07-12 17:13:23.431960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.841 [2024-07-12 17:13:23.431987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.841 [2024-07-12 17:13:23.432018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.841 [2024-07-12 17:13:23.438561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.841 [2024-07-12 17:13:23.438586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.841 [2024-07-12 17:13:23.438616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.841 [2024-07-12 17:13:23.445666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.841 [2024-07-12 17:13:23.445692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.841 [2024-07-12 17:13:23.445724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.841 [2024-07-12 17:13:23.452535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.841 [2024-07-12 17:13:23.452566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.841 [2024-07-12 17:13:23.452596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.841 [2024-07-12 17:13:23.458383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.841 [2024-07-12 17:13:23.458409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.841 [2024-07-12 17:13:23.458439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.841 [2024-07-12 17:13:23.464045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 
00:24:23.841 [2024-07-12 17:13:23.464085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.841 [2024-07-12 17:13:23.464100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.841 [2024-07-12 17:13:23.470067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.841 [2024-07-12 17:13:23.470095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.841 [2024-07-12 17:13:23.470124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.841 [2024-07-12 17:13:23.476570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.841 [2024-07-12 17:13:23.476598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.841 [2024-07-12 17:13:23.476629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.841 [2024-07-12 17:13:23.483206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.841 [2024-07-12 17:13:23.483233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.841 [2024-07-12 17:13:23.483264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.841 [2024-07-12 17:13:23.490756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.841 [2024-07-12 17:13:23.490794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.841 [2024-07-12 17:13:23.490825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.841 [2024-07-12 17:13:23.497451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.841 [2024-07-12 17:13:23.497479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.841 [2024-07-12 17:13:23.497509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.841 [2024-07-12 17:13:23.504485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.841 [2024-07-12 17:13:23.504514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.841 [2024-07-12 17:13:23.504550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.841 [2024-07-12 17:13:23.511572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.841 [2024-07-12 17:13:23.511599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.841 [2024-07-12 17:13:23.511629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.841 [2024-07-12 17:13:23.519848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.842 [2024-07-12 17:13:23.519878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.842 [2024-07-12 17:13:23.519911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.842 [2024-07-12 17:13:23.529205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:23.842 [2024-07-12 17:13:23.529234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.842 [2024-07-12 17:13:23.529265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.099 [2024-07-12 17:13:23.539384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:24.099 [2024-07-12 17:13:23.539413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.099 [2024-07-12 17:13:23.539445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.099 [2024-07-12 17:13:23.548785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bd1d60) 00:24:24.099 [2024-07-12 17:13:23.548829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.099 [2024-07-12 17:13:23.548847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.099 00:24:24.099 Latency(us) 00:24:24.099 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:24.099 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:24.099 nvme0n1 : 2.00 4853.17 606.65 0.00 0.00 3292.01 885.95 12184.84 00:24:24.099 =================================================================================================================== 00:24:24.099 Total : 4853.17 606.65 0.00 0.00 3292.01 885.95 12184.84 00:24:24.099 0 00:24:24.099 17:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:24.099 17:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:24.099 17:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:24.099 | .driver_specific 00:24:24.099 | .nvme_error 00:24:24.099 | .status_code 00:24:24.099 | .command_transient_transport_error' 00:24:24.099 17:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:24.356 17:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 313 > 0 )) 00:24:24.356 17:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1221634 00:24:24.356 17:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1221634 ']' 00:24:24.356 17:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1221634 00:24:24.356 17:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:24.356 17:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:24.356 17:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1221634 00:24:24.356 17:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:24.356 17:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:24.356 17:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1221634' 00:24:24.356 killing process with pid 1221634 00:24:24.356 17:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1221634 00:24:24.356 Received shutdown signal, test time was about 2.000000 seconds 00:24:24.356 00:24:24.356 Latency(us) 00:24:24.356 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:24.356 =================================================================================================================== 00:24:24.356 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:24.356 17:13:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1221634 00:24:24.613 17:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:24:24.613 17:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:24.613 17:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:24:24.613 17:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:24:24.613 17:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:24:24.613 17:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1222044 00:24:24.613 17:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:24:24.613 17:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1222044 /var/tmp/bperf.sock 00:24:24.613 17:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1222044 ']' 00:24:24.613 17:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:24.613 17:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:24.613 17:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:24:24.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:24.613 17:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:24.613 17:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:24.613 [2024-07-12 17:13:24.158221] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:24:24.613 [2024-07-12 17:13:24.158298] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1222044 ] 00:24:24.613 EAL: No free 2048 kB hugepages reported on node 1 00:24:24.613 [2024-07-12 17:13:24.218673] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.871 [2024-07-12 17:13:24.331413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:24.871 17:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:24.871 17:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:24.871 17:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:24.871 17:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:25.128 17:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:25.128 17:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.128 17:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:25.128 17:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.128 17:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:25.128 17:13:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:25.384 nvme0n1 00:24:25.384 17:13:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:24:25.384 17:13:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.384 17:13:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:25.384 17:13:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.384 17:13:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:25.384 17:13:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:25.641 Running I/O for 2 seconds... 
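For readers following the xtrace above: the digest-error run reduces to a handful of SPDK RPC calls against a bdevperf instance listening on /var/tmp/bperf.sock. The sketch below is a condensed, illustrative outline of that flow, not the digest.sh script itself: it folds the traced helpers (bperf_rpc, rpc_cmd, bperf_py) into direct invocations, and the paths, target address, NQN, and bdevperf flags are simply copied from this run's log, so they would differ on another setup. Treat it as a minimal sketch under those assumptions.

    #!/usr/bin/env bash
    # Sketch of the error-injection flow traced above (paths/addresses taken from this log).
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    SOCK=/var/tmp/bperf.sock

    # 1. Start bdevperf in wait-for-RPC mode (-z) with the workload under test;
    #    the traced script then waits for $SOCK to appear before issuing RPCs.
    "$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &

    # 2. Keep per-type NVMe error statistics and retry indefinitely, so transient
    #    transport errors show up as counters instead of failing the job.
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # 3. Attach the NVMe-oF/TCP controller with data digest enabled (--ddgst).
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # 4. Make the target's crc32c accel operation produce corrupted digests
    #    (in the trace this goes through rpc_cmd against the target's RPC socket,
    #    i.e. rpc.py without -s).
    "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256

    # 5. Run the 2-second workload, then read back the transient-transport-error count.
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests
    errs=$("$SPDK/scripts/rpc.py" -s "$SOCK" bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( errs > 0 )) && echo "transient transport errors counted: $errs"

With the corruption injected, every digest mismatch the initiator detects (the "data digest error on tqpair" / "Data digest error on tqpair" records surrounding this sketch) is completed as COMMAND TRANSIENT TRANSPORT ERROR, and the test passes when the counter read in step 5 is greater than zero.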
00:24:25.641 [2024-07-12 17:13:25.160267] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190ed920 00:24:25.641 [2024-07-12 17:13:25.161304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:15492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.641 [2024-07-12 17:13:25.161338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:25.641 [2024-07-12 17:13:25.171541] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190edd58 00:24:25.641 [2024-07-12 17:13:25.172679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:15575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.641 [2024-07-12 17:13:25.172705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:25.641 [2024-07-12 17:13:25.183970] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f2d80 00:24:25.641 [2024-07-12 17:13:25.185275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:4952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.641 [2024-07-12 17:13:25.185300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:25.641 [2024-07-12 17:13:25.195678] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190e8d30 00:24:25.641 [2024-07-12 17:13:25.197197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:5226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.641 [2024-07-12 17:13:25.197221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:25.641 [2024-07-12 17:13:25.206109] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190e9168 00:24:25.641 [2024-07-12 17:13:25.207376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.641 [2024-07-12 17:13:25.207411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:25.641 [2024-07-12 17:13:25.216462] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f9b30 00:24:25.641 [2024-07-12 17:13:25.217593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:9803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.641 [2024-07-12 17:13:25.217617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:25.641 [2024-07-12 17:13:25.227110] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190e9e10 00:24:25.641 [2024-07-12 17:13:25.227803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.641 [2024-07-12 17:13:25.227828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 
sqhd:005b p:0 m:0 dnr:0 00:24:25.641 [2024-07-12 17:13:25.238326] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190e5220 00:24:25.641 [2024-07-12 17:13:25.239309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.641 [2024-07-12 17:13:25.239334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:25.641 [2024-07-12 17:13:25.249681] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190ddc00 00:24:25.641 [2024-07-12 17:13:25.250556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:15831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.641 [2024-07-12 17:13:25.250580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:25.641 [2024-07-12 17:13:25.262272] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190dfdc0 00:24:25.641 [2024-07-12 17:13:25.263804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.641 [2024-07-12 17:13:25.263831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:25.641 [2024-07-12 17:13:25.271528] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190e99d8 00:24:25.641 [2024-07-12 17:13:25.272566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:3120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.641 [2024-07-12 17:13:25.272590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:25.641 [2024-07-12 17:13:25.282698] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190e7818 00:24:25.641 [2024-07-12 17:13:25.283559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:24613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.641 [2024-07-12 17:13:25.283584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:25.641 [2024-07-12 17:13:25.295354] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f3a28 00:24:25.641 [2024-07-12 17:13:25.296981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.641 [2024-07-12 17:13:25.297007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:25.641 [2024-07-12 17:13:25.305562] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190e1710 00:24:25.641 [2024-07-12 17:13:25.306826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:10194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.641 [2024-07-12 17:13:25.306853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:64 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:25.641 [2024-07-12 17:13:25.315545] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190ec840 00:24:25.641 [2024-07-12 17:13:25.317165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.641 [2024-07-12 17:13:25.317195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:25.641 [2024-07-12 17:13:25.325082] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190e1b48 00:24:25.641 [2024-07-12 17:13:25.325844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:25140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.641 [2024-07-12 17:13:25.325869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:25.899 [2024-07-12 17:13:25.337269] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190e95a0 00:24:25.899 [2024-07-12 17:13:25.338159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.899 [2024-07-12 17:13:25.338187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:25.899 [2024-07-12 17:13:25.348772] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190e99d8 00:24:25.899 [2024-07-12 17:13:25.349557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:22768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.899 [2024-07-12 17:13:25.349582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:25.899 [2024-07-12 17:13:25.360109] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f96f8 00:24:25.899 [2024-07-12 17:13:25.361004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.899 [2024-07-12 17:13:25.361030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:25.899 [2024-07-12 17:13:25.371555] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190e8d30 00:24:25.899 [2024-07-12 17:13:25.372635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.899 [2024-07-12 17:13:25.372660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:25.899 [2024-07-12 17:13:25.381938] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190fb480 00:24:25.899 [2024-07-12 17:13:25.383008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:15128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.899 [2024-07-12 17:13:25.383053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:25.899 [2024-07-12 17:13:25.393139] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190fb8b8 00:24:25.899 [2024-07-12 17:13:25.394214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:17760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.899 [2024-07-12 17:13:25.394240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:25.899 [2024-07-12 17:13:25.404457] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190e0630 00:24:25.899 [2024-07-12 17:13:25.405492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:18357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.899 [2024-07-12 17:13:25.405519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:25.899 [2024-07-12 17:13:25.416937] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f6020 00:24:25.899 [2024-07-12 17:13:25.418571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:12742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.899 [2024-07-12 17:13:25.418608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:25.899 [2024-07-12 17:13:25.429226] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190ddc00 00:24:25.899 [2024-07-12 17:13:25.431084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.899 [2024-07-12 17:13:25.431123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:25.899 [2024-07-12 17:13:25.437260] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190fc998 00:24:25.899 [2024-07-12 17:13:25.438053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.899 [2024-07-12 17:13:25.438079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:25.899 [2024-07-12 17:13:25.449262] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f0788 00:24:25.899 [2024-07-12 17:13:25.450246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:18954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.899 [2024-07-12 17:13:25.450271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:25.899 [2024-07-12 17:13:25.459848] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190e49b0 00:24:25.899 [2024-07-12 17:13:25.460620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.899 [2024-07-12 17:13:25.460644] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:25.899 [2024-07-12 17:13:25.470666] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190ea680 00:24:25.900 [2024-07-12 17:13:25.471482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:6081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.900 [2024-07-12 17:13:25.471512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:25.900 [2024-07-12 17:13:25.482665] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f4b08 00:24:25.900 [2024-07-12 17:13:25.483607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.900 [2024-07-12 17:13:25.483632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:25.900 [2024-07-12 17:13:25.494420] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f7970 00:24:25.900 [2024-07-12 17:13:25.495515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:6679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.900 [2024-07-12 17:13:25.495545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:25.900 [2024-07-12 17:13:25.506149] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f7da8 00:24:25.900 [2024-07-12 17:13:25.507382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.900 [2024-07-12 17:13:25.507408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:25.900 [2024-07-12 17:13:25.516630] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190ff3c8 00:24:25.900 [2024-07-12 17:13:25.517499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.900 [2024-07-12 17:13:25.517524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:25.900 [2024-07-12 17:13:25.528211] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190de038 00:24:25.900 [2024-07-12 17:13:25.528916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:18173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.900 [2024-07-12 17:13:25.528943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:25.900 [2024-07-12 17:13:25.540077] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f0ff8 00:24:25.900 [2024-07-12 17:13:25.540835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.900 [2024-07-12 17:13:25.540862] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:25.900 [2024-07-12 17:13:25.551443] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190ff3c8 00:24:25.900 [2024-07-12 17:13:25.552553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:24571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.900 [2024-07-12 17:13:25.552578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:25.900 [2024-07-12 17:13:25.562775] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190e7818 00:24:25.900 [2024-07-12 17:13:25.564029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.900 [2024-07-12 17:13:25.564055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:25.900 [2024-07-12 17:13:25.574225] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f4298 00:24:25.900 [2024-07-12 17:13:25.575464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.900 [2024-07-12 17:13:25.575489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:25.900 [2024-07-12 17:13:25.585790] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190ea680 00:24:25.900 [2024-07-12 17:13:25.587004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.900 [2024-07-12 17:13:25.587031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:26.157 [2024-07-12 17:13:25.598385] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190e84c0 00:24:26.157 [2024-07-12 17:13:25.599816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:20 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.157 [2024-07-12 17:13:25.599844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:26.157 [2024-07-12 17:13:25.609818] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190e88f8 00:24:26.157 [2024-07-12 17:13:25.611348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:15995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.157 [2024-07-12 17:13:25.611373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:26.157 [2024-07-12 17:13:25.620422] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190fb480 00:24:26.157 [2024-07-12 17:13:25.621539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.157 [2024-07-12 
17:13:25.621564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:26.158 [2024-07-12 17:13:25.633260] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190fdeb0 00:24:26.158 [2024-07-12 17:13:25.634945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.158 [2024-07-12 17:13:25.634972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:26.158 [2024-07-12 17:13:25.645467] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190e3d08 00:24:26.158 [2024-07-12 17:13:25.647372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.158 [2024-07-12 17:13:25.647407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:26.158 [2024-07-12 17:13:25.653701] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f46d0 00:24:26.158 [2024-07-12 17:13:25.654509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:11286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.158 [2024-07-12 17:13:25.654534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:26.158 [2024-07-12 17:13:25.664743] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f7100 00:24:26.158 [2024-07-12 17:13:25.665573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.158 [2024-07-12 17:13:25.665598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:26.158 [2024-07-12 17:13:25.677680] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190fac10 00:24:26.158 [2024-07-12 17:13:25.678672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:6975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.158 [2024-07-12 17:13:25.678697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:26.158 [2024-07-12 17:13:25.689688] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f0350 00:24:26.158 [2024-07-12 17:13:25.690795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.158 [2024-07-12 17:13:25.690821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:26.158 [2024-07-12 17:13:25.700305] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f0788 00:24:26.158 [2024-07-12 17:13:25.701263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:26.158 [2024-07-12 17:13:25.701288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:26.158 [2024-07-12 17:13:25.710958] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f7100 00:24:26.158 [2024-07-12 17:13:25.711746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.158 [2024-07-12 17:13:25.711772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:26.158 [2024-07-12 17:13:25.724222] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190e2c28 00:24:26.158 [2024-07-12 17:13:25.725461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.158 [2024-07-12 17:13:25.725487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:26.158 [2024-07-12 17:13:25.734910] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f3e60 00:24:26.158 [2024-07-12 17:13:25.736003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.158 [2024-07-12 17:13:25.736043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:26.158 [2024-07-12 17:13:25.746609] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190feb58 00:24:26.158 [2024-07-12 17:13:25.747693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.158 [2024-07-12 17:13:25.747733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:26.158 [2024-07-12 17:13:25.758410] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f6020 00:24:26.158 [2024-07-12 17:13:25.759659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:17486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.158 [2024-07-12 17:13:25.759684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:26.158 [2024-07-12 17:13:25.769166] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f6458 00:24:26.158 [2024-07-12 17:13:25.770250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:22556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.158 [2024-07-12 17:13:25.770275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:26.158 [2024-07-12 17:13:25.780927] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190fbcf0 00:24:26.158 [2024-07-12 17:13:25.782004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10523 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:24:26.158 [2024-07-12 17:13:25.782045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:26.158 [2024-07-12 17:13:25.792593] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190e4578 00:24:26.158 [2024-07-12 17:13:25.793803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.158 [2024-07-12 17:13:25.793834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:26.158 [2024-07-12 17:13:25.803345] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f7da8 00:24:26.158 [2024-07-12 17:13:25.804421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.158 [2024-07-12 17:13:25.804446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:26.158 [2024-07-12 17:13:25.814018] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190ed0b0 00:24:26.158 [2024-07-12 17:13:25.814963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:21684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.158 [2024-07-12 17:13:25.814988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:26.158 [2024-07-12 17:13:25.827319] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190ea248 00:24:26.158 [2024-07-12 17:13:25.828614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:20265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.158 [2024-07-12 17:13:25.828639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:26.158 [2024-07-12 17:13:25.840234] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190e8088 00:24:26.158 [2024-07-12 17:13:25.842183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.158 [2024-07-12 17:13:25.842208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.158 [2024-07-12 17:13:25.848265] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190e9e10 00:24:26.158 [2024-07-12 17:13:25.849214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:18835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.158 [2024-07-12 17:13:25.849239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:26.416 [2024-07-12 17:13:25.862067] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190ea248 00:24:26.416 [2024-07-12 17:13:25.863161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:6032 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.416 [2024-07-12 17:13:25.863187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.416 [2024-07-12 17:13:25.872364] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f96f8 00:24:26.416 [2024-07-12 17:13:25.873630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:4433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.416 [2024-07-12 17:13:25.873655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:26.416 [2024-07-12 17:13:25.884452] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f46d0 00:24:26.416 [2024-07-12 17:13:25.885651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.416 [2024-07-12 17:13:25.885676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:26.416 [2024-07-12 17:13:25.896038] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190e6fa8 00:24:26.416 [2024-07-12 17:13:25.897124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:15986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.416 [2024-07-12 17:13:25.897155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.416 [2024-07-12 17:13:25.907588] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190ed920 00:24:26.416 [2024-07-12 17:13:25.908924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:22136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.416 [2024-07-12 17:13:25.908949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.416 [2024-07-12 17:13:25.918151] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190fd208 00:24:26.416 [2024-07-12 17:13:25.919479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:18698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.416 [2024-07-12 17:13:25.919503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:26.416 [2024-07-12 17:13:25.928654] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190e4140 00:24:26.416 [2024-07-12 17:13:25.929598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:6159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.416 [2024-07-12 17:13:25.929624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:26.416 [2024-07-12 17:13:25.939489] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190de038 00:24:26.416 [2024-07-12 17:13:25.940374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:80 nsid:1 lba:7910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.416 [2024-07-12 17:13:25.940398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:26.416 [2024-07-12 17:13:25.952150] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f4f40 00:24:26.417 [2024-07-12 17:13:25.953224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.417 [2024-07-12 17:13:25.953250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:26.417 [2024-07-12 17:13:25.963882] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190dece0 00:24:26.417 [2024-07-12 17:13:25.965073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.417 [2024-07-12 17:13:25.965098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:26.417 [2024-07-12 17:13:25.974446] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190e84c0 00:24:26.417 [2024-07-12 17:13:25.975487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:8885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.417 [2024-07-12 17:13:25.975511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:26.417 [2024-07-12 17:13:25.986290] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190fb048 00:24:26.417 [2024-07-12 17:13:25.987341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.417 [2024-07-12 17:13:25.987366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:26.417 [2024-07-12 17:13:25.997933] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190fbcf0 00:24:26.417 [2024-07-12 17:13:25.999122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.417 [2024-07-12 17:13:25.999146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:26.417 [2024-07-12 17:13:26.008469] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190fc128 00:24:26.417 [2024-07-12 17:13:26.009497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:16180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.417 [2024-07-12 17:13:26.009521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:26.417 [2024-07-12 17:13:26.020193] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190e12d8 00:24:26.417 [2024-07-12 17:13:26.021234] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:9241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.417 [2024-07-12 17:13:26.021259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:26.417 [2024-07-12 17:13:26.031851] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190e99d8 00:24:26.417 [2024-07-12 17:13:26.033040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.417 [2024-07-12 17:13:26.033064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:26.417 [2024-07-12 17:13:26.042464] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190e3d08 00:24:26.417 [2024-07-12 17:13:26.043484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.417 [2024-07-12 17:13:26.043509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:26.417 [2024-07-12 17:13:26.054171] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f4298 00:24:26.417 [2024-07-12 17:13:26.055201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.417 [2024-07-12 17:13:26.055226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:26.417 [2024-07-12 17:13:26.065855] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190e1710 00:24:26.417 [2024-07-12 17:13:26.067008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.417 [2024-07-12 17:13:26.067033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:26.417 [2024-07-12 17:13:26.076371] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190e1b48 00:24:26.417 [2024-07-12 17:13:26.077397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:18075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.417 [2024-07-12 17:13:26.077422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:26.417 [2024-07-12 17:13:26.088081] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190ea680 00:24:26.417 [2024-07-12 17:13:26.089098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.417 [2024-07-12 17:13:26.089123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:26.417 [2024-07-12 17:13:26.099772] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190e7c50 00:24:26.417 [2024-07-12 17:13:26.100932] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.417 [2024-07-12 17:13:26.100958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:26.675 [2024-07-12 17:13:26.110856] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190fda78 00:24:26.675 [2024-07-12 17:13:26.111935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:3667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.675 [2024-07-12 17:13:26.111961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:26.675 [2024-07-12 17:13:26.121820] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190e1b48 00:24:26.675 [2024-07-12 17:13:26.122655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:12650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.675 [2024-07-12 17:13:26.122679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:26.675 [2024-07-12 17:13:26.135159] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190ed4e8 00:24:26.675 [2024-07-12 17:13:26.136449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:21995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.675 [2024-07-12 17:13:26.136475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:26.675 [2024-07-12 17:13:26.145707] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190ed0b0 00:24:26.675 [2024-07-12 17:13:26.146856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.675 [2024-07-12 17:13:26.146882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:26.675 [2024-07-12 17:13:26.157426] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190ef270 00:24:26.675 [2024-07-12 17:13:26.158583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.675 [2024-07-12 17:13:26.158608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:26.675 [2024-07-12 17:13:26.169217] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190eee38 00:24:26.675 [2024-07-12 17:13:26.170505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:10243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.675 [2024-07-12 17:13:26.170529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:26.675 [2024-07-12 17:13:26.179972] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190e12d8 00:24:26.675 [2024-07-12 
17:13:26.181156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.675 [2024-07-12 17:13:26.181182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:26.675 [2024-07-12 17:13:26.191367] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190e5a90 00:24:26.675 [2024-07-12 17:13:26.192487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.675 [2024-07-12 17:13:26.192517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:26.675 [2024-07-12 17:13:26.204076] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f7da8 00:24:26.675 [2024-07-12 17:13:26.205356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.675 [2024-07-12 17:13:26.205382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:26.675 [2024-07-12 17:13:26.215866] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190ea680 00:24:26.675 [2024-07-12 17:13:26.217336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:23026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.675 [2024-07-12 17:13:26.217361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:26.675 [2024-07-12 17:13:26.226658] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190eaab8 00:24:26.675 [2024-07-12 17:13:26.227974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.675 [2024-07-12 17:13:26.228000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:26.675 [2024-07-12 17:13:26.237447] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f1430 00:24:26.675 [2024-07-12 17:13:26.238574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.675 [2024-07-12 17:13:26.238598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:26.675 [2024-07-12 17:13:26.249137] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190ec408 00:24:26.675 [2024-07-12 17:13:26.250271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.675 [2024-07-12 17:13:26.250297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:26.675 [2024-07-12 17:13:26.259899] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190e5658 
00:24:26.675 [2024-07-12 17:13:26.261011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:2615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.675 [2024-07-12 17:13:26.261036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:26.675 [2024-07-12 17:13:26.272482] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190ecc78 00:24:26.675 [2024-07-12 17:13:26.273764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:20307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.675 [2024-07-12 17:13:26.273803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:26.675 [2024-07-12 17:13:26.284223] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190e84c0 00:24:26.675 [2024-07-12 17:13:26.285626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:25241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.675 [2024-07-12 17:13:26.285651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:26.675 [2024-07-12 17:13:26.293447] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190ff3c8 00:24:26.675 [2024-07-12 17:13:26.294298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:13188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.675 [2024-07-12 17:13:26.294322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:26.675 [2024-07-12 17:13:26.305186] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190edd58 00:24:26.675 [2024-07-12 17:13:26.306175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.675 [2024-07-12 17:13:26.306199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:26.675 [2024-07-12 17:13:26.315823] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190e95a0 00:24:26.675 [2024-07-12 17:13:26.316631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:17864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.675 [2024-07-12 17:13:26.316655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:26.675 [2024-07-12 17:13:26.327625] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190dece0 00:24:26.675 [2024-07-12 17:13:26.328453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.675 [2024-07-12 17:13:26.328478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:26.675 [2024-07-12 17:13:26.339286] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) 
with pdu=0x2000190ef270 00:24:26.675 [2024-07-12 17:13:26.340252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:21895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.675 [2024-07-12 17:13:26.340277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:26.675 [2024-07-12 17:13:26.349902] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190fd640 00:24:26.675 [2024-07-12 17:13:26.350822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.675 [2024-07-12 17:13:26.350847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:26.676 [2024-07-12 17:13:26.362557] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190e27f0 00:24:26.676 [2024-07-12 17:13:26.363703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:5048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.676 [2024-07-12 17:13:26.363728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:26.933 [2024-07-12 17:13:26.376239] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f8e88 00:24:26.933 [2024-07-12 17:13:26.377915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.933 [2024-07-12 17:13:26.377942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:26.933 [2024-07-12 17:13:26.386824] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190ff3c8 00:24:26.933 [2024-07-12 17:13:26.388089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:25365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.933 [2024-07-12 17:13:26.388114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:26.933 [2024-07-12 17:13:26.397075] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190dece0 00:24:26.933 [2024-07-12 17:13:26.398726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.933 [2024-07-12 17:13:26.398772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:26.933 [2024-07-12 17:13:26.407526] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190de470 00:24:26.933 [2024-07-12 17:13:26.408308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.933 [2024-07-12 17:13:26.408332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:26.933 [2024-07-12 17:13:26.417874] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x25d30d0) with pdu=0x2000190ecc78 00:24:26.933 [2024-07-12 17:13:26.418639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.933 [2024-07-12 17:13:26.418663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:26.933 [2024-07-12 17:13:26.430177] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190e1b48 00:24:26.933 [2024-07-12 17:13:26.431128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.933 [2024-07-12 17:13:26.431152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:26.933 [2024-07-12 17:13:26.441644] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f7970 00:24:26.933 [2024-07-12 17:13:26.442588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.933 [2024-07-12 17:13:26.442612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:26.933 [2024-07-12 17:13:26.453018] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190e6300 00:24:26.933 [2024-07-12 17:13:26.454092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.933 [2024-07-12 17:13:26.454117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:26.933 [2024-07-12 17:13:26.463478] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190ee5c8 00:24:26.933 [2024-07-12 17:13:26.464504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.933 [2024-07-12 17:13:26.464529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:26.933 [2024-07-12 17:13:26.474981] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190feb58 00:24:26.933 [2024-07-12 17:13:26.476213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.933 [2024-07-12 17:13:26.476238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:26.933 [2024-07-12 17:13:26.486861] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f4f40 00:24:26.933 [2024-07-12 17:13:26.488230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.933 [2024-07-12 17:13:26.488259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:26.933 [2024-07-12 17:13:26.498397] tcp.c:2067:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f8a50 00:24:26.933 [2024-07-12 17:13:26.499842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.933 [2024-07-12 17:13:26.499867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:26.933 [2024-07-12 17:13:26.509930] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190fda78 00:24:26.933 [2024-07-12 17:13:26.511533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:24548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.933 [2024-07-12 17:13:26.511558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:26.933 [2024-07-12 17:13:26.521418] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190fef90 00:24:26.933 [2024-07-12 17:13:26.523194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.933 [2024-07-12 17:13:26.523219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:26.933 [2024-07-12 17:13:26.529269] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f1ca0 00:24:26.933 [2024-07-12 17:13:26.530005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.933 [2024-07-12 17:13:26.530029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:26.933 [2024-07-12 17:13:26.540786] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f6890 00:24:26.933 [2024-07-12 17:13:26.541677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:6238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.933 [2024-07-12 17:13:26.541700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:26.933 [2024-07-12 17:13:26.552318] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190e6738 00:24:26.933 [2024-07-12 17:13:26.553354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:7093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.933 [2024-07-12 17:13:26.553380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:26.933 [2024-07-12 17:13:26.562706] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190fa7d8 00:24:26.933 [2024-07-12 17:13:26.563754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.933 [2024-07-12 17:13:26.563780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:26.933 [2024-07-12 17:13:26.574223] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190e0630 00:24:26.933 [2024-07-12 17:13:26.575375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:23106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.933 [2024-07-12 17:13:26.575400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:26.933 [2024-07-12 17:13:26.585639] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f5378 00:24:26.933 [2024-07-12 17:13:26.586944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.933 [2024-07-12 17:13:26.586969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:26.933 [2024-07-12 17:13:26.595844] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190dfdc0 00:24:26.933 [2024-07-12 17:13:26.596768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.933 [2024-07-12 17:13:26.596793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:26.933 [2024-07-12 17:13:26.607059] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190ee5c8 00:24:26.933 [2024-07-12 17:13:26.607752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:18533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.933 [2024-07-12 17:13:26.607777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:26.933 [2024-07-12 17:13:26.618580] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190ebfd0 00:24:26.933 [2024-07-12 17:13:26.619494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.933 [2024-07-12 17:13:26.619519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:27.191 [2024-07-12 17:13:26.630629] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190fa3a0 00:24:27.191 [2024-07-12 17:13:26.631852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.191 [2024-07-12 17:13:26.631879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:27.191 [2024-07-12 17:13:26.641894] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190dece0 00:24:27.191 [2024-07-12 17:13:26.643176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:23462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.191 [2024-07-12 17:13:26.643200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:27.191 
[2024-07-12 17:13:26.651176] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190ee5c8 00:24:27.191 [2024-07-12 17:13:26.651800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:15408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.191 [2024-07-12 17:13:26.651828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:27.191 [2024-07-12 17:13:26.663178] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f0788 00:24:27.191 [2024-07-12 17:13:26.663899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.191 [2024-07-12 17:13:26.663927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:27.191 [2024-07-12 17:13:26.674941] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190e01f8 00:24:27.191 [2024-07-12 17:13:26.675814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:20817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.191 [2024-07-12 17:13:26.675842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:27.191 [2024-07-12 17:13:26.685494] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f5378 00:24:27.191 [2024-07-12 17:13:26.687205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:10529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.191 [2024-07-12 17:13:26.687231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:27.191 [2024-07-12 17:13:26.695532] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190e1f80 00:24:27.191 [2024-07-12 17:13:26.696341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.191 [2024-07-12 17:13:26.696365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:27.191 [2024-07-12 17:13:26.707850] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f7100 00:24:27.191 [2024-07-12 17:13:26.708748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:14460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.191 [2024-07-12 17:13:26.708774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:27.191 [2024-07-12 17:13:26.719222] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f6020 00:24:27.191 [2024-07-12 17:13:26.720266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:6517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.191 [2024-07-12 17:13:26.720290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 
sqhd:0053 p:0 m:0 dnr:0 00:24:27.191 [2024-07-12 17:13:26.729485] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190fc560 00:24:27.192 [2024-07-12 17:13:26.730490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.192 [2024-07-12 17:13:26.730513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:27.192 [2024-07-12 17:13:26.740890] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f4298 00:24:27.192 [2024-07-12 17:13:26.742072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:17414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.192 [2024-07-12 17:13:26.742096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:27.192 [2024-07-12 17:13:26.752305] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f8618 00:24:27.192 [2024-07-12 17:13:26.753651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:6355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.192 [2024-07-12 17:13:26.753675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:27.192 [2024-07-12 17:13:26.762574] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190e01f8 00:24:27.192 [2024-07-12 17:13:26.763513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.192 [2024-07-12 17:13:26.763537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:27.192 [2024-07-12 17:13:26.773516] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190efae0 00:24:27.192 [2024-07-12 17:13:26.774410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.192 [2024-07-12 17:13:26.774439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:27.192 [2024-07-12 17:13:26.784799] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190df550 00:24:27.192 [2024-07-12 17:13:26.785490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.192 [2024-07-12 17:13:26.785515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:27.192 [2024-07-12 17:13:26.797251] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190e2c28 00:24:27.192 [2024-07-12 17:13:26.798730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:1920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.192 [2024-07-12 17:13:26.798774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:87 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:27.192 [2024-07-12 17:13:26.806654] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190ed920 00:24:27.192 [2024-07-12 17:13:26.807601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.192 [2024-07-12 17:13:26.807625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:27.192 [2024-07-12 17:13:26.816862] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f6890 00:24:27.192 [2024-07-12 17:13:26.817786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:23277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.192 [2024-07-12 17:13:26.817810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:27.192 [2024-07-12 17:13:26.829225] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f4298 00:24:27.192 [2024-07-12 17:13:26.830311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:14443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.192 [2024-07-12 17:13:26.830336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:27.192 [2024-07-12 17:13:26.840600] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f4b08 00:24:27.192 [2024-07-12 17:13:26.841783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:23088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.192 [2024-07-12 17:13:26.841808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:27.192 [2024-07-12 17:13:26.850903] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f46d0 00:24:27.192 [2024-07-12 17:13:26.851976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:21947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.192 [2024-07-12 17:13:26.852002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:27.192 [2024-07-12 17:13:26.861290] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190fac10 00:24:27.192 [2024-07-12 17:13:26.862212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.192 [2024-07-12 17:13:26.862236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:27.192 [2024-07-12 17:13:26.872595] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190e6b70 00:24:27.192 [2024-07-12 17:13:26.873542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:20269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.192 [2024-07-12 17:13:26.873567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:27.192 [2024-07-12 17:13:26.883407] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f0350 00:24:27.192 [2024-07-12 17:13:26.884391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.192 [2024-07-12 17:13:26.884415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:27.450 [2024-07-12 17:13:26.896438] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f0bc0 00:24:27.450 [2024-07-12 17:13:26.897515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.450 [2024-07-12 17:13:26.897540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:27.450 [2024-07-12 17:13:26.906721] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f4b08 00:24:27.450 [2024-07-12 17:13:26.907800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.450 [2024-07-12 17:13:26.907825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:27.450 [2024-07-12 17:13:26.918923] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f1430 00:24:27.450 [2024-07-12 17:13:26.920178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:12075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.450 [2024-07-12 17:13:26.920203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:27.450 [2024-07-12 17:13:26.930353] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f7538 00:24:27.450 [2024-07-12 17:13:26.931684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.450 [2024-07-12 17:13:26.931708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:27.450 [2024-07-12 17:13:26.940653] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f7100 00:24:27.450 [2024-07-12 17:13:26.941965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:8421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.450 [2024-07-12 17:13:26.941992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:27.450 [2024-07-12 17:13:26.951330] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190e0630 00:24:27.450 [2024-07-12 17:13:26.952387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:24252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.450 [2024-07-12 17:13:26.952411] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:27.450 [2024-07-12 17:13:26.961999] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190ff3c8 00:24:27.450 [2024-07-12 17:13:26.962628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:7799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.450 [2024-07-12 17:13:26.962652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:27.450 [2024-07-12 17:13:26.973211] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190fa3a0 00:24:27.450 [2024-07-12 17:13:26.974136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:15019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.450 [2024-07-12 17:13:26.974160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:27.450 [2024-07-12 17:13:26.983408] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190e3d08 00:24:27.450 [2024-07-12 17:13:26.984355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:23198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.450 [2024-07-12 17:13:26.984380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:27.450 [2024-07-12 17:13:26.995684] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f7100 00:24:27.450 [2024-07-12 17:13:26.996750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.450 [2024-07-12 17:13:26.996775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.450 [2024-07-12 17:13:27.007064] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f2948 00:24:27.450 [2024-07-12 17:13:27.008278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:16659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.450 [2024-07-12 17:13:27.008302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.450 [2024-07-12 17:13:27.017341] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f2510 00:24:27.450 [2024-07-12 17:13:27.018385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.450 [2024-07-12 17:13:27.018409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:27.450 [2024-07-12 17:13:27.027649] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f1868 00:24:27.450 [2024-07-12 17:13:27.028573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.450 [2024-07-12 17:13:27.028597] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.450 [2024-07-12 17:13:27.038994] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190fe2e8 00:24:27.450 [2024-07-12 17:13:27.039904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.450 [2024-07-12 17:13:27.039930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.450 [2024-07-12 17:13:27.051611] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190fdeb0 00:24:27.450 [2024-07-12 17:13:27.052697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.450 [2024-07-12 17:13:27.052735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.450 [2024-07-12 17:13:27.062815] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190e3d08 00:24:27.450 [2024-07-12 17:13:27.064161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:14122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.450 [2024-07-12 17:13:27.064189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.450 [2024-07-12 17:13:27.074264] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f8618 00:24:27.450 [2024-07-12 17:13:27.075717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:5751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.450 [2024-07-12 17:13:27.075748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.450 [2024-07-12 17:13:27.084494] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f7da8 00:24:27.450 [2024-07-12 17:13:27.085798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:4004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.450 [2024-07-12 17:13:27.085823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:27.450 [2024-07-12 17:13:27.093810] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190fc560 00:24:27.450 [2024-07-12 17:13:27.094590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:23023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.450 [2024-07-12 17:13:27.094614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.450 [2024-07-12 17:13:27.105965] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190e49b0 00:24:27.450 [2024-07-12 17:13:27.107785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.450 [2024-07-12 
17:13:27.107810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:27.450 [2024-07-12 17:13:27.117383] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f7100
00:24:27.450 [2024-07-12 17:13:27.118733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:11739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.450 [2024-07-12 17:13:27.118764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:27.450 [2024-07-12 17:13:27.130138] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190f1ca0
00:24:27.450 [2024-07-12 17:13:27.131949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:6585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.450 [2024-07-12 17:13:27.131974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:27.450 [2024-07-12 17:13:27.139308] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190e1b48
00:24:27.450 [2024-07-12 17:13:27.140252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:15328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.450 [2024-07-12 17:13:27.140277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:24:27.707 [2024-07-12 17:13:27.151410] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25d30d0) with pdu=0x2000190ff3c8
00:24:27.708 [2024-07-12 17:13:27.152610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:3114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.708 [2024-07-12 17:13:27.152633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:24:27.708
00:24:27.708 Latency(us)
00:24:27.708 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:27.708 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:24:27.708 nvme0n1 : 2.01 22683.14 88.61 0.00 0.00 5634.63 2754.94 13981.01
00:24:27.708 ===================================================================================================================
00:24:27.708 Total : 22683.14 88.61 0.00 0.00 5634.63 2754.94 13981.01
00:24:27.708 0
00:24:27.708 17:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:24:27.708 17:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:24:27.708 17:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:27.708 | .driver_specific
00:24:27.708 | .nvme_error
00:24:27.708 | .status_code
00:24:27.708 | .command_transient_transport_error'
00:24:27.708 17:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:24:27.965 17:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 178 > 0 ))
00:24:27.965 17:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1222044
00:24:27.965 17:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1222044 ']'
00:24:27.965 17:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1222044
00:24:27.965 17:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:24:27.965 17:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:27.965 17:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1222044
00:24:27.965 17:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:24:27.965 17:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:24:27.965 17:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1222044'
00:24:27.965 killing process with pid 1222044
00:24:27.965 17:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1222044
00:24:27.965 Received shutdown signal, test time was about 2.000000 seconds
00:24:27.965
00:24:27.965 Latency(us)
00:24:27.965 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:27.965 ===================================================================================================================
00:24:27.965 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:27.965 17:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1222044
00:24:28.222 17:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:24:28.222 17:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:24:28.222 17:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:24:28.222 17:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:24:28.222 17:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:24:28.222 17:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1222538
00:24:28.222 17:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1222538 /var/tmp/bperf.sock
00:24:28.222 17:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:24:28.222 17:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1222538 ']'
00:24:28.222 17:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:24:28.222 17:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:24:28.222 17:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:24:28.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
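Annotation: the get_transient_errcount check traced above reduces to one RPC call plus a jq filter. The sketch below is a paraphrase of that traced sequence, not the test script itself; it assumes bdevperf is still serving RPC on /var/tmp/bperf.sock and uses SPDK_DIR as a stand-in for the checkout path seen in the trace. The 178 completions seen in this run are just the observed value; the test only requires a non-zero count.

# Minimal sketch, assuming a running bdevperf on /var/tmp/bperf.sock.
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}   # placeholder default

get_transient_errcount() {
    # bdev_get_iostat exposes per-status-code NVMe error counters because the
    # bdev_nvme layer was configured with --nvme-error-stat before the run.
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b "$1" \
        | jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error'
}

errcount=$(get_transient_errcount nvme0n1)
(( errcount > 0 )) || exit 1   # this run saw 178 such completions; any non-zero count passes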
00:24:28.222 17:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:28.222 17:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:28.222 [2024-07-12 17:13:27.723812] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:24:28.222 [2024-07-12 17:13:27.723892] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1222538 ] 00:24:28.222 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:28.222 Zero copy mechanism will not be used. 00:24:28.222 EAL: No free 2048 kB hugepages reported on node 1 00:24:28.222 [2024-07-12 17:13:27.782893] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:28.222 [2024-07-12 17:13:27.891045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:28.479 17:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:28.479 17:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:28.479 17:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:28.479 17:13:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:28.736 17:13:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:28.736 17:13:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.736 17:13:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:28.736 17:13:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.736 17:13:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:28.736 17:13:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:28.993 nvme0n1 00:24:28.993 17:13:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:24:28.993 17:13:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.993 17:13:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:28.993 17:13:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.993 17:13:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:28.993 17:13:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:29.251 I/O size of 131072 is greater than zero copy threshold (65536). 
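Annotation: condensed from the xtrace above, the RPC sequence that arms this run_bperf_err randwrite 131072 16 pass looks roughly like the following. It is a sketch, not the script: bperf_rpc corresponds to rpc.py -s /var/tmp/bperf.sock (the bdevperf instance), while the accel_error_inject_error calls go through rpc_cmd, which here appears to address the nvmf target app on its default RPC socket (an inference, not shown in this excerpt); address, port, and subsystem NQN are copied from the trace.

# Sketch of the digest-error setup sequence, under the assumptions stated above.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
BPERF="$RPC -s /var/tmp/bperf.sock"   # bdevperf RPC endpoint
TGT="$RPC"                            # target app, default socket (assumption)

# Keep per-status-code NVMe error counters and retry failed I/O indefinitely.
$BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Make sure no CRC32C corruption is active while the controller attaches.
$TGT accel_error_inject_error -o crc32c -t disable

# Attach over TCP with data digest enabled, so every data PDU carries a CRC32C.
$BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt 32 CRC32C calculations, then drive the randwrite workload.
$TGT accel_error_inject_error -o crc32c -t corrupt -i 32
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests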
00:24:29.251 Zero copy mechanism will not be used. 00:24:29.251 Running I/O for 2 seconds... 00:24:29.251 [2024-07-12 17:13:28.776543] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.251 [2024-07-12 17:13:28.776866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.251 [2024-07-12 17:13:28.776904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.251 [2024-07-12 17:13:28.784915] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.251 [2024-07-12 17:13:28.785204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.251 [2024-07-12 17:13:28.785233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.251 [2024-07-12 17:13:28.792451] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.251 [2024-07-12 17:13:28.792728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.251 [2024-07-12 17:13:28.792778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.251 [2024-07-12 17:13:28.800939] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.251 [2024-07-12 17:13:28.801234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.251 [2024-07-12 17:13:28.801262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.251 [2024-07-12 17:13:28.810095] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.251 [2024-07-12 17:13:28.810370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.251 [2024-07-12 17:13:28.810404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.251 [2024-07-12 17:13:28.817924] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.251 [2024-07-12 17:13:28.818234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.251 [2024-07-12 17:13:28.818262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.252 [2024-07-12 17:13:28.824962] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.252 [2024-07-12 17:13:28.825281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.252 [2024-07-12 17:13:28.825309] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.252 [2024-07-12 17:13:28.831709] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.252 [2024-07-12 17:13:28.832023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.252 [2024-07-12 17:13:28.832050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.252 [2024-07-12 17:13:28.837549] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.252 [2024-07-12 17:13:28.837851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.252 [2024-07-12 17:13:28.837879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.252 [2024-07-12 17:13:28.844151] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.252 [2024-07-12 17:13:28.844426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.252 [2024-07-12 17:13:28.844459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.252 [2024-07-12 17:13:28.851549] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.252 [2024-07-12 17:13:28.851851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.252 [2024-07-12 17:13:28.851878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.252 [2024-07-12 17:13:28.857922] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.252 [2024-07-12 17:13:28.858204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.252 [2024-07-12 17:13:28.858231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.252 [2024-07-12 17:13:28.864338] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.252 [2024-07-12 17:13:28.864614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.252 [2024-07-12 17:13:28.864641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.252 [2024-07-12 17:13:28.870680] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.252 [2024-07-12 17:13:28.870986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
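Annotation: each injected corruption shows up in the console as a data_crc32_calc_done digest error followed by the failing WRITE and its TRANSIENT TRANSPORT ERROR completion, which is what bdev_get_iostat later reports in aggregate. If you need a rough tally straight from a saved console log instead (the file name below is a placeholder), grep works regardless of how the lines are wrapped:

# Count transient transport error completions and digest errors in a saved log.
grep -o 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' console.log | wc -l
grep -o 'Data digest error on tqpair' console.log | wc -l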
00:24:29.252 [2024-07-12 17:13:28.871013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.252 [2024-07-12 17:13:28.877065] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.252 [2024-07-12 17:13:28.877341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.252 [2024-07-12 17:13:28.877368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.252 [2024-07-12 17:13:28.883229] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.252 [2024-07-12 17:13:28.883505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.252 [2024-07-12 17:13:28.883532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.252 [2024-07-12 17:13:28.889359] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.252 [2024-07-12 17:13:28.889626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.252 [2024-07-12 17:13:28.889653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.252 [2024-07-12 17:13:28.895948] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.252 [2024-07-12 17:13:28.896243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.252 [2024-07-12 17:13:28.896270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.252 [2024-07-12 17:13:28.901848] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.252 [2024-07-12 17:13:28.902213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.252 [2024-07-12 17:13:28.902239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.252 [2024-07-12 17:13:28.907864] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.252 [2024-07-12 17:13:28.908153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.252 [2024-07-12 17:13:28.908180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.252 [2024-07-12 17:13:28.913556] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.252 [2024-07-12 17:13:28.913858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.252 [2024-07-12 17:13:28.913885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.252 [2024-07-12 17:13:28.919389] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.252 [2024-07-12 17:13:28.919705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.252 [2024-07-12 17:13:28.919755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.252 [2024-07-12 17:13:28.925830] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.252 [2024-07-12 17:13:28.926139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.252 [2024-07-12 17:13:28.926165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.252 [2024-07-12 17:13:28.932352] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.252 [2024-07-12 17:13:28.932629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.252 [2024-07-12 17:13:28.932656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.252 [2024-07-12 17:13:28.938644] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.252 [2024-07-12 17:13:28.938969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.252 [2024-07-12 17:13:28.938998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.252 [2024-07-12 17:13:28.944687] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.252 [2024-07-12 17:13:28.945029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.252 [2024-07-12 17:13:28.945059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.511 [2024-07-12 17:13:28.951598] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.511 [2024-07-12 17:13:28.951967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.511 [2024-07-12 17:13:28.951994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.511 [2024-07-12 17:13:28.958172] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.511 [2024-07-12 17:13:28.958451] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.511 [2024-07-12 17:13:28.958478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.511 [2024-07-12 17:13:28.965275] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.511 [2024-07-12 17:13:28.965553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.511 [2024-07-12 17:13:28.965580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.511 [2024-07-12 17:13:28.972823] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.511 [2024-07-12 17:13:28.973123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.511 [2024-07-12 17:13:28.973150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.511 [2024-07-12 17:13:28.980915] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.511 [2024-07-12 17:13:28.981265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.511 [2024-07-12 17:13:28.981292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.511 [2024-07-12 17:13:28.988638] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.511 [2024-07-12 17:13:28.988962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.511 [2024-07-12 17:13:28.988991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.511 [2024-07-12 17:13:28.996654] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.511 [2024-07-12 17:13:28.996959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.511 [2024-07-12 17:13:28.996987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.511 [2024-07-12 17:13:29.003342] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.511 [2024-07-12 17:13:29.003619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.511 [2024-07-12 17:13:29.003645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.511 [2024-07-12 17:13:29.010101] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.511 [2024-07-12 17:13:29.010314] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.511 [2024-07-12 17:13:29.010340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.511 [2024-07-12 17:13:29.017030] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.511 [2024-07-12 17:13:29.017343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.511 [2024-07-12 17:13:29.017376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.511 [2024-07-12 17:13:29.023397] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.511 [2024-07-12 17:13:29.023660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.511 [2024-07-12 17:13:29.023686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.511 [2024-07-12 17:13:29.029695] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.511 [2024-07-12 17:13:29.030007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.511 [2024-07-12 17:13:29.030049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.511 [2024-07-12 17:13:29.036433] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.511 [2024-07-12 17:13:29.036733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.511 [2024-07-12 17:13:29.036767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.511 [2024-07-12 17:13:29.042631] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.511 [2024-07-12 17:13:29.042933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.511 [2024-07-12 17:13:29.042961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.511 [2024-07-12 17:13:29.048954] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.511 [2024-07-12 17:13:29.049266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.511 [2024-07-12 17:13:29.049292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.511 [2024-07-12 17:13:29.055307] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 
00:24:29.511 [2024-07-12 17:13:29.055590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.511 [2024-07-12 17:13:29.055617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.511 [2024-07-12 17:13:29.062814] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.511 [2024-07-12 17:13:29.063098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.511 [2024-07-12 17:13:29.063125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.511 [2024-07-12 17:13:29.069775] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.511 [2024-07-12 17:13:29.070082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.511 [2024-07-12 17:13:29.070109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.511 [2024-07-12 17:13:29.077989] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.511 [2024-07-12 17:13:29.078300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.511 [2024-07-12 17:13:29.078327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.511 [2024-07-12 17:13:29.084635] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.511 [2024-07-12 17:13:29.084948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.512 [2024-07-12 17:13:29.084976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.512 [2024-07-12 17:13:29.091108] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.512 [2024-07-12 17:13:29.091393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.512 [2024-07-12 17:13:29.091427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.512 [2024-07-12 17:13:29.097384] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.512 [2024-07-12 17:13:29.097668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.512 [2024-07-12 17:13:29.097694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.512 [2024-07-12 17:13:29.103561] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.512 [2024-07-12 17:13:29.103891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.512 [2024-07-12 17:13:29.103919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.512 [2024-07-12 17:13:29.110555] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.512 [2024-07-12 17:13:29.110862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.512 [2024-07-12 17:13:29.110889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.512 [2024-07-12 17:13:29.117806] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.512 [2024-07-12 17:13:29.118124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.512 [2024-07-12 17:13:29.118150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.512 [2024-07-12 17:13:29.124344] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.512 [2024-07-12 17:13:29.124644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.512 [2024-07-12 17:13:29.124670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.512 [2024-07-12 17:13:29.130614] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.512 [2024-07-12 17:13:29.130927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.512 [2024-07-12 17:13:29.130954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.512 [2024-07-12 17:13:29.137009] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.512 [2024-07-12 17:13:29.137341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.512 [2024-07-12 17:13:29.137367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.512 [2024-07-12 17:13:29.143647] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.512 [2024-07-12 17:13:29.144009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.512 [2024-07-12 17:13:29.144036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.512 [2024-07-12 17:13:29.150960] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.512 [2024-07-12 17:13:29.151256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.512 [2024-07-12 17:13:29.151282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.512 [2024-07-12 17:13:29.158491] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.512 [2024-07-12 17:13:29.158804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.512 [2024-07-12 17:13:29.158831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.512 [2024-07-12 17:13:29.165327] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.512 [2024-07-12 17:13:29.165617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.512 [2024-07-12 17:13:29.165643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.512 [2024-07-12 17:13:29.171889] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.512 [2024-07-12 17:13:29.172199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.512 [2024-07-12 17:13:29.172225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.512 [2024-07-12 17:13:29.178886] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.512 [2024-07-12 17:13:29.179188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.512 [2024-07-12 17:13:29.179214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.512 [2024-07-12 17:13:29.185484] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.512 [2024-07-12 17:13:29.185799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.512 [2024-07-12 17:13:29.185826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.512 [2024-07-12 17:13:29.191977] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.512 [2024-07-12 17:13:29.192258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.512 [2024-07-12 17:13:29.192289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:24:29.512 [2024-07-12 17:13:29.199066] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.512 [2024-07-12 17:13:29.199358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.512 [2024-07-12 17:13:29.199384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.771 [2024-07-12 17:13:29.206410] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.771 [2024-07-12 17:13:29.206729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.771 [2024-07-12 17:13:29.206763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.771 [2024-07-12 17:13:29.213180] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.771 [2024-07-12 17:13:29.213521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.771 [2024-07-12 17:13:29.213546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.771 [2024-07-12 17:13:29.219971] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.771 [2024-07-12 17:13:29.220281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.771 [2024-07-12 17:13:29.220306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.771 [2024-07-12 17:13:29.227246] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.771 [2024-07-12 17:13:29.227590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.771 [2024-07-12 17:13:29.227616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.771 [2024-07-12 17:13:29.234918] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.771 [2024-07-12 17:13:29.235228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.771 [2024-07-12 17:13:29.235254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.771 [2024-07-12 17:13:29.242047] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.771 [2024-07-12 17:13:29.242308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.771 [2024-07-12 17:13:29.242334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.771 [2024-07-12 17:13:29.248640] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.771 [2024-07-12 17:13:29.248952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.771 [2024-07-12 17:13:29.248980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.771 [2024-07-12 17:13:29.254490] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.771 [2024-07-12 17:13:29.254804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.771 [2024-07-12 17:13:29.254832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.771 [2024-07-12 17:13:29.260304] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.771 [2024-07-12 17:13:29.260589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.771 [2024-07-12 17:13:29.260616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.771 [2024-07-12 17:13:29.266387] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.771 [2024-07-12 17:13:29.266684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.771 [2024-07-12 17:13:29.266711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.771 [2024-07-12 17:13:29.272895] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.771 [2024-07-12 17:13:29.273269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.771 [2024-07-12 17:13:29.273296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.772 [2024-07-12 17:13:29.279436] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.772 [2024-07-12 17:13:29.279817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.772 [2024-07-12 17:13:29.279846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.772 [2024-07-12 17:13:29.286474] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.772 [2024-07-12 17:13:29.286804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.772 [2024-07-12 17:13:29.286833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.772 [2024-07-12 17:13:29.292834] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.772 [2024-07-12 17:13:29.293166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.772 [2024-07-12 17:13:29.293193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.772 [2024-07-12 17:13:29.299511] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.772 [2024-07-12 17:13:29.299821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.772 [2024-07-12 17:13:29.299849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.772 [2024-07-12 17:13:29.306454] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.772 [2024-07-12 17:13:29.306779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.772 [2024-07-12 17:13:29.306807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.772 [2024-07-12 17:13:29.312807] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.772 [2024-07-12 17:13:29.313125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.772 [2024-07-12 17:13:29.313152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.772 [2024-07-12 17:13:29.319181] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.772 [2024-07-12 17:13:29.319478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.772 [2024-07-12 17:13:29.319504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.772 [2024-07-12 17:13:29.325401] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.772 [2024-07-12 17:13:29.325696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.772 [2024-07-12 17:13:29.325722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.772 [2024-07-12 17:13:29.331637] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.772 [2024-07-12 17:13:29.331965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.772 [2024-07-12 17:13:29.331993] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.772 [2024-07-12 17:13:29.337990] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.772 [2024-07-12 17:13:29.338290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.772 [2024-07-12 17:13:29.338317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.772 [2024-07-12 17:13:29.344341] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.772 [2024-07-12 17:13:29.344639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.772 [2024-07-12 17:13:29.344666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.772 [2024-07-12 17:13:29.350573] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.772 [2024-07-12 17:13:29.350949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.772 [2024-07-12 17:13:29.350991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.772 [2024-07-12 17:13:29.357147] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.772 [2024-07-12 17:13:29.357499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.772 [2024-07-12 17:13:29.357526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.772 [2024-07-12 17:13:29.363603] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.772 [2024-07-12 17:13:29.363923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.772 [2024-07-12 17:13:29.363960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.772 [2024-07-12 17:13:29.370124] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.772 [2024-07-12 17:13:29.370420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.772 [2024-07-12 17:13:29.370447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.772 [2024-07-12 17:13:29.376550] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.772 [2024-07-12 17:13:29.376929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.772 
[2024-07-12 17:13:29.376972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.772 [2024-07-12 17:13:29.383135] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.772 [2024-07-12 17:13:29.383446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.772 [2024-07-12 17:13:29.383473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.772 [2024-07-12 17:13:29.389543] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.772 [2024-07-12 17:13:29.389867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.772 [2024-07-12 17:13:29.389895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.772 [2024-07-12 17:13:29.396123] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.772 [2024-07-12 17:13:29.396419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.772 [2024-07-12 17:13:29.396446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.772 [2024-07-12 17:13:29.402436] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.772 [2024-07-12 17:13:29.402751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.772 [2024-07-12 17:13:29.402779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.772 [2024-07-12 17:13:29.408927] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.772 [2024-07-12 17:13:29.409240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.772 [2024-07-12 17:13:29.409267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.772 [2024-07-12 17:13:29.415343] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.772 [2024-07-12 17:13:29.415730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.772 [2024-07-12 17:13:29.415765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.772 [2024-07-12 17:13:29.421821] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.772 [2024-07-12 17:13:29.422127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.772 [2024-07-12 17:13:29.422154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.772 [2024-07-12 17:13:29.428352] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.772 [2024-07-12 17:13:29.428731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.772 [2024-07-12 17:13:29.428766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.772 [2024-07-12 17:13:29.435113] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.772 [2024-07-12 17:13:29.435463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.772 [2024-07-12 17:13:29.435489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.772 [2024-07-12 17:13:29.441663] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.772 [2024-07-12 17:13:29.442036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.772 [2024-07-12 17:13:29.442064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.772 [2024-07-12 17:13:29.448360] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.772 [2024-07-12 17:13:29.448713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.772 [2024-07-12 17:13:29.448762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.772 [2024-07-12 17:13:29.455095] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.772 [2024-07-12 17:13:29.455394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.772 [2024-07-12 17:13:29.455421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.772 [2024-07-12 17:13:29.461853] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:29.773 [2024-07-12 17:13:29.462184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.773 [2024-07-12 17:13:29.462211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.031 [2024-07-12 17:13:29.469000] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.031 [2024-07-12 17:13:29.469306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.031 [2024-07-12 17:13:29.469334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.031 [2024-07-12 17:13:29.475504] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.032 [2024-07-12 17:13:29.475818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.032 [2024-07-12 17:13:29.475852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.032 [2024-07-12 17:13:29.482143] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.032 [2024-07-12 17:13:29.482440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.032 [2024-07-12 17:13:29.482467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.032 [2024-07-12 17:13:29.489924] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.032 [2024-07-12 17:13:29.490255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.032 [2024-07-12 17:13:29.490282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.032 [2024-07-12 17:13:29.497059] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.032 [2024-07-12 17:13:29.497366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.032 [2024-07-12 17:13:29.497393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.032 [2024-07-12 17:13:29.503680] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.032 [2024-07-12 17:13:29.503981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.032 [2024-07-12 17:13:29.504010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.032 [2024-07-12 17:13:29.510369] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.032 [2024-07-12 17:13:29.510748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.032 [2024-07-12 17:13:29.510775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.032 [2024-07-12 17:13:29.517082] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.032 [2024-07-12 17:13:29.517384] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.032 [2024-07-12 17:13:29.517411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.032 [2024-07-12 17:13:29.523653] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.032 [2024-07-12 17:13:29.523979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.032 [2024-07-12 17:13:29.524007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.032 [2024-07-12 17:13:29.531258] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.032 [2024-07-12 17:13:29.531628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.032 [2024-07-12 17:13:29.531656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.032 [2024-07-12 17:13:29.539173] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.032 [2024-07-12 17:13:29.539489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.032 [2024-07-12 17:13:29.539517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.032 [2024-07-12 17:13:29.545817] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.032 [2024-07-12 17:13:29.546134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.032 [2024-07-12 17:13:29.546161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.032 [2024-07-12 17:13:29.552573] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.032 [2024-07-12 17:13:29.552898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.032 [2024-07-12 17:13:29.552926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.032 [2024-07-12 17:13:29.559271] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.032 [2024-07-12 17:13:29.559568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.032 [2024-07-12 17:13:29.559594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.032 [2024-07-12 17:13:29.565747] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.032 
[2024-07-12 17:13:29.566066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.032 [2024-07-12 17:13:29.566093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.032 [2024-07-12 17:13:29.572427] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.032 [2024-07-12 17:13:29.572750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.032 [2024-07-12 17:13:29.572777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.032 [2024-07-12 17:13:29.579101] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.032 [2024-07-12 17:13:29.579403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.032 [2024-07-12 17:13:29.579430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.032 [2024-07-12 17:13:29.585534] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.032 [2024-07-12 17:13:29.585856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.032 [2024-07-12 17:13:29.585884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.032 [2024-07-12 17:13:29.592180] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.032 [2024-07-12 17:13:29.592478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.032 [2024-07-12 17:13:29.592504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.032 [2024-07-12 17:13:29.598707] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.032 [2024-07-12 17:13:29.599105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.032 [2024-07-12 17:13:29.599131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.032 [2024-07-12 17:13:29.605421] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.032 [2024-07-12 17:13:29.605800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.032 [2024-07-12 17:13:29.605843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.032 [2024-07-12 17:13:29.612112] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.032 [2024-07-12 17:13:29.612408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.032 [2024-07-12 17:13:29.612434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.032 [2024-07-12 17:13:29.618539] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.032 [2024-07-12 17:13:29.618861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.032 [2024-07-12 17:13:29.618889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.032 [2024-07-12 17:13:29.625050] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.032 [2024-07-12 17:13:29.625347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.032 [2024-07-12 17:13:29.625373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.032 [2024-07-12 17:13:29.631529] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.032 [2024-07-12 17:13:29.631862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.032 [2024-07-12 17:13:29.631889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.032 [2024-07-12 17:13:29.638172] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.032 [2024-07-12 17:13:29.638468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.032 [2024-07-12 17:13:29.638494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.032 [2024-07-12 17:13:29.644783] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.032 [2024-07-12 17:13:29.645103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.032 [2024-07-12 17:13:29.645130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.032 [2024-07-12 17:13:29.651349] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.032 [2024-07-12 17:13:29.651644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.032 [2024-07-12 17:13:29.651677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.032 [2024-07-12 17:13:29.658790] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.032 [2024-07-12 17:13:29.659166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.032 [2024-07-12 17:13:29.659206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.032 [2024-07-12 17:13:29.666689] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.033 [2024-07-12 17:13:29.667014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.033 [2024-07-12 17:13:29.667042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.033 [2024-07-12 17:13:29.673534] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.033 [2024-07-12 17:13:29.673859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.033 [2024-07-12 17:13:29.673887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.033 [2024-07-12 17:13:29.680479] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.033 [2024-07-12 17:13:29.680820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.033 [2024-07-12 17:13:29.680848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.033 [2024-07-12 17:13:29.686708] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.033 [2024-07-12 17:13:29.687037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.033 [2024-07-12 17:13:29.687066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.033 [2024-07-12 17:13:29.692511] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.033 [2024-07-12 17:13:29.692837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.033 [2024-07-12 17:13:29.692866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.033 [2024-07-12 17:13:29.698583] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.033 [2024-07-12 17:13:29.698922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.033 [2024-07-12 17:13:29.698951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
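Note on the repeated "Data digest error" records above: the NVMe/TCP data digest (DDGST) is a CRC32C over the PDU payload, and each mismatch reported by data_crc32_calc_done is followed by the affected WRITE command completing with a transient transport error. The sketch below is an illustrative, self-contained software CRC32C check of that kind; it is not SPDK's implementation, and the buffer and digest values are hypothetical placeholders.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Illustrative software CRC32C (Castagnoli, reflected poly 0x82F63B78),
 * the algorithm used for the NVMe/TCP data digest (DDGST). Not SPDK code. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int)(crc & 1u));
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    uint8_t payload[32] = {0};        /* hypothetical PDU data */
    uint32_t received_ddgst = 0;      /* hypothetical digest carried in the PDU */
    uint32_t computed = crc32c(payload, sizeof(payload));

    if (computed != received_ddgst) {
        /* Corresponds to the condition data_crc32_calc_done() reports above:
         * the request is failed with a transient transport error, DNR=0. */
        printf("Data digest error: computed=0x%08x received=0x%08x\n",
               computed, received_ddgst);
    }
    return 0;
}
```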
00:24:30.033 [2024-07-12 17:13:29.705413] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.033 [2024-07-12 17:13:29.705699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.033 [2024-07-12 17:13:29.705748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.033 [2024-07-12 17:13:29.713608] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.033 [2024-07-12 17:13:29.713995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.033 [2024-07-12 17:13:29.714041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.033 [2024-07-12 17:13:29.721323] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.033 [2024-07-12 17:13:29.721747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.033 [2024-07-12 17:13:29.721790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.291 [2024-07-12 17:13:29.729126] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.291 [2024-07-12 17:13:29.729510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.291 [2024-07-12 17:13:29.729538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.291 [2024-07-12 17:13:29.735782] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.291 [2024-07-12 17:13:29.736106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.291 [2024-07-12 17:13:29.736133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.291 [2024-07-12 17:13:29.742380] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.291 [2024-07-12 17:13:29.742690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.291 [2024-07-12 17:13:29.742716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.291 [2024-07-12 17:13:29.749398] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.291 [2024-07-12 17:13:29.749512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.291 [2024-07-12 17:13:29.749540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.291 [2024-07-12 17:13:29.757091] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.291 [2024-07-12 17:13:29.757422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.291 [2024-07-12 17:13:29.757450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.291 [2024-07-12 17:13:29.766044] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.291 [2024-07-12 17:13:29.766329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.291 [2024-07-12 17:13:29.766356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.291 [2024-07-12 17:13:29.774433] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.291 [2024-07-12 17:13:29.774832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.291 [2024-07-12 17:13:29.774874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.291 [2024-07-12 17:13:29.782133] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.291 [2024-07-12 17:13:29.782510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.291 [2024-07-12 17:13:29.782537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.291 [2024-07-12 17:13:29.789141] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.291 [2024-07-12 17:13:29.789446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.291 [2024-07-12 17:13:29.789474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.291 [2024-07-12 17:13:29.797251] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.291 [2024-07-12 17:13:29.797562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.291 [2024-07-12 17:13:29.797589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.291 [2024-07-12 17:13:29.804660] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.291 [2024-07-12 17:13:29.805071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.291 [2024-07-12 17:13:29.805099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.291 [2024-07-12 17:13:29.813244] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.291 [2024-07-12 17:13:29.813537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.291 [2024-07-12 17:13:29.813566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.291 [2024-07-12 17:13:29.821423] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.291 [2024-07-12 17:13:29.821720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.291 [2024-07-12 17:13:29.821768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.291 [2024-07-12 17:13:29.829618] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.291 [2024-07-12 17:13:29.829946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.291 [2024-07-12 17:13:29.829974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.291 [2024-07-12 17:13:29.836941] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.291 [2024-07-12 17:13:29.837298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.291 [2024-07-12 17:13:29.837326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.291 [2024-07-12 17:13:29.844542] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.291 [2024-07-12 17:13:29.844883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.291 [2024-07-12 17:13:29.844918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.291 [2024-07-12 17:13:29.851095] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.291 [2024-07-12 17:13:29.851226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.291 [2024-07-12 17:13:29.851258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.291 [2024-07-12 17:13:29.858170] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.291 [2024-07-12 17:13:29.858472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.291 [2024-07-12 17:13:29.858499] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.291 [2024-07-12 17:13:29.864935] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.291 [2024-07-12 17:13:29.865296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.291 [2024-07-12 17:13:29.865324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.291 [2024-07-12 17:13:29.871794] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.291 [2024-07-12 17:13:29.872105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.291 [2024-07-12 17:13:29.872133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.291 [2024-07-12 17:13:29.878500] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.291 [2024-07-12 17:13:29.878823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.291 [2024-07-12 17:13:29.878853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.291 [2024-07-12 17:13:29.885352] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.291 [2024-07-12 17:13:29.885645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.292 [2024-07-12 17:13:29.885673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.292 [2024-07-12 17:13:29.892098] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.292 [2024-07-12 17:13:29.892372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.292 [2024-07-12 17:13:29.892399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.292 [2024-07-12 17:13:29.898825] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.292 [2024-07-12 17:13:29.899005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.292 [2024-07-12 17:13:29.899046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.292 [2024-07-12 17:13:29.906647] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.292 [2024-07-12 17:13:29.906854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.292 
[2024-07-12 17:13:29.906882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.292 [2024-07-12 17:13:29.914931] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.292 [2024-07-12 17:13:29.915246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.292 [2024-07-12 17:13:29.915274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.292 [2024-07-12 17:13:29.923509] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.292 [2024-07-12 17:13:29.923823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.292 [2024-07-12 17:13:29.923852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.292 [2024-07-12 17:13:29.931404] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.292 [2024-07-12 17:13:29.931702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.292 [2024-07-12 17:13:29.931760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.292 [2024-07-12 17:13:29.939521] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.292 [2024-07-12 17:13:29.939830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.292 [2024-07-12 17:13:29.939860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.292 [2024-07-12 17:13:29.945825] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.292 [2024-07-12 17:13:29.946133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.292 [2024-07-12 17:13:29.946160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.292 [2024-07-12 17:13:29.951549] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.292 [2024-07-12 17:13:29.951860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.292 [2024-07-12 17:13:29.951888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.292 [2024-07-12 17:13:29.957556] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.292 [2024-07-12 17:13:29.957871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.292 [2024-07-12 17:13:29.957899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.292 [2024-07-12 17:13:29.963685] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.292 [2024-07-12 17:13:29.964014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.292 [2024-07-12 17:13:29.964050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.292 [2024-07-12 17:13:29.969985] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.292 [2024-07-12 17:13:29.970289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.292 [2024-07-12 17:13:29.970316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.292 [2024-07-12 17:13:29.975651] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.292 [2024-07-12 17:13:29.975963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.292 [2024-07-12 17:13:29.975991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.292 [2024-07-12 17:13:29.981404] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.292 [2024-07-12 17:13:29.981749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.292 [2024-07-12 17:13:29.981778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.550 [2024-07-12 17:13:29.987398] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.550 [2024-07-12 17:13:29.987711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.550 [2024-07-12 17:13:29.987748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.550 [2024-07-12 17:13:29.993977] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.550 [2024-07-12 17:13:29.994324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.550 [2024-07-12 17:13:29.994351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.550 [2024-07-12 17:13:30.000667] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.550 [2024-07-12 17:13:30.000978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.550 [2024-07-12 17:13:30.001007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.550 [2024-07-12 17:13:30.007232] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.550 [2024-07-12 17:13:30.007580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.550 [2024-07-12 17:13:30.007613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.550 [2024-07-12 17:13:30.015004] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.550 [2024-07-12 17:13:30.015343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.550 [2024-07-12 17:13:30.015381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.550 [2024-07-12 17:13:30.022038] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.550 [2024-07-12 17:13:30.022441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.550 [2024-07-12 17:13:30.022474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.550 [2024-07-12 17:13:30.029798] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.550 [2024-07-12 17:13:30.030107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.550 [2024-07-12 17:13:30.030141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.550 [2024-07-12 17:13:30.036267] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.550 [2024-07-12 17:13:30.036595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.550 [2024-07-12 17:13:30.036627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.550 [2024-07-12 17:13:30.043123] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.550 [2024-07-12 17:13:30.043520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.550 [2024-07-12 17:13:30.043550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.550 [2024-07-12 17:13:30.049991] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.550 [2024-07-12 17:13:30.050312] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.550 [2024-07-12 17:13:30.050343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.550 [2024-07-12 17:13:30.056793] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.550 [2024-07-12 17:13:30.057130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.550 [2024-07-12 17:13:30.057179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.550 [2024-07-12 17:13:30.065365] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.550 [2024-07-12 17:13:30.065671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.550 [2024-07-12 17:13:30.065700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.550 [2024-07-12 17:13:30.072145] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.550 [2024-07-12 17:13:30.072503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.550 [2024-07-12 17:13:30.072533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.550 [2024-07-12 17:13:30.078638] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.550 [2024-07-12 17:13:30.078979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.550 [2024-07-12 17:13:30.079009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.550 [2024-07-12 17:13:30.085172] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.550 [2024-07-12 17:13:30.085534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.550 [2024-07-12 17:13:30.085564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.550 [2024-07-12 17:13:30.091520] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.550 [2024-07-12 17:13:30.091866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.550 [2024-07-12 17:13:30.091895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.550 [2024-07-12 17:13:30.097936] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.551 
[2024-07-12 17:13:30.098250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.551 [2024-07-12 17:13:30.098278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.551 [2024-07-12 17:13:30.104563] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.551 [2024-07-12 17:13:30.104917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.551 [2024-07-12 17:13:30.104946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.551 [2024-07-12 17:13:30.111266] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.551 [2024-07-12 17:13:30.111621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.551 [2024-07-12 17:13:30.111651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.551 [2024-07-12 17:13:30.118762] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.551 [2024-07-12 17:13:30.119083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.551 [2024-07-12 17:13:30.119113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.551 [2024-07-12 17:13:30.125936] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.551 [2024-07-12 17:13:30.126244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.551 [2024-07-12 17:13:30.126271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.551 [2024-07-12 17:13:30.132462] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.551 [2024-07-12 17:13:30.132792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.551 [2024-07-12 17:13:30.132821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.551 [2024-07-12 17:13:30.140207] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.551 [2024-07-12 17:13:30.140519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.551 [2024-07-12 17:13:30.140552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.551 [2024-07-12 17:13:30.147260] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.551 [2024-07-12 17:13:30.147562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.551 [2024-07-12 17:13:30.147606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.551 [2024-07-12 17:13:30.153513] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.551 [2024-07-12 17:13:30.153875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.551 [2024-07-12 17:13:30.153904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.551 [2024-07-12 17:13:30.159920] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.551 [2024-07-12 17:13:30.160280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.551 [2024-07-12 17:13:30.160307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.551 [2024-07-12 17:13:30.166238] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.551 [2024-07-12 17:13:30.166557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.551 [2024-07-12 17:13:30.166584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.551 [2024-07-12 17:13:30.172293] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.551 [2024-07-12 17:13:30.172619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.551 [2024-07-12 17:13:30.172648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.551 [2024-07-12 17:13:30.179226] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.551 [2024-07-12 17:13:30.179522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.551 [2024-07-12 17:13:30.179560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.551 [2024-07-12 17:13:30.186340] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.551 [2024-07-12 17:13:30.186594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.551 [2024-07-12 17:13:30.186636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.551 [2024-07-12 17:13:30.192994] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.551 [2024-07-12 17:13:30.193276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.551 [2024-07-12 17:13:30.193305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.551 [2024-07-12 17:13:30.199821] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.551 [2024-07-12 17:13:30.200149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.551 [2024-07-12 17:13:30.200177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.551 [2024-07-12 17:13:30.206309] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.551 [2024-07-12 17:13:30.206590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.551 [2024-07-12 17:13:30.206617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.551 [2024-07-12 17:13:30.212861] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.551 [2024-07-12 17:13:30.213145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.551 [2024-07-12 17:13:30.213173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.551 [2024-07-12 17:13:30.219447] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.551 [2024-07-12 17:13:30.219748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.551 [2024-07-12 17:13:30.219787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.551 [2024-07-12 17:13:30.226309] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.551 [2024-07-12 17:13:30.226576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.551 [2024-07-12 17:13:30.226603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.551 [2024-07-12 17:13:30.232592] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.551 [2024-07-12 17:13:30.232885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.551 [2024-07-12 17:13:30.232914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
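For reference, the "(00/22) ... p:0 m:0 dnr:0" completions printed throughout this run decode to status code type 0x0 (generic) with status code 0x22 (Transient Transport Error) and the Do Not Retry bit clear, so the host may retry each failed WRITE. The following minimal decode of the 16-bit completion Status Field is an assumption-labeled illustration (field names and the example raw value are mine, not taken from the log):

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative decode of the NVMe completion Status Field into the same
 * fields the spdk_nvme_print_completion lines report (sct/sc, p, m, dnr). */
int main(void)
{
    /* Example raw value: SCT=0x0, SC=0x22, P=0, M=0, DNR=0 -> "(00/22) ... dnr:0". */
    uint16_t raw = (uint16_t)((0x0u << 9) | (0x22u << 1));

    unsigned p   =  raw        & 0x1;   /* phase tag */
    unsigned sc  = (raw >> 1)  & 0xFF;  /* status code */
    unsigned sct = (raw >> 9)  & 0x7;   /* status code type */
    unsigned m   = (raw >> 14) & 0x1;   /* more */
    unsigned dnr = (raw >> 15) & 0x1;   /* do not retry */

    printf("sct=0x%02x sc=0x%02x p=%u m=%u dnr=%u (retryable=%s)\n",
           sct, sc, p, m, dnr, dnr ? "no" : "yes");
    return 0;
}
```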
00:24:30.551 [2024-07-12 17:13:30.239921] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.551 [2024-07-12 17:13:30.240286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.551 [2024-07-12 17:13:30.240314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.810 [2024-07-12 17:13:30.247969] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.810 [2024-07-12 17:13:30.248273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.810 [2024-07-12 17:13:30.248302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.810 [2024-07-12 17:13:30.255479] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.810 [2024-07-12 17:13:30.255818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.810 [2024-07-12 17:13:30.255845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.810 [2024-07-12 17:13:30.263805] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.810 [2024-07-12 17:13:30.264117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.810 [2024-07-12 17:13:30.264144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.810 [2024-07-12 17:13:30.271215] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.810 [2024-07-12 17:13:30.271511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.810 [2024-07-12 17:13:30.271537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.810 [2024-07-12 17:13:30.279414] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.810 [2024-07-12 17:13:30.279800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.810 [2024-07-12 17:13:30.279834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.810 [2024-07-12 17:13:30.287144] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.810 [2024-07-12 17:13:30.287402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.810 [2024-07-12 17:13:30.287444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.810 [2024-07-12 17:13:30.293768] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.810 [2024-07-12 17:13:30.294053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.810 [2024-07-12 17:13:30.294080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.810 [2024-07-12 17:13:30.300025] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.810 [2024-07-12 17:13:30.300297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.810 [2024-07-12 17:13:30.300323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.810 [2024-07-12 17:13:30.306328] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.810 [2024-07-12 17:13:30.306588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.810 [2024-07-12 17:13:30.306614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.810 [2024-07-12 17:13:30.312569] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.810 [2024-07-12 17:13:30.312856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.810 [2024-07-12 17:13:30.312883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.810 [2024-07-12 17:13:30.318874] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.810 [2024-07-12 17:13:30.319151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.810 [2024-07-12 17:13:30.319182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.810 [2024-07-12 17:13:30.325664] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.810 [2024-07-12 17:13:30.325948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.810 [2024-07-12 17:13:30.325975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.810 [2024-07-12 17:13:30.333511] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.810 [2024-07-12 17:13:30.333821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.810 [2024-07-12 17:13:30.333848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.810 [2024-07-12 17:13:30.341822] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.810 [2024-07-12 17:13:30.342160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.810 [2024-07-12 17:13:30.342200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.810 [2024-07-12 17:13:30.350002] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.810 [2024-07-12 17:13:30.350302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.810 [2024-07-12 17:13:30.350328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.810 [2024-07-12 17:13:30.356658] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.810 [2024-07-12 17:13:30.356942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.810 [2024-07-12 17:13:30.356969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.810 [2024-07-12 17:13:30.363140] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.810 [2024-07-12 17:13:30.363405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.810 [2024-07-12 17:13:30.363431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.810 [2024-07-12 17:13:30.371128] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.810 [2024-07-12 17:13:30.371477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.810 [2024-07-12 17:13:30.371502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.810 [2024-07-12 17:13:30.378609] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.810 [2024-07-12 17:13:30.378934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.810 [2024-07-12 17:13:30.378961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.810 [2024-07-12 17:13:30.386376] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.810 [2024-07-12 17:13:30.386697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.810 [2024-07-12 17:13:30.386723] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.810 [2024-07-12 17:13:30.394176] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.810 [2024-07-12 17:13:30.394440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.810 [2024-07-12 17:13:30.394466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.810 [2024-07-12 17:13:30.401335] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.810 [2024-07-12 17:13:30.401595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.811 [2024-07-12 17:13:30.401622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.811 [2024-07-12 17:13:30.408530] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.811 [2024-07-12 17:13:30.408815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.811 [2024-07-12 17:13:30.408843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.811 [2024-07-12 17:13:30.416005] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.811 [2024-07-12 17:13:30.416253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.811 [2024-07-12 17:13:30.416279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.811 [2024-07-12 17:13:30.423220] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.811 [2024-07-12 17:13:30.423480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.811 [2024-07-12 17:13:30.423506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.811 [2024-07-12 17:13:30.430347] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.811 [2024-07-12 17:13:30.430610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.811 [2024-07-12 17:13:30.430635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.811 [2024-07-12 17:13:30.437487] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.811 [2024-07-12 17:13:30.437801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.811 
[2024-07-12 17:13:30.437828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.811 [2024-07-12 17:13:30.444408] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.811 [2024-07-12 17:13:30.444670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.811 [2024-07-12 17:13:30.444695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.811 [2024-07-12 17:13:30.450860] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.811 [2024-07-12 17:13:30.451144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.811 [2024-07-12 17:13:30.451170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.811 [2024-07-12 17:13:30.458195] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.811 [2024-07-12 17:13:30.458460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.811 [2024-07-12 17:13:30.458485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.811 [2024-07-12 17:13:30.465209] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.811 [2024-07-12 17:13:30.465480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.811 [2024-07-12 17:13:30.465506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.811 [2024-07-12 17:13:30.471577] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.811 [2024-07-12 17:13:30.471864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.811 [2024-07-12 17:13:30.471891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.811 [2024-07-12 17:13:30.477758] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.811 [2024-07-12 17:13:30.478033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.811 [2024-07-12 17:13:30.478059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.811 [2024-07-12 17:13:30.483978] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.811 [2024-07-12 17:13:30.484261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.811 [2024-07-12 17:13:30.484287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.811 [2024-07-12 17:13:30.490328] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.811 [2024-07-12 17:13:30.490586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.811 [2024-07-12 17:13:30.490612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.811 [2024-07-12 17:13:30.496524] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:30.811 [2024-07-12 17:13:30.496806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.811 [2024-07-12 17:13:30.496833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.811 [2024-07-12 17:13:30.503313] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:31.069 [2024-07-12 17:13:30.503600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.069 [2024-07-12 17:13:30.503628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.069 [2024-07-12 17:13:30.510322] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:31.069 [2024-07-12 17:13:30.510582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.069 [2024-07-12 17:13:30.510607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.069 [2024-07-12 17:13:30.517851] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:31.069 [2024-07-12 17:13:30.518132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.069 [2024-07-12 17:13:30.518158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.069 [2024-07-12 17:13:30.524945] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:31.069 [2024-07-12 17:13:30.525228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.069 [2024-07-12 17:13:30.525254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.069 [2024-07-12 17:13:30.532091] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:31.069 [2024-07-12 17:13:30.532352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.069 [2024-07-12 17:13:30.532378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.069 [2024-07-12 17:13:30.539158] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:31.069 [2024-07-12 17:13:30.539433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.069 [2024-07-12 17:13:30.539460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.069 [2024-07-12 17:13:30.547425] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:31.069 [2024-07-12 17:13:30.547700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.069 [2024-07-12 17:13:30.547750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.069 [2024-07-12 17:13:30.554199] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:31.069 [2024-07-12 17:13:30.554461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.069 [2024-07-12 17:13:30.554486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.069 [2024-07-12 17:13:30.560400] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:31.069 [2024-07-12 17:13:30.560663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.069 [2024-07-12 17:13:30.560689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.069 [2024-07-12 17:13:30.566493] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:31.069 [2024-07-12 17:13:30.566778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.069 [2024-07-12 17:13:30.566804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.069 [2024-07-12 17:13:30.572758] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:31.069 [2024-07-12 17:13:30.573039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.069 [2024-07-12 17:13:30.573067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.069 [2024-07-12 17:13:30.579103] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:31.069 [2024-07-12 17:13:30.579366] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.069 [2024-07-12 17:13:30.579393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.069 [2024-07-12 17:13:30.585207] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:31.069 [2024-07-12 17:13:30.585471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.069 [2024-07-12 17:13:30.585496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.069 [2024-07-12 17:13:30.591468] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:31.069 [2024-07-12 17:13:30.591750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.069 [2024-07-12 17:13:30.591777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.069 [2024-07-12 17:13:30.597813] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:31.069 [2024-07-12 17:13:30.598101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.070 [2024-07-12 17:13:30.598127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.070 [2024-07-12 17:13:30.604079] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:31.070 [2024-07-12 17:13:30.604341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.070 [2024-07-12 17:13:30.604366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.070 [2024-07-12 17:13:30.610144] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:31.070 [2024-07-12 17:13:30.610404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.070 [2024-07-12 17:13:30.610430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.070 [2024-07-12 17:13:30.616137] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:31.070 [2024-07-12 17:13:30.616399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.070 [2024-07-12 17:13:30.616433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.070 [2024-07-12 17:13:30.622312] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:31.070 
[2024-07-12 17:13:30.622576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.070 [2024-07-12 17:13:30.622602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.070 [2024-07-12 17:13:30.628372] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:31.070 [2024-07-12 17:13:30.628649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.070 [2024-07-12 17:13:30.628676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.070 [2024-07-12 17:13:30.634542] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:31.070 [2024-07-12 17:13:30.634845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.070 [2024-07-12 17:13:30.634873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.070 [2024-07-12 17:13:30.640679] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:31.070 [2024-07-12 17:13:30.640966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.070 [2024-07-12 17:13:30.640992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.070 [2024-07-12 17:13:30.646978] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:31.070 [2024-07-12 17:13:30.647260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.070 [2024-07-12 17:13:30.647286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.070 [2024-07-12 17:13:30.653921] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:31.070 [2024-07-12 17:13:30.654220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.070 [2024-07-12 17:13:30.654246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.070 [2024-07-12 17:13:30.660956] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:31.070 [2024-07-12 17:13:30.661234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.070 [2024-07-12 17:13:30.661259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.070 [2024-07-12 17:13:30.668152] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:31.070 [2024-07-12 17:13:30.668412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.070 [2024-07-12 17:13:30.668438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.070 [2024-07-12 17:13:30.675783] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:31.070 [2024-07-12 17:13:30.676058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.070 [2024-07-12 17:13:30.676085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.070 [2024-07-12 17:13:30.682941] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:31.070 [2024-07-12 17:13:30.683218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.070 [2024-07-12 17:13:30.683243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.070 [2024-07-12 17:13:30.690468] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:31.070 [2024-07-12 17:13:30.690773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.070 [2024-07-12 17:13:30.690815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.070 [2024-07-12 17:13:30.696484] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:31.070 [2024-07-12 17:13:30.696759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.070 [2024-07-12 17:13:30.696787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.070 [2024-07-12 17:13:30.702544] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:31.070 [2024-07-12 17:13:30.703096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.070 [2024-07-12 17:13:30.703122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.070 [2024-07-12 17:13:30.709379] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:31.070 [2024-07-12 17:13:30.709651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.070 [2024-07-12 17:13:30.709678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.070 [2024-07-12 17:13:30.715040] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:31.070 [2024-07-12 17:13:30.715289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.070 [2024-07-12 17:13:30.715315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.070 [2024-07-12 17:13:30.720566] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:31.070 [2024-07-12 17:13:30.720856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.070 [2024-07-12 17:13:30.720884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.070 [2024-07-12 17:13:30.726187] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:31.070 [2024-07-12 17:13:30.726437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.070 [2024-07-12 17:13:30.726463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.070 [2024-07-12 17:13:30.732108] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:31.070 [2024-07-12 17:13:30.732356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.070 [2024-07-12 17:13:30.732382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.070 [2024-07-12 17:13:30.738187] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:31.070 [2024-07-12 17:13:30.738448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.070 [2024-07-12 17:13:30.738474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.070 [2024-07-12 17:13:30.744252] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:31.070 [2024-07-12 17:13:30.744502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.070 [2024-07-12 17:13:30.744528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.070 [2024-07-12 17:13:30.751413] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:31.070 [2024-07-12 17:13:30.751672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.070 [2024-07-12 17:13:30.751698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:24:31.070 [2024-07-12 17:13:30.757071] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:31.070 [2024-07-12 17:13:30.757328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.070 [2024-07-12 17:13:30.757355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.328 [2024-07-12 17:13:30.763506] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:31.328 [2024-07-12 17:13:30.763815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.328 [2024-07-12 17:13:30.763845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.328 [2024-07-12 17:13:30.769844] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x26c82e0) with pdu=0x2000190fef90 00:24:31.328 [2024-07-12 17:13:30.769992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.328 [2024-07-12 17:13:30.770032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.328 00:24:31.328 Latency(us) 00:24:31.328 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:31.328 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:31.328 nvme0n1 : 2.00 4536.82 567.10 0.00 0.00 3518.65 2621.44 10145.94 00:24:31.328 =================================================================================================================== 00:24:31.328 Total : 4536.82 567.10 0.00 0.00 3518.65 2621.44 10145.94 00:24:31.328 0 00:24:31.328 17:13:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:31.328 17:13:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:31.328 17:13:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:31.328 | .driver_specific 00:24:31.328 | .nvme_error 00:24:31.328 | .status_code 00:24:31.328 | .command_transient_transport_error' 00:24:31.328 17:13:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:31.585 17:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 293 > 0 )) 00:24:31.585 17:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1222538 00:24:31.585 17:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1222538 ']' 00:24:31.585 17:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1222538 00:24:31.585 17:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:31.586 17:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:31.586 17:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1222538 00:24:31.586 17:13:31 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:31.586 17:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:31.586 17:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1222538' 00:24:31.586 killing process with pid 1222538 00:24:31.586 17:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1222538 00:24:31.586 Received shutdown signal, test time was about 2.000000 seconds 00:24:31.586 00:24:31.586 Latency(us) 00:24:31.586 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:31.586 =================================================================================================================== 00:24:31.586 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:31.586 17:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1222538 00:24:31.843 17:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1221157 00:24:31.843 17:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1221157 ']' 00:24:31.843 17:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1221157 00:24:31.843 17:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:31.843 17:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:31.843 17:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1221157 00:24:31.843 17:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:31.843 17:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:31.843 17:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1221157' 00:24:31.843 killing process with pid 1221157 00:24:31.843 17:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1221157 00:24:31.843 17:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1221157 00:24:32.101 00:24:32.101 real 0m15.270s 00:24:32.101 user 0m29.322s 00:24:32.101 sys 0m5.182s 00:24:32.101 17:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:32.101 17:13:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:32.101 ************************************ 00:24:32.101 END TEST nvmf_digest_error 00:24:32.101 ************************************ 00:24:32.101 17:13:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:24:32.101 17:13:31 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:24:32.101 17:13:31 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:24:32.101 17:13:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:32.101 17:13:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:24:32.101 17:13:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:32.101 17:13:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:24:32.101 17:13:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:32.101 17:13:31 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:32.101 rmmod nvme_tcp 00:24:32.101 rmmod nvme_fabrics 00:24:32.101 rmmod nvme_keyring 00:24:32.101 17:13:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:32.101 17:13:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:24:32.101 17:13:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:24:32.101 17:13:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1221157 ']' 00:24:32.101 17:13:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1221157 00:24:32.101 17:13:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 1221157 ']' 00:24:32.101 17:13:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 1221157 00:24:32.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1221157) - No such process 00:24:32.101 17:13:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 1221157 is not found' 00:24:32.101 Process with pid 1221157 is not found 00:24:32.101 17:13:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:32.101 17:13:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:32.101 17:13:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:32.101 17:13:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:32.101 17:13:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:32.101 17:13:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:32.101 17:13:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:32.101 17:13:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:34.631 17:13:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:34.631 00:24:34.631 real 0m35.183s 00:24:34.631 user 1m0.062s 00:24:34.631 sys 0m11.850s 00:24:34.631 17:13:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:34.631 17:13:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:34.631 ************************************ 00:24:34.631 END TEST nvmf_digest 00:24:34.631 ************************************ 00:24:34.632 17:13:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:34.632 17:13:33 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:24:34.632 17:13:33 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:24:34.632 17:13:33 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:24:34.632 17:13:33 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:24:34.632 17:13:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:34.632 17:13:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:34.632 17:13:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:34.632 ************************************ 00:24:34.632 START TEST nvmf_bdevperf 00:24:34.632 ************************************ 00:24:34.632 17:13:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:24:34.632 * Looking for test storage... 
00:24:34.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:34.632 17:13:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:34.632 17:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:24:34.632 17:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:34.632 17:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:34.632 17:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:34.632 17:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:34.632 17:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:34.632 17:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:34.632 17:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:34.632 17:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:34.632 17:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:34.632 17:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:34.632 17:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:34.632 17:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:34.632 17:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:34.632 17:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:34.632 17:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:34.632 17:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:34.632 17:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:34.632 17:13:33 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:34.632 17:13:33 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:34.632 17:13:33 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:34.632 17:13:33 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.632 17:13:33 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.632 17:13:33 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.632 17:13:33 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:24:34.632 17:13:33 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.632 17:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:24:34.632 17:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:34.632 17:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:34.632 17:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:34.632 17:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:34.632 17:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:34.632 17:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:34.632 17:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:34.632 17:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:34.632 17:13:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:34.632 17:13:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:34.632 17:13:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:24:34.632 17:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:34.632 17:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:34.632 17:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:34.632 17:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:34.632 17:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:34.632 17:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:34.632 17:13:33 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:34.632 17:13:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:34.632 17:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:34.632 17:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:34.632 17:13:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:24:34.632 17:13:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:36.529 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:36.529 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:36.529 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:36.529 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:36.529 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:36.529 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:36.529 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:36.529 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:24:36.529 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:36.529 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:24:36.529 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:24:36.529 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:24:36.529 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:24:36.529 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:24:36.529 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:36.529 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:36.529 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:36.529 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:36.529 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:36.529 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:36.529 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:36.529 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:36.529 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:36.529 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:36.529 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:36.529 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:36.529 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:36.529 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:36.529 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:36.529 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:36.529 17:13:35 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:36.529 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:36.529 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:36.529 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:36.529 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:36.529 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:36.529 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:36.529 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:36.529 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:36.529 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:36.529 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:36.529 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:36.529 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:36.529 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:36.529 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:36.529 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:36.529 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:36.529 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:36.530 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:36.530 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:36.530 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:36.530 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:36.530 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:36.530 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:36.530 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:36.530 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:36.530 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:36.530 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:36.530 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:24:36.530 Found net devices under 0000:84:00.0: cvl_0_0 00:24:36.530 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:36.530 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:36.530 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:36.530 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:36.530 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:36.530 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:36.530 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:36.530 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:36.530 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:36.530 Found net devices under 0000:84:00.1: cvl_0_1 00:24:36.530 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:36.530 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:36.530 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:36.530 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:36.530 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:36.530 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:36.530 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:36.530 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:36.530 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:36.530 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:36.530 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:36.530 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:36.530 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:36.530 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:36.530 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:36.530 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:36.530 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:36.530 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:36.530 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:36.530 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:36.530 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:36.530 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:36.530 17:13:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:36.530 17:13:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:36.530 17:13:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:36.530 17:13:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:36.530 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:36.530 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:24:36.530 00:24:36.530 --- 10.0.0.2 ping statistics --- 00:24:36.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:36.530 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:24:36.530 17:13:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:36.530 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:36.530 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:24:36.530 00:24:36.530 --- 10.0.0.1 ping statistics --- 00:24:36.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:36.530 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:24:36.530 17:13:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:36.530 17:13:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:24:36.530 17:13:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:36.530 17:13:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:36.530 17:13:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:36.530 17:13:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:36.530 17:13:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:36.530 17:13:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:36.530 17:13:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:36.530 17:13:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:24:36.530 17:13:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:24:36.530 17:13:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:36.530 17:13:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:36.530 17:13:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:36.530 17:13:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1224936 00:24:36.530 17:13:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:36.530 17:13:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1224936 00:24:36.530 17:13:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1224936 ']' 00:24:36.530 17:13:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:36.530 17:13:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:36.530 17:13:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:36.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:36.530 17:13:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:36.530 17:13:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:36.530 [2024-07-12 17:13:36.121437] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:24:36.530 [2024-07-12 17:13:36.121504] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:36.530 EAL: No free 2048 kB hugepages reported on node 1 00:24:36.530 [2024-07-12 17:13:36.183861] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:36.787 [2024-07-12 17:13:36.286713] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:36.787 [2024-07-12 17:13:36.286789] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:36.787 [2024-07-12 17:13:36.286812] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:36.787 [2024-07-12 17:13:36.286823] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:36.787 [2024-07-12 17:13:36.286833] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:36.787 [2024-07-12 17:13:36.286919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:36.787 [2024-07-12 17:13:36.286979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:36.787 [2024-07-12 17:13:36.286982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:36.787 17:13:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:36.787 17:13:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:24:36.787 17:13:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:36.787 17:13:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:36.787 17:13:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:36.787 17:13:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:36.787 17:13:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:36.787 17:13:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.787 17:13:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:36.787 [2024-07-12 17:13:36.436198] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:36.787 17:13:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.787 17:13:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:36.787 17:13:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.787 17:13:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:36.787 Malloc0 00:24:37.044 17:13:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.044 17:13:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:37.044 17:13:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.044 17:13:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:37.044 17:13:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.044 17:13:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:37.044 17:13:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.044 17:13:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:37.044 17:13:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.044 17:13:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:37.044 17:13:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:24:37.044 17:13:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:37.044 [2024-07-12 17:13:36.501172] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:37.045 17:13:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.045 17:13:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:24:37.045 17:13:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:24:37.045 17:13:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:24:37.045 17:13:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:24:37.045 17:13:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:37.045 17:13:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:37.045 { 00:24:37.045 "params": { 00:24:37.045 "name": "Nvme$subsystem", 00:24:37.045 "trtype": "$TEST_TRANSPORT", 00:24:37.045 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:37.045 "adrfam": "ipv4", 00:24:37.045 "trsvcid": "$NVMF_PORT", 00:24:37.045 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:37.045 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:37.045 "hdgst": ${hdgst:-false}, 00:24:37.045 "ddgst": ${ddgst:-false} 00:24:37.045 }, 00:24:37.045 "method": "bdev_nvme_attach_controller" 00:24:37.045 } 00:24:37.045 EOF 00:24:37.045 )") 00:24:37.045 17:13:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:24:37.045 17:13:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:24:37.045 17:13:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:24:37.045 17:13:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:37.045 "params": { 00:24:37.045 "name": "Nvme1", 00:24:37.045 "trtype": "tcp", 00:24:37.045 "traddr": "10.0.0.2", 00:24:37.045 "adrfam": "ipv4", 00:24:37.045 "trsvcid": "4420", 00:24:37.045 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:37.045 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:37.045 "hdgst": false, 00:24:37.045 "ddgst": false 00:24:37.045 }, 00:24:37.045 "method": "bdev_nvme_attach_controller" 00:24:37.045 }' 00:24:37.045 [2024-07-12 17:13:36.550386] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:24:37.045 [2024-07-12 17:13:36.550449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1224958 ] 00:24:37.045 EAL: No free 2048 kB hugepages reported on node 1 00:24:37.045 [2024-07-12 17:13:36.609344] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.045 [2024-07-12 17:13:36.722360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:37.302 Running I/O for 1 seconds... 
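For reference, the rpc_cmd sequence traced above brings the target up with a malloc-backed namespace behind a TCP listener. Below is a hedged standalone sketch of the same bring-up; the RPC entry point and socket handling are assumptions about how rpc_cmd reaches the nvmf_tgt started earlier, while every flag value is copied verbatim from the trace. It is not the literal test code.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed stand-in for rpc_cmd

$RPC nvmf_create_transport -t tcp -o -u 8192                           # transport options exactly as traced
$RPC bdev_malloc_create 64 512 -b Malloc0                              # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0          # expose Malloc0 through cnode1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # listener the host side connects to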
00:24:38.711
00:24:38.711 Latency(us)
00:24:38.711 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:38.711 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:38.711 Verification LBA range: start 0x0 length 0x4000
00:24:38.711 Nvme1n1 : 1.01 8917.06 34.83 0.00 0.00 14293.89 2839.89 15340.28
00:24:38.711 ===================================================================================================================
00:24:38.711 Total : 8917.06 34.83 0.00 0.00 14293.89 2839.89 15340.28
00:24:38.711 17:13:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1225222
00:24:38.711 17:13:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:24:38.711 17:13:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:24:38.711 17:13:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:24:38.711 17:13:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:24:38.711 17:13:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:24:38.711 17:13:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:24:38.711 17:13:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:24:38.711 {
00:24:38.711 "params": {
00:24:38.711 "name": "Nvme$subsystem",
00:24:38.711 "trtype": "$TEST_TRANSPORT",
00:24:38.711 "traddr": "$NVMF_FIRST_TARGET_IP",
00:24:38.711 "adrfam": "ipv4",
00:24:38.711 "trsvcid": "$NVMF_PORT",
00:24:38.711 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:24:38.711 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:24:38.711 "hdgst": ${hdgst:-false},
00:24:38.711 "ddgst": ${ddgst:-false}
00:24:38.711 },
00:24:38.711 "method": "bdev_nvme_attach_controller"
00:24:38.711 }
00:24:38.711 EOF
00:24:38.711 )")
00:24:38.711 17:13:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:24:38.711 17:13:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:24:38.711 17:13:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:24:38.711 17:13:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:24:38.711 "params": {
00:24:38.711 "name": "Nvme1",
00:24:38.711 "trtype": "tcp",
00:24:38.711 "traddr": "10.0.0.2",
00:24:38.711 "adrfam": "ipv4",
00:24:38.711 "trsvcid": "4420",
00:24:38.711 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:24:38.711 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:24:38.711 "hdgst": false,
00:24:38.711 "ddgst": false
00:24:38.711 },
00:24:38.711 "method": "bdev_nvme_attach_controller"
00:24:38.711 }'
00:24:38.711 [2024-07-12 17:13:38.268640] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization...
00:24:38.711 [2024-07-12 17:13:38.268744] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1225222 ]
00:24:38.711 EAL: No free 2048 kB hugepages reported on node 1
00:24:38.711 [2024-07-12 17:13:38.329447] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:38.968 [2024-07-12 17:13:38.441668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:24:39.224 Running I/O for 15 seconds...
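The --json /dev/fd/63 argument above comes from bash process substitution: gen_nvmf_target_json prints the bdev_nvme_attach_controller configuration shown in the trace, and bdevperf reads it through a file descriptor rather than a file on disk. A minimal sketch of the pattern (the common.sh path is an assumption; the bdevperf binary path and flags are copied from the trace):

    # Sketch: drive bdevperf against the target with a generated JSON config
    source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh   # provides gen_nvmf_target_json (assumed location)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        --json <(gen_nvmf_target_json) \    # the <(...) expands to /dev/fd/NN, as seen in the trace
        -q 128 -o 4096 -w verify -t 15 -f   # queue depth 128, 4096-byte I/O, verify workload, 15 s run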
00:24:41.753 17:13:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1224936 00:24:41.753 17:13:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:24:41.753 [2024-07-12 17:13:41.236885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:46408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.753 [2024-07-12 17:13:41.236938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.753 [2024-07-12 17:13:41.236970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:46416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.753 [2024-07-12 17:13:41.236988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.753 [2024-07-12 17:13:41.237006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:46424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.753 [2024-07-12 17:13:41.237037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.753 [2024-07-12 17:13:41.237055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:46432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.753 [2024-07-12 17:13:41.237069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.753 [2024-07-12 17:13:41.237085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:46440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.753 [2024-07-12 17:13:41.237114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.753 [2024-07-12 17:13:41.237129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:46448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.753 [2024-07-12 17:13:41.237143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.753 [2024-07-12 17:13:41.237159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:46456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.753 [2024-07-12 17:13:41.237172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.753 [2024-07-12 17:13:41.237196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:46464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.753 [2024-07-12 17:13:41.237211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.753 [2024-07-12 17:13:41.237225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:46472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.753 [2024-07-12 17:13:41.237239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.753 [2024-07-12 17:13:41.237254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:46480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.753 [2024-07-12 17:13:41.237268] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.753 [2024-07-12 17:13:41.237282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:46488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.753 [2024-07-12 17:13:41.237295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.753 [2024-07-12 17:13:41.237311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:46496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.753 [2024-07-12 17:13:41.237323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.753 [2024-07-12 17:13:41.237352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:46504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.753 [2024-07-12 17:13:41.237365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.753 [2024-07-12 17:13:41.237379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:46512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.753 [2024-07-12 17:13:41.237392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.753 [2024-07-12 17:13:41.237406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:46520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.753 [2024-07-12 17:13:41.237419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.753 [2024-07-12 17:13:41.237433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:46528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.753 [2024-07-12 17:13:41.237446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.754 [2024-07-12 17:13:41.237461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:46536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.754 [2024-07-12 17:13:41.237474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.754 [2024-07-12 17:13:41.237488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:46544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.754 [2024-07-12 17:13:41.237501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.754 [2024-07-12 17:13:41.237515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:46552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.754 [2024-07-12 17:13:41.237529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.754 [2024-07-12 17:13:41.237543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:46560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.754 [2024-07-12 17:13:41.237560] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.754 [2024-07-12 17:13:41.237574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:46568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.754 [2024-07-12 17:13:41.237587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.754 [2024-07-12 17:13:41.237602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:46576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.754 [2024-07-12 17:13:41.237629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.754 [2024-07-12 17:13:41.237644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:46584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.754 [2024-07-12 17:13:41.237657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.754 [2024-07-12 17:13:41.237671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:46592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.754 [2024-07-12 17:13:41.237685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.754 [2024-07-12 17:13:41.237699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:46600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.754 [2024-07-12 17:13:41.237711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.754 [2024-07-12 17:13:41.237747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:46608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.754 [2024-07-12 17:13:41.237763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.754 [2024-07-12 17:13:41.237779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:46616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.754 [2024-07-12 17:13:41.237793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.754 [2024-07-12 17:13:41.237808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:46624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.754 [2024-07-12 17:13:41.237822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.754 [2024-07-12 17:13:41.237838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:46632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.754 [2024-07-12 17:13:41.237852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.754 [2024-07-12 17:13:41.237867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:46640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.754 [2024-07-12 17:13:41.237881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.754 [2024-07-12 17:13:41.237897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:46648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.754 [2024-07-12 17:13:41.237911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.754 [2024-07-12 17:13:41.237926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:46656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.754 [2024-07-12 17:13:41.237941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.754 [2024-07-12 17:13:41.237960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:46664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.754 [2024-07-12 17:13:41.237976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.754 [2024-07-12 17:13:41.237992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:46672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.754 [2024-07-12 17:13:41.238006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.754 [2024-07-12 17:13:41.238022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:46680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.754 [2024-07-12 17:13:41.238050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.754 [2024-07-12 17:13:41.238064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:46688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.754 [2024-07-12 17:13:41.238077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.754 [2024-07-12 17:13:41.238091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:46696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.754 [2024-07-12 17:13:41.238103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.754 [2024-07-12 17:13:41.238117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:46704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.754 [2024-07-12 17:13:41.238130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.754 [2024-07-12 17:13:41.238144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:46712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.754 [2024-07-12 17:13:41.238171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.754 [2024-07-12 17:13:41.238185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:46720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.754 [2024-07-12 17:13:41.238199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.754 [2024-07-12 17:13:41.238212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:46728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.754 [2024-07-12 17:13:41.238224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.754 [2024-07-12 17:13:41.238237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:46736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.754 [2024-07-12 17:13:41.238249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.754 [2024-07-12 17:13:41.238262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:46744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.754 [2024-07-12 17:13:41.238275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.754 [2024-07-12 17:13:41.238288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:46752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.754 [2024-07-12 17:13:41.238300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.754 [2024-07-12 17:13:41.238313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:46760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.754 [2024-07-12 17:13:41.238330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.754 [2024-07-12 17:13:41.238344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:46768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.754 [2024-07-12 17:13:41.238356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.754 [2024-07-12 17:13:41.238369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:46776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.754 [2024-07-12 17:13:41.238381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.754 [2024-07-12 17:13:41.238394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:46784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.754 [2024-07-12 17:13:41.238407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.754 [2024-07-12 17:13:41.238420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:46792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.754 [2024-07-12 17:13:41.238432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.754 [2024-07-12 17:13:41.238446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:46800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.754 [2024-07-12 17:13:41.238458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:41.754 [2024-07-12 17:13:41.238472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:46808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.754 [2024-07-12 17:13:41.238485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.754 [2024-07-12 17:13:41.238498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:46816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.754 [2024-07-12 17:13:41.238510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.754 [2024-07-12 17:13:41.238524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:46824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.754 [2024-07-12 17:13:41.238536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.754 [2024-07-12 17:13:41.238550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:46832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.754 [2024-07-12 17:13:41.238562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.754 [2024-07-12 17:13:41.238576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:46840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.754 [2024-07-12 17:13:41.238588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.754 [2024-07-12 17:13:41.238601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:46848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.754 [2024-07-12 17:13:41.238614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.755 [2024-07-12 17:13:41.238627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:46856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.755 [2024-07-12 17:13:41.238639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.755 [2024-07-12 17:13:41.238652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:46864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.755 [2024-07-12 17:13:41.238668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.755 [2024-07-12 17:13:41.238682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:46872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.755 [2024-07-12 17:13:41.238694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.755 [2024-07-12 17:13:41.238707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:46880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.755 [2024-07-12 17:13:41.238719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.755 [2024-07-12 17:13:41.238758] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:46888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.755 [2024-07-12 17:13:41.238774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.755 [2024-07-12 17:13:41.238789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:46896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.755 [2024-07-12 17:13:41.238803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.755 [2024-07-12 17:13:41.238818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.755 [2024-07-12 17:13:41.238832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.755 [2024-07-12 17:13:41.238849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:47424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.755 [2024-07-12 17:13:41.238864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.755 [2024-07-12 17:13:41.238879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:46912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.755 [2024-07-12 17:13:41.238893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.755 [2024-07-12 17:13:41.238908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:46920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.755 [2024-07-12 17:13:41.238921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.755 [2024-07-12 17:13:41.238937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:46928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.755 [2024-07-12 17:13:41.238952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.755 [2024-07-12 17:13:41.238968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:46936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.755 [2024-07-12 17:13:41.238982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.755 [2024-07-12 17:13:41.238997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:46944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.755 [2024-07-12 17:13:41.239011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.755 [2024-07-12 17:13:41.239027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:46952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.755 [2024-07-12 17:13:41.239054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.755 [2024-07-12 17:13:41.239071] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:46960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.755 [2024-07-12 17:13:41.239084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.755 [2024-07-12 17:13:41.239097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:46968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.755 [2024-07-12 17:13:41.239109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.755 [2024-07-12 17:13:41.239122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:46976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.755 [2024-07-12 17:13:41.239134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.755 [2024-07-12 17:13:41.239147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:46984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.755 [2024-07-12 17:13:41.239159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.755 [2024-07-12 17:13:41.239172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:46992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.755 [2024-07-12 17:13:41.239184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.755 [2024-07-12 17:13:41.239197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:47000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.755 [2024-07-12 17:13:41.239209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.755 [2024-07-12 17:13:41.239222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:47008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.755 [2024-07-12 17:13:41.239233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.755 [2024-07-12 17:13:41.239247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:47016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.755 [2024-07-12 17:13:41.239259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.755 [2024-07-12 17:13:41.239272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:47024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.755 [2024-07-12 17:13:41.239284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.755 [2024-07-12 17:13:41.239297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:47032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.755 [2024-07-12 17:13:41.239309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.755 [2024-07-12 17:13:41.239322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 
lba:47040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.755 [2024-07-12 17:13:41.239333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.755 [2024-07-12 17:13:41.239347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:47048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.755 [2024-07-12 17:13:41.239358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.755 [2024-07-12 17:13:41.239371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:47056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.755 [2024-07-12 17:13:41.239390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.755 [2024-07-12 17:13:41.239404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:47064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.755 [2024-07-12 17:13:41.239416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.755 [2024-07-12 17:13:41.239430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:47072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.755 [2024-07-12 17:13:41.239442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.755 [2024-07-12 17:13:41.239455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:47080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.755 [2024-07-12 17:13:41.239467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.755 [2024-07-12 17:13:41.239481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:47088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.755 [2024-07-12 17:13:41.239493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.755 [2024-07-12 17:13:41.239506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:47096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.755 [2024-07-12 17:13:41.239518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.755 [2024-07-12 17:13:41.239531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:47104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.755 [2024-07-12 17:13:41.239543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.755 [2024-07-12 17:13:41.239556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:47112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.755 [2024-07-12 17:13:41.239568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.755 [2024-07-12 17:13:41.239582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:47120 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:41.755 [2024-07-12 17:13:41.239593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.755 [2024-07-12 17:13:41.239606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:47128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.755 [2024-07-12 17:13:41.239618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.755 [2024-07-12 17:13:41.239632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:47136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.755 [2024-07-12 17:13:41.239643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.755 [2024-07-12 17:13:41.239656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:47144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.755 [2024-07-12 17:13:41.239668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.755 [2024-07-12 17:13:41.239682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:47152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.755 [2024-07-12 17:13:41.239694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.755 [2024-07-12 17:13:41.239710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:47160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.755 [2024-07-12 17:13:41.239754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.755 [2024-07-12 17:13:41.239774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:47168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.755 [2024-07-12 17:13:41.239789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.755 [2024-07-12 17:13:41.239804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:47176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.756 [2024-07-12 17:13:41.239818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.756 [2024-07-12 17:13:41.239833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:47184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.756 [2024-07-12 17:13:41.239847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.756 [2024-07-12 17:13:41.239863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:47192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.756 [2024-07-12 17:13:41.239877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.756 [2024-07-12 17:13:41.239892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:47200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.756 [2024-07-12 
17:13:41.239905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.756 [2024-07-12 17:13:41.239921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:47208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.756 [2024-07-12 17:13:41.239935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.756 [2024-07-12 17:13:41.239950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:47216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.756 [2024-07-12 17:13:41.239964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.756 [2024-07-12 17:13:41.239979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:47224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.756 [2024-07-12 17:13:41.239993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.756 [2024-07-12 17:13:41.240008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:47232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.756 [2024-07-12 17:13:41.240021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.756 [2024-07-12 17:13:41.240049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:47240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.756 [2024-07-12 17:13:41.240061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.756 [2024-07-12 17:13:41.240074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:47248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.756 [2024-07-12 17:13:41.240086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.756 [2024-07-12 17:13:41.240099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:47256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.756 [2024-07-12 17:13:41.240114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.756 [2024-07-12 17:13:41.240127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:47264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.756 [2024-07-12 17:13:41.240139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.756 [2024-07-12 17:13:41.240152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:47272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.756 [2024-07-12 17:13:41.240163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.756 [2024-07-12 17:13:41.240177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:47280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.756 [2024-07-12 17:13:41.240189] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.756 [2024-07-12 17:13:41.240201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:47288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.756 [2024-07-12 17:13:41.240215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.756 [2024-07-12 17:13:41.240228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:47296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.756 [2024-07-12 17:13:41.240241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.756 [2024-07-12 17:13:41.240256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:47304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.756 [2024-07-12 17:13:41.240269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.756 [2024-07-12 17:13:41.240282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:47312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.756 [2024-07-12 17:13:41.240295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.756 [2024-07-12 17:13:41.240308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:47320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.756 [2024-07-12 17:13:41.240321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.756 [2024-07-12 17:13:41.240334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:47328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.756 [2024-07-12 17:13:41.240346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.756 [2024-07-12 17:13:41.240359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:47336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.756 [2024-07-12 17:13:41.240371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.756 [2024-07-12 17:13:41.240384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:47344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.756 [2024-07-12 17:13:41.240396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.756 [2024-07-12 17:13:41.240409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:47352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.756 [2024-07-12 17:13:41.240421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.756 [2024-07-12 17:13:41.240434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:47360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.756 [2024-07-12 17:13:41.240449] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.756 [2024-07-12 17:13:41.240463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:47368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.756 [2024-07-12 17:13:41.240475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.756 [2024-07-12 17:13:41.240488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:47376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.756 [2024-07-12 17:13:41.240500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.756 [2024-07-12 17:13:41.240513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:47384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.756 [2024-07-12 17:13:41.240525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.756 [2024-07-12 17:13:41.240538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:47392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.756 [2024-07-12 17:13:41.240550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.756 [2024-07-12 17:13:41.240563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:47400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.756 [2024-07-12 17:13:41.240576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.756 [2024-07-12 17:13:41.240589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:47408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.756 [2024-07-12 17:13:41.240601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.756 [2024-07-12 17:13:41.240613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca9e60 is same with the state(5) to be set 00:24:41.756 [2024-07-12 17:13:41.240628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:41.756 [2024-07-12 17:13:41.240639] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:41.756 [2024-07-12 17:13:41.240650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47416 len:8 PRP1 0x0 PRP2 0x0 00:24:41.756 [2024-07-12 17:13:41.240662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.756 [2024-07-12 17:13:41.240718] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ca9e60 was disconnected and freed. reset controller. 
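The burst of ABORTED - SQ DELETION completions above, and the controller-reset failures that follow, are the host-side view of the target being killed out from under bdevperf a few seconds into the 15-second run: the kill -9 at host/bdevperf.sh@33 earlier in the trace removes the nvmf_tgt process, so each reconnect attempt fails with connect() errno 111 (ECONNREFUSED) because nothing is listening on 10.0.0.2:4420 any more. A minimal sketch of that fault-injection step (the PID variable name is an assumption; the trace kills a literal PID):

    # Sketch: kill the target while bdevperf I/O is in flight, then let the host notice
    kill -9 "$nvmfpid"   # forcibly terminate nvmf_tgt (the trace uses the literal PID 1224936)
    sleep 3              # give bdevperf time to see the dead connection and begin resetting the controller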
00:24:41.756 [2024-07-12 17:13:41.243824] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.756 [2024-07-12 17:13:41.243894] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:41.756 [2024-07-12 17:13:41.244506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.756 [2024-07-12 17:13:41.244558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:41.756 [2024-07-12 17:13:41.244574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:41.756 [2024-07-12 17:13:41.244937] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:41.756 [2024-07-12 17:13:41.245154] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.756 [2024-07-12 17:13:41.245173] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.756 [2024-07-12 17:13:41.245192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.756 [2024-07-12 17:13:41.248195] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.756 [2024-07-12 17:13:41.257341] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.756 [2024-07-12 17:13:41.257699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.756 [2024-07-12 17:13:41.257755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:41.756 [2024-07-12 17:13:41.257770] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:41.756 [2024-07-12 17:13:41.257985] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:41.756 [2024-07-12 17:13:41.258213] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.756 [2024-07-12 17:13:41.258233] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.756 [2024-07-12 17:13:41.258245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.756 [2024-07-12 17:13:41.261236] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.756 [2024-07-12 17:13:41.270416] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.756 [2024-07-12 17:13:41.270761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.757 [2024-07-12 17:13:41.270789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:41.757 [2024-07-12 17:13:41.270804] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:41.757 [2024-07-12 17:13:41.270993] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:41.757 [2024-07-12 17:13:41.271197] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.757 [2024-07-12 17:13:41.271217] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.757 [2024-07-12 17:13:41.271229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.757 [2024-07-12 17:13:41.274093] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.757 [2024-07-12 17:13:41.283518] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.757 [2024-07-12 17:13:41.283853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.757 [2024-07-12 17:13:41.283879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:41.757 [2024-07-12 17:13:41.283893] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:41.757 [2024-07-12 17:13:41.284097] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:41.757 [2024-07-12 17:13:41.284285] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.757 [2024-07-12 17:13:41.284304] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.757 [2024-07-12 17:13:41.284316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.757 [2024-07-12 17:13:41.287178] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.757 [2024-07-12 17:13:41.296602] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.757 [2024-07-12 17:13:41.296944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.757 [2024-07-12 17:13:41.296970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:41.757 [2024-07-12 17:13:41.296984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:41.757 [2024-07-12 17:13:41.297184] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:41.757 [2024-07-12 17:13:41.297373] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.757 [2024-07-12 17:13:41.297392] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.757 [2024-07-12 17:13:41.297404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.757 [2024-07-12 17:13:41.300308] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.757 [2024-07-12 17:13:41.310168] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.757 [2024-07-12 17:13:41.310563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.757 [2024-07-12 17:13:41.310589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:41.757 [2024-07-12 17:13:41.310604] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:41.757 [2024-07-12 17:13:41.310855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:41.757 [2024-07-12 17:13:41.311089] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.757 [2024-07-12 17:13:41.311125] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.757 [2024-07-12 17:13:41.311141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.757 [2024-07-12 17:13:41.314419] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.757 [2024-07-12 17:13:41.323789] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.757 [2024-07-12 17:13:41.324196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.757 [2024-07-12 17:13:41.324222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:41.757 [2024-07-12 17:13:41.324237] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:41.757 [2024-07-12 17:13:41.324461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:41.757 [2024-07-12 17:13:41.324666] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.757 [2024-07-12 17:13:41.324687] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.757 [2024-07-12 17:13:41.324700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.757 [2024-07-12 17:13:41.328018] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.757 [2024-07-12 17:13:41.337301] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.757 [2024-07-12 17:13:41.337676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.757 [2024-07-12 17:13:41.337702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:41.757 [2024-07-12 17:13:41.337717] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:41.757 [2024-07-12 17:13:41.337960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:41.757 [2024-07-12 17:13:41.338196] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.757 [2024-07-12 17:13:41.338217] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.757 [2024-07-12 17:13:41.338230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.757 [2024-07-12 17:13:41.341503] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.757 [2024-07-12 17:13:41.350659] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.757 [2024-07-12 17:13:41.351055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.757 [2024-07-12 17:13:41.351080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:41.757 [2024-07-12 17:13:41.351094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:41.757 [2024-07-12 17:13:41.351278] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:41.757 [2024-07-12 17:13:41.351465] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.757 [2024-07-12 17:13:41.351484] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.757 [2024-07-12 17:13:41.351496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.757 [2024-07-12 17:13:41.354481] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.757 [2024-07-12 17:13:41.364077] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.757 [2024-07-12 17:13:41.364424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.757 [2024-07-12 17:13:41.364449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:41.757 [2024-07-12 17:13:41.364464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:41.757 [2024-07-12 17:13:41.364673] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:41.757 [2024-07-12 17:13:41.364912] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.757 [2024-07-12 17:13:41.364936] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.757 [2024-07-12 17:13:41.364950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.757 [2024-07-12 17:13:41.368220] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.757 [2024-07-12 17:13:41.377663] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.757 [2024-07-12 17:13:41.378034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.757 [2024-07-12 17:13:41.378076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:41.757 [2024-07-12 17:13:41.378092] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:41.757 [2024-07-12 17:13:41.378302] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:41.757 [2024-07-12 17:13:41.378501] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.757 [2024-07-12 17:13:41.378521] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.757 [2024-07-12 17:13:41.378538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.757 [2024-07-12 17:13:41.381796] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.758 [2024-07-12 17:13:41.391315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.758 [2024-07-12 17:13:41.391640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.758 [2024-07-12 17:13:41.391666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:41.758 [2024-07-12 17:13:41.391682] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:41.758 [2024-07-12 17:13:41.391919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:41.758 [2024-07-12 17:13:41.392147] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.758 [2024-07-12 17:13:41.392167] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.758 [2024-07-12 17:13:41.392180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.758 [2024-07-12 17:13:41.395353] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.758 [2024-07-12 17:13:41.404533] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.758 [2024-07-12 17:13:41.404868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.758 [2024-07-12 17:13:41.404897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:41.758 [2024-07-12 17:13:41.404913] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:41.758 [2024-07-12 17:13:41.405131] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:41.758 [2024-07-12 17:13:41.405319] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.758 [2024-07-12 17:13:41.405338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.758 [2024-07-12 17:13:41.405350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.758 [2024-07-12 17:13:41.408336] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.758 [2024-07-12 17:13:41.417920] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.758 [2024-07-12 17:13:41.418292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.758 [2024-07-12 17:13:41.418317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:41.758 [2024-07-12 17:13:41.418330] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:41.758 [2024-07-12 17:13:41.418513] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:41.758 [2024-07-12 17:13:41.418701] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.758 [2024-07-12 17:13:41.418735] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.758 [2024-07-12 17:13:41.418758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.758 [2024-07-12 17:13:41.421761] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.758 [2024-07-12 17:13:41.431074] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.758 [2024-07-12 17:13:41.431411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.758 [2024-07-12 17:13:41.431439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:41.758 [2024-07-12 17:13:41.431454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:41.758 [2024-07-12 17:13:41.431638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:41.758 [2024-07-12 17:13:41.431854] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.758 [2024-07-12 17:13:41.431874] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.758 [2024-07-12 17:13:41.431887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.758 [2024-07-12 17:13:41.434733] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.016 [2024-07-12 17:13:41.444712] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.016 [2024-07-12 17:13:41.445098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.016 [2024-07-12 17:13:41.445150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.016 [2024-07-12 17:13:41.445166] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.016 [2024-07-12 17:13:41.445413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.016 [2024-07-12 17:13:41.445644] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.016 [2024-07-12 17:13:41.445664] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.016 [2024-07-12 17:13:41.445676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.016 [2024-07-12 17:13:41.448787] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.016 [2024-07-12 17:13:41.457826] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.016 [2024-07-12 17:13:41.458175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.016 [2024-07-12 17:13:41.458226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.016 [2024-07-12 17:13:41.458240] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.016 [2024-07-12 17:13:41.458423] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.016 [2024-07-12 17:13:41.458611] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.016 [2024-07-12 17:13:41.458630] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.016 [2024-07-12 17:13:41.458643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.016 [2024-07-12 17:13:41.461508] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.016 [2024-07-12 17:13:41.470884] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.016 [2024-07-12 17:13:41.471221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.016 [2024-07-12 17:13:41.471274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.016 [2024-07-12 17:13:41.471289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.016 [2024-07-12 17:13:41.471472] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.016 [2024-07-12 17:13:41.471664] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.016 [2024-07-12 17:13:41.471683] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.016 [2024-07-12 17:13:41.471696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.016 [2024-07-12 17:13:41.474565] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.016 [2024-07-12 17:13:41.484007] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.016 [2024-07-12 17:13:41.484357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.016 [2024-07-12 17:13:41.484408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.016 [2024-07-12 17:13:41.484422] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.016 [2024-07-12 17:13:41.484607] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.016 [2024-07-12 17:13:41.484821] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.016 [2024-07-12 17:13:41.484842] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.016 [2024-07-12 17:13:41.484855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.016 [2024-07-12 17:13:41.487657] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.016 [2024-07-12 17:13:41.497782] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.016 [2024-07-12 17:13:41.498156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.016 [2024-07-12 17:13:41.498193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.016 [2024-07-12 17:13:41.498224] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.016 [2024-07-12 17:13:41.498451] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.016 [2024-07-12 17:13:41.498667] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.016 [2024-07-12 17:13:41.498688] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.016 [2024-07-12 17:13:41.498701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.016 [2024-07-12 17:13:41.502100] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.016 [2024-07-12 17:13:41.510957] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.016 [2024-07-12 17:13:41.511282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.016 [2024-07-12 17:13:41.511307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.016 [2024-07-12 17:13:41.511321] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.016 [2024-07-12 17:13:41.511504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.016 [2024-07-12 17:13:41.511692] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.016 [2024-07-12 17:13:41.511710] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.016 [2024-07-12 17:13:41.511746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.016 [2024-07-12 17:13:41.514595] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.016 [2024-07-12 17:13:41.524058] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.016 [2024-07-12 17:13:41.524365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.016 [2024-07-12 17:13:41.524391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.016 [2024-07-12 17:13:41.524405] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.016 [2024-07-12 17:13:41.524589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.016 [2024-07-12 17:13:41.524803] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.016 [2024-07-12 17:13:41.524824] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.016 [2024-07-12 17:13:41.524836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.016 [2024-07-12 17:13:41.527672] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.016 [2024-07-12 17:13:41.537244] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.016 [2024-07-12 17:13:41.537571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.016 [2024-07-12 17:13:41.537595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.016 [2024-07-12 17:13:41.537609] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.016 [2024-07-12 17:13:41.537817] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.016 [2024-07-12 17:13:41.538010] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.016 [2024-07-12 17:13:41.538029] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.016 [2024-07-12 17:13:41.538056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.016 [2024-07-12 17:13:41.540983] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.016 [2024-07-12 17:13:41.550446] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.017 [2024-07-12 17:13:41.550770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.017 [2024-07-12 17:13:41.550796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.017 [2024-07-12 17:13:41.550810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.017 [2024-07-12 17:13:41.550999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.017 [2024-07-12 17:13:41.551202] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.017 [2024-07-12 17:13:41.551221] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.017 [2024-07-12 17:13:41.551233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.017 [2024-07-12 17:13:41.554019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.017 [2024-07-12 17:13:41.563403] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.017 [2024-07-12 17:13:41.563728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.017 [2024-07-12 17:13:41.563773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.017 [2024-07-12 17:13:41.563792] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.017 [2024-07-12 17:13:41.563984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.017 [2024-07-12 17:13:41.564188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.017 [2024-07-12 17:13:41.564207] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.017 [2024-07-12 17:13:41.564220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.017 [2024-07-12 17:13:41.567084] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.017 [2024-07-12 17:13:41.576554] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.017 [2024-07-12 17:13:41.576892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.017 [2024-07-12 17:13:41.576918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.017 [2024-07-12 17:13:41.576933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.017 [2024-07-12 17:13:41.577133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.017 [2024-07-12 17:13:41.577321] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.017 [2024-07-12 17:13:41.577340] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.017 [2024-07-12 17:13:41.577352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.017 [2024-07-12 17:13:41.580214] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.017 [2024-07-12 17:13:41.589629] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.017 [2024-07-12 17:13:41.589958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.017 [2024-07-12 17:13:41.589983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.017 [2024-07-12 17:13:41.589998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.017 [2024-07-12 17:13:41.590197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.017 [2024-07-12 17:13:41.590385] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.017 [2024-07-12 17:13:41.590404] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.017 [2024-07-12 17:13:41.590417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.017 [2024-07-12 17:13:41.593280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.017 [2024-07-12 17:13:41.602883] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.017 [2024-07-12 17:13:41.603226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.017 [2024-07-12 17:13:41.603274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.017 [2024-07-12 17:13:41.603288] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.017 [2024-07-12 17:13:41.603472] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.017 [2024-07-12 17:13:41.603659] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.017 [2024-07-12 17:13:41.603682] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.017 [2024-07-12 17:13:41.603695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.017 [2024-07-12 17:13:41.606560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.017 [2024-07-12 17:13:41.616094] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.017 [2024-07-12 17:13:41.616522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.017 [2024-07-12 17:13:41.616572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.017 [2024-07-12 17:13:41.616586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.017 [2024-07-12 17:13:41.616795] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.017 [2024-07-12 17:13:41.616988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.017 [2024-07-12 17:13:41.617008] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.017 [2024-07-12 17:13:41.617021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.017 [2024-07-12 17:13:41.619877] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.017 [2024-07-12 17:13:41.629316] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.017 [2024-07-12 17:13:41.629747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.017 [2024-07-12 17:13:41.629773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.017 [2024-07-12 17:13:41.629787] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.017 [2024-07-12 17:13:41.629977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.017 [2024-07-12 17:13:41.630179] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.017 [2024-07-12 17:13:41.630198] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.017 [2024-07-12 17:13:41.630211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.017 [2024-07-12 17:13:41.633074] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.017 [2024-07-12 17:13:41.642489] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.017 [2024-07-12 17:13:41.642832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.017 [2024-07-12 17:13:41.642857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.017 [2024-07-12 17:13:41.642872] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.017 [2024-07-12 17:13:41.643056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.017 [2024-07-12 17:13:41.643243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.017 [2024-07-12 17:13:41.643262] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.017 [2024-07-12 17:13:41.643274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.017 [2024-07-12 17:13:41.646139] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.017 [2024-07-12 17:13:41.655599] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.017 [2024-07-12 17:13:41.655927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.017 [2024-07-12 17:13:41.655953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.017 [2024-07-12 17:13:41.655968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.017 [2024-07-12 17:13:41.656169] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.017 [2024-07-12 17:13:41.656356] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.018 [2024-07-12 17:13:41.656375] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.018 [2024-07-12 17:13:41.656388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.018 [2024-07-12 17:13:41.659260] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.018 [2024-07-12 17:13:41.668676] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.018 [2024-07-12 17:13:41.668991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.018 [2024-07-12 17:13:41.669016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.018 [2024-07-12 17:13:41.669031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.018 [2024-07-12 17:13:41.669229] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.018 [2024-07-12 17:13:41.669417] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.018 [2024-07-12 17:13:41.669436] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.018 [2024-07-12 17:13:41.669448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.018 [2024-07-12 17:13:41.672324] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.018 [2024-07-12 17:13:41.681770] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.018 [2024-07-12 17:13:41.682130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.018 [2024-07-12 17:13:41.682155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.018 [2024-07-12 17:13:41.682170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.018 [2024-07-12 17:13:41.682353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.018 [2024-07-12 17:13:41.682540] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.018 [2024-07-12 17:13:41.682559] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.018 [2024-07-12 17:13:41.682571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.018 [2024-07-12 17:13:41.685435] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.018 [2024-07-12 17:13:41.694909] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.018 [2024-07-12 17:13:41.695277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.018 [2024-07-12 17:13:41.695302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.018 [2024-07-12 17:13:41.695317] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.018 [2024-07-12 17:13:41.695506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.018 [2024-07-12 17:13:41.695693] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.018 [2024-07-12 17:13:41.695711] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.018 [2024-07-12 17:13:41.695748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.018 [2024-07-12 17:13:41.698600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.018 [2024-07-12 17:13:41.708542] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.277 [2024-07-12 17:13:41.708969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.277 [2024-07-12 17:13:41.708997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.277 [2024-07-12 17:13:41.709028] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.277 [2024-07-12 17:13:41.709241] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.277 [2024-07-12 17:13:41.709474] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.277 [2024-07-12 17:13:41.709495] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.277 [2024-07-12 17:13:41.709508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.277 [2024-07-12 17:13:41.712455] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.277 [2024-07-12 17:13:41.721694] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.277 [2024-07-12 17:13:41.722053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.277 [2024-07-12 17:13:41.722079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.277 [2024-07-12 17:13:41.722093] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.277 [2024-07-12 17:13:41.722281] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.277 [2024-07-12 17:13:41.722479] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.277 [2024-07-12 17:13:41.722497] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.277 [2024-07-12 17:13:41.722510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.277 [2024-07-12 17:13:41.725372] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.277 [2024-07-12 17:13:41.734826] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.277 [2024-07-12 17:13:41.735224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.277 [2024-07-12 17:13:41.735248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.277 [2024-07-12 17:13:41.735262] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.277 [2024-07-12 17:13:41.735446] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.277 [2024-07-12 17:13:41.735644] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.277 [2024-07-12 17:13:41.735672] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.277 [2024-07-12 17:13:41.735688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.277 [2024-07-12 17:13:41.738549] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.277 [2024-07-12 17:13:41.747938] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.277 [2024-07-12 17:13:41.748246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.277 [2024-07-12 17:13:41.748270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.277 [2024-07-12 17:13:41.748285] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.277 [2024-07-12 17:13:41.748468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.277 [2024-07-12 17:13:41.748665] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.277 [2024-07-12 17:13:41.748683] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.277 [2024-07-12 17:13:41.748696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.277 [2024-07-12 17:13:41.751597] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.277 [2024-07-12 17:13:41.761058] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.277 [2024-07-12 17:13:41.761349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.277 [2024-07-12 17:13:41.761374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.277 [2024-07-12 17:13:41.761388] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.277 [2024-07-12 17:13:41.761572] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.277 [2024-07-12 17:13:41.761784] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.277 [2024-07-12 17:13:41.761805] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.277 [2024-07-12 17:13:41.761817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.277 [2024-07-12 17:13:41.764659] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.277 [2024-07-12 17:13:41.774157] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.277 [2024-07-12 17:13:41.774466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.277 [2024-07-12 17:13:41.774490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.277 [2024-07-12 17:13:41.774504] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.277 [2024-07-12 17:13:41.774687] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.277 [2024-07-12 17:13:41.774908] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.277 [2024-07-12 17:13:41.774928] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.277 [2024-07-12 17:13:41.774941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.277 [2024-07-12 17:13:41.777799] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.277 [2024-07-12 17:13:41.787228] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.277 [2024-07-12 17:13:41.787595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.277 [2024-07-12 17:13:41.787619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.277 [2024-07-12 17:13:41.787632] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.277 [2024-07-12 17:13:41.787853] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.277 [2024-07-12 17:13:41.788061] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.277 [2024-07-12 17:13:41.788080] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.277 [2024-07-12 17:13:41.788092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.277 [2024-07-12 17:13:41.790934] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.277 [2024-07-12 17:13:41.800371] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.277 [2024-07-12 17:13:41.800733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.277 [2024-07-12 17:13:41.800775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.277 [2024-07-12 17:13:41.800790] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.277 [2024-07-12 17:13:41.800979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.277 [2024-07-12 17:13:41.801180] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.277 [2024-07-12 17:13:41.801199] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.277 [2024-07-12 17:13:41.801211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.277 [2024-07-12 17:13:41.804001] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.277 [2024-07-12 17:13:41.813517] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.277 [2024-07-12 17:13:41.813867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.277 [2024-07-12 17:13:41.813893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.277 [2024-07-12 17:13:41.813907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.277 [2024-07-12 17:13:41.814110] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.277 [2024-07-12 17:13:41.814308] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.277 [2024-07-12 17:13:41.814327] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.277 [2024-07-12 17:13:41.814339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.277 [2024-07-12 17:13:41.817202] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.277 [2024-07-12 17:13:41.826712] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.277 [2024-07-12 17:13:41.827057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.277 [2024-07-12 17:13:41.827083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.277 [2024-07-12 17:13:41.827097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.277 [2024-07-12 17:13:41.827310] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.278 [2024-07-12 17:13:41.827528] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.278 [2024-07-12 17:13:41.827548] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.278 [2024-07-12 17:13:41.827561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.278 [2024-07-12 17:13:41.830675] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.278 [2024-07-12 17:13:41.839939] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.278 [2024-07-12 17:13:41.840353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.278 [2024-07-12 17:13:41.840404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.278 [2024-07-12 17:13:41.840418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.278 [2024-07-12 17:13:41.840601] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.278 [2024-07-12 17:13:41.840825] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.278 [2024-07-12 17:13:41.840847] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.278 [2024-07-12 17:13:41.840862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.278 [2024-07-12 17:13:41.843841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.278 [2024-07-12 17:13:41.853237] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.278 [2024-07-12 17:13:41.853604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.278 [2024-07-12 17:13:41.853629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.278 [2024-07-12 17:13:41.853644] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.278 [2024-07-12 17:13:41.853863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.278 [2024-07-12 17:13:41.854090] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.278 [2024-07-12 17:13:41.854110] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.278 [2024-07-12 17:13:41.854123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.278 [2024-07-12 17:13:41.857047] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.278 [2024-07-12 17:13:41.866354] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.278 [2024-07-12 17:13:41.866745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.278 [2024-07-12 17:13:41.866786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.278 [2024-07-12 17:13:41.866801] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.278 [2024-07-12 17:13:41.866990] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.278 [2024-07-12 17:13:41.867194] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.278 [2024-07-12 17:13:41.867215] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.278 [2024-07-12 17:13:41.867228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.278 [2024-07-12 17:13:41.870100] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.278 [2024-07-12 17:13:41.879451] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.278 [2024-07-12 17:13:41.879808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.278 [2024-07-12 17:13:41.879833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.278 [2024-07-12 17:13:41.879848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.278 [2024-07-12 17:13:41.880032] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.278 [2024-07-12 17:13:41.880219] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.278 [2024-07-12 17:13:41.880238] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.278 [2024-07-12 17:13:41.880251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.278 [2024-07-12 17:13:41.883121] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.278 [2024-07-12 17:13:41.892535] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.278 [2024-07-12 17:13:41.892946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.278 [2024-07-12 17:13:41.892971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.278 [2024-07-12 17:13:41.892986] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.278 [2024-07-12 17:13:41.893186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.278 [2024-07-12 17:13:41.893374] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.278 [2024-07-12 17:13:41.893395] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.278 [2024-07-12 17:13:41.893408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.278 [2024-07-12 17:13:41.896235] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.278 [2024-07-12 17:13:41.905631] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.278 [2024-07-12 17:13:41.905990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.278 [2024-07-12 17:13:41.906025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.278 [2024-07-12 17:13:41.906040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.278 [2024-07-12 17:13:41.906223] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.278 [2024-07-12 17:13:41.906410] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.278 [2024-07-12 17:13:41.906430] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.278 [2024-07-12 17:13:41.906443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.278 [2024-07-12 17:13:41.909234] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.278 [2024-07-12 17:13:41.918832] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.278 [2024-07-12 17:13:41.919224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.278 [2024-07-12 17:13:41.919253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.278 [2024-07-12 17:13:41.919269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.278 [2024-07-12 17:13:41.919453] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.278 [2024-07-12 17:13:41.919641] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.278 [2024-07-12 17:13:41.919661] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.278 [2024-07-12 17:13:41.919674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.278 [2024-07-12 17:13:41.922558] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.278 [2024-07-12 17:13:41.932054] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.278 [2024-07-12 17:13:41.932473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.278 [2024-07-12 17:13:41.932498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.278 [2024-07-12 17:13:41.932512] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.278 [2024-07-12 17:13:41.932697] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.278 [2024-07-12 17:13:41.932894] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.278 [2024-07-12 17:13:41.932913] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.278 [2024-07-12 17:13:41.932927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.278 [2024-07-12 17:13:41.935769] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.278 [2024-07-12 17:13:41.945212] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.278 [2024-07-12 17:13:41.945581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.278 [2024-07-12 17:13:41.945617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.278 [2024-07-12 17:13:41.945631] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.278 [2024-07-12 17:13:41.945845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.278 [2024-07-12 17:13:41.946053] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.278 [2024-07-12 17:13:41.946073] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.278 [2024-07-12 17:13:41.946086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.278 [2024-07-12 17:13:41.948911] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.278 [2024-07-12 17:13:41.958215] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.278 [2024-07-12 17:13:41.958611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.278 [2024-07-12 17:13:41.958646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.278 [2024-07-12 17:13:41.958661] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.278 [2024-07-12 17:13:41.958876] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.278 [2024-07-12 17:13:41.959088] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.278 [2024-07-12 17:13:41.959109] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.278 [2024-07-12 17:13:41.959121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.278 [2024-07-12 17:13:41.961967] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.538 [2024-07-12 17:13:41.971849] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.538 [2024-07-12 17:13:41.972280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.538 [2024-07-12 17:13:41.972305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.538 [2024-07-12 17:13:41.972320] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.538 [2024-07-12 17:13:41.972504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.538 [2024-07-12 17:13:41.972690] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.539 [2024-07-12 17:13:41.972709] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.539 [2024-07-12 17:13:41.972735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.539 [2024-07-12 17:13:41.975985] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.539 [2024-07-12 17:13:41.984847] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.539 [2024-07-12 17:13:41.985299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.539 [2024-07-12 17:13:41.985325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.539 [2024-07-12 17:13:41.985339] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.539 [2024-07-12 17:13:41.985524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.539 [2024-07-12 17:13:41.985711] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.539 [2024-07-12 17:13:41.985730] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.539 [2024-07-12 17:13:41.985754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.539 [2024-07-12 17:13:41.988573] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.539 [2024-07-12 17:13:41.998229] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.539 [2024-07-12 17:13:41.998664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.539 [2024-07-12 17:13:41.998711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.539 [2024-07-12 17:13:41.998726] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.539 [2024-07-12 17:13:41.998942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.539 [2024-07-12 17:13:41.999154] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.539 [2024-07-12 17:13:41.999175] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.539 [2024-07-12 17:13:41.999188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.539 [2024-07-12 17:13:42.002149] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.539 [2024-07-12 17:13:42.011522] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.539 [2024-07-12 17:13:42.011938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.539 [2024-07-12 17:13:42.011990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.539 [2024-07-12 17:13:42.012004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.539 [2024-07-12 17:13:42.012205] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.539 [2024-07-12 17:13:42.012392] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.539 [2024-07-12 17:13:42.012411] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.539 [2024-07-12 17:13:42.012424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.539 [2024-07-12 17:13:42.015293] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.539 [2024-07-12 17:13:42.024690] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.539 [2024-07-12 17:13:42.025102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.539 [2024-07-12 17:13:42.025127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.539 [2024-07-12 17:13:42.025142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.539 [2024-07-12 17:13:42.025325] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.539 [2024-07-12 17:13:42.025512] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.539 [2024-07-12 17:13:42.025543] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.539 [2024-07-12 17:13:42.025556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.539 [2024-07-12 17:13:42.028426] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.539 [2024-07-12 17:13:42.037873] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.539 [2024-07-12 17:13:42.038250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.539 [2024-07-12 17:13:42.038275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.539 [2024-07-12 17:13:42.038289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.539 [2024-07-12 17:13:42.038472] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.539 [2024-07-12 17:13:42.038659] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.539 [2024-07-12 17:13:42.038678] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.539 [2024-07-12 17:13:42.038690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.539 [2024-07-12 17:13:42.041520] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.539 [2024-07-12 17:13:42.051099] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.539 [2024-07-12 17:13:42.051475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.539 [2024-07-12 17:13:42.051501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.539 [2024-07-12 17:13:42.051520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.539 [2024-07-12 17:13:42.051705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.539 [2024-07-12 17:13:42.051923] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.539 [2024-07-12 17:13:42.051945] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.539 [2024-07-12 17:13:42.051958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.539 [2024-07-12 17:13:42.054819] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.539 [2024-07-12 17:13:42.064142] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.539 [2024-07-12 17:13:42.064557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.539 [2024-07-12 17:13:42.064582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.539 [2024-07-12 17:13:42.064597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.539 [2024-07-12 17:13:42.064811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.539 [2024-07-12 17:13:42.065004] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.539 [2024-07-12 17:13:42.065024] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.539 [2024-07-12 17:13:42.065038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.539 [2024-07-12 17:13:42.067903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.539 [2024-07-12 17:13:42.077347] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.539 [2024-07-12 17:13:42.077773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.539 [2024-07-12 17:13:42.077799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.539 [2024-07-12 17:13:42.077814] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.539 [2024-07-12 17:13:42.078003] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.539 [2024-07-12 17:13:42.078206] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.539 [2024-07-12 17:13:42.078225] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.539 [2024-07-12 17:13:42.078237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.539 [2024-07-12 17:13:42.081028] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.539 [2024-07-12 17:13:42.090338] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.539 [2024-07-12 17:13:42.090784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.539 [2024-07-12 17:13:42.090809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.539 [2024-07-12 17:13:42.090823] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.539 [2024-07-12 17:13:42.091007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.539 [2024-07-12 17:13:42.091194] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.539 [2024-07-12 17:13:42.091216] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.539 [2024-07-12 17:13:42.091229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.539 [2024-07-12 17:13:42.094024] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.539 [2024-07-12 17:13:42.103382] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.539 [2024-07-12 17:13:42.103800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.539 [2024-07-12 17:13:42.103825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.539 [2024-07-12 17:13:42.103839] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.539 [2024-07-12 17:13:42.104023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.539 [2024-07-12 17:13:42.104211] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.539 [2024-07-12 17:13:42.104229] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.539 [2024-07-12 17:13:42.104241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.539 [2024-07-12 17:13:42.107100] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.539 [2024-07-12 17:13:42.116573] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.539 [2024-07-12 17:13:42.117010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.539 [2024-07-12 17:13:42.117060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.539 [2024-07-12 17:13:42.117074] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.540 [2024-07-12 17:13:42.117258] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.540 [2024-07-12 17:13:42.117445] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.540 [2024-07-12 17:13:42.117463] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.540 [2024-07-12 17:13:42.117476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.540 [2024-07-12 17:13:42.120308] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.540 [2024-07-12 17:13:42.129584] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.540 [2024-07-12 17:13:42.130022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.540 [2024-07-12 17:13:42.130073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.540 [2024-07-12 17:13:42.130087] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.540 [2024-07-12 17:13:42.130271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.540 [2024-07-12 17:13:42.130457] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.540 [2024-07-12 17:13:42.130476] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.540 [2024-07-12 17:13:42.130488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.540 [2024-07-12 17:13:42.133356] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.540 [2024-07-12 17:13:42.142792] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.540 [2024-07-12 17:13:42.143221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.540 [2024-07-12 17:13:42.143246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.540 [2024-07-12 17:13:42.143260] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.540 [2024-07-12 17:13:42.143444] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.540 [2024-07-12 17:13:42.143630] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.540 [2024-07-12 17:13:42.143649] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.540 [2024-07-12 17:13:42.143661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.540 [2024-07-12 17:13:42.146490] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.540 [2024-07-12 17:13:42.155880] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.540 [2024-07-12 17:13:42.156230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.540 [2024-07-12 17:13:42.156255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.540 [2024-07-12 17:13:42.156269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.540 [2024-07-12 17:13:42.156452] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.540 [2024-07-12 17:13:42.156639] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.540 [2024-07-12 17:13:42.156658] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.540 [2024-07-12 17:13:42.156670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.540 [2024-07-12 17:13:42.159497] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.540 [2024-07-12 17:13:42.169119] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.540 [2024-07-12 17:13:42.169517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.540 [2024-07-12 17:13:42.169542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.540 [2024-07-12 17:13:42.169556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.540 [2024-07-12 17:13:42.169766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.540 [2024-07-12 17:13:42.169960] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.540 [2024-07-12 17:13:42.169979] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.540 [2024-07-12 17:13:42.169992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.540 [2024-07-12 17:13:42.172839] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.540 [2024-07-12 17:13:42.182169] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.540 [2024-07-12 17:13:42.182585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.540 [2024-07-12 17:13:42.182639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.540 [2024-07-12 17:13:42.182653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.540 [2024-07-12 17:13:42.182852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.540 [2024-07-12 17:13:42.183040] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.540 [2024-07-12 17:13:42.183059] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.540 [2024-07-12 17:13:42.183072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.540 [2024-07-12 17:13:42.185954] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.540 [2024-07-12 17:13:42.195224] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.540 [2024-07-12 17:13:42.195655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.540 [2024-07-12 17:13:42.195703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.540 [2024-07-12 17:13:42.195717] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.540 [2024-07-12 17:13:42.195910] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.540 [2024-07-12 17:13:42.196099] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.540 [2024-07-12 17:13:42.196117] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.540 [2024-07-12 17:13:42.196130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.540 [2024-07-12 17:13:42.198979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.540 [2024-07-12 17:13:42.208413] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.540 [2024-07-12 17:13:42.208826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.540 [2024-07-12 17:13:42.208853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.540 [2024-07-12 17:13:42.208868] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.540 [2024-07-12 17:13:42.209077] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.540 [2024-07-12 17:13:42.209270] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.540 [2024-07-12 17:13:42.209290] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.540 [2024-07-12 17:13:42.209304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.540 [2024-07-12 17:13:42.212163] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.540 [2024-07-12 17:13:42.221390] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.540 [2024-07-12 17:13:42.221803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.540 [2024-07-12 17:13:42.221830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.540 [2024-07-12 17:13:42.221845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.540 [2024-07-12 17:13:42.222063] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.540 [2024-07-12 17:13:42.222267] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.540 [2024-07-12 17:13:42.222288] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.540 [2024-07-12 17:13:42.222305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.540 [2024-07-12 17:13:42.225178] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.799 [2024-07-12 17:13:42.234493] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.799 [2024-07-12 17:13:42.234882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.799 [2024-07-12 17:13:42.234908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.799 [2024-07-12 17:13:42.234924] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.799 [2024-07-12 17:13:42.235126] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.799 [2024-07-12 17:13:42.235313] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.799 [2024-07-12 17:13:42.235333] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.799 [2024-07-12 17:13:42.235347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.799 [2024-07-12 17:13:42.238583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.799 [2024-07-12 17:13:42.247673] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.799 [2024-07-12 17:13:42.248081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.799 [2024-07-12 17:13:42.248129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.799 [2024-07-12 17:13:42.248144] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.799 [2024-07-12 17:13:42.248327] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.799 [2024-07-12 17:13:42.248515] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.799 [2024-07-12 17:13:42.248533] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.799 [2024-07-12 17:13:42.248545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.799 [2024-07-12 17:13:42.251723] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.799 [2024-07-12 17:13:42.260991] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.799 [2024-07-12 17:13:42.261436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.799 [2024-07-12 17:13:42.261484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.799 [2024-07-12 17:13:42.261498] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.799 [2024-07-12 17:13:42.261688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.799 [2024-07-12 17:13:42.262087] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.799 [2024-07-12 17:13:42.262109] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.799 [2024-07-12 17:13:42.262122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.799 [2024-07-12 17:13:42.264910] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.799 [2024-07-12 17:13:42.273997] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.799 [2024-07-12 17:13:42.274402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.799 [2024-07-12 17:13:42.274428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.799 [2024-07-12 17:13:42.274443] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.799 [2024-07-12 17:13:42.274628] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.799 [2024-07-12 17:13:42.274846] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.799 [2024-07-12 17:13:42.274867] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.799 [2024-07-12 17:13:42.274880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.799 [2024-07-12 17:13:42.277724] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.799 [2024-07-12 17:13:42.287185] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.799 [2024-07-12 17:13:42.287582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.799 [2024-07-12 17:13:42.287607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.799 [2024-07-12 17:13:42.287621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.799 [2024-07-12 17:13:42.287834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.799 [2024-07-12 17:13:42.288042] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.799 [2024-07-12 17:13:42.288061] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.799 [2024-07-12 17:13:42.288073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.799 [2024-07-12 17:13:42.290858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.799 [2024-07-12 17:13:42.300286] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.799 [2024-07-12 17:13:42.300689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.799 [2024-07-12 17:13:42.300736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.799 [2024-07-12 17:13:42.300759] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.799 [2024-07-12 17:13:42.300944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.799 [2024-07-12 17:13:42.301130] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.799 [2024-07-12 17:13:42.301149] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.799 [2024-07-12 17:13:42.301161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.799 [2024-07-12 17:13:42.303912] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.799 [2024-07-12 17:13:42.313299] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.799 [2024-07-12 17:13:42.313688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.799 [2024-07-12 17:13:42.313747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.799 [2024-07-12 17:13:42.313764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.799 [2024-07-12 17:13:42.313949] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.799 [2024-07-12 17:13:42.314139] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.799 [2024-07-12 17:13:42.314159] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.799 [2024-07-12 17:13:42.314172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.799 [2024-07-12 17:13:42.316920] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.800 [2024-07-12 17:13:42.326315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.800 [2024-07-12 17:13:42.326729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.800 [2024-07-12 17:13:42.326786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.800 [2024-07-12 17:13:42.326800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.800 [2024-07-12 17:13:42.326984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.800 [2024-07-12 17:13:42.327171] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.800 [2024-07-12 17:13:42.327189] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.800 [2024-07-12 17:13:42.327202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.800 [2024-07-12 17:13:42.329954] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.800 [2024-07-12 17:13:42.339345] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.800 [2024-07-12 17:13:42.339725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.800 [2024-07-12 17:13:42.339782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.800 [2024-07-12 17:13:42.339796] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.800 [2024-07-12 17:13:42.339981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.800 [2024-07-12 17:13:42.340168] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.800 [2024-07-12 17:13:42.340187] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.800 [2024-07-12 17:13:42.340199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.800 [2024-07-12 17:13:42.342990] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.800 [2024-07-12 17:13:42.352374] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.800 [2024-07-12 17:13:42.352733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.800 [2024-07-12 17:13:42.352781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.800 [2024-07-12 17:13:42.352800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.800 [2024-07-12 17:13:42.352985] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.800 [2024-07-12 17:13:42.353172] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.800 [2024-07-12 17:13:42.353191] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.800 [2024-07-12 17:13:42.353204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.800 [2024-07-12 17:13:42.355963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.800 [2024-07-12 17:13:42.365359] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.800 [2024-07-12 17:13:42.365789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.800 [2024-07-12 17:13:42.365819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.800 [2024-07-12 17:13:42.365833] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.800 [2024-07-12 17:13:42.366017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.800 [2024-07-12 17:13:42.366204] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.800 [2024-07-12 17:13:42.366222] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.800 [2024-07-12 17:13:42.366234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.800 [2024-07-12 17:13:42.368986] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.800 [2024-07-12 17:13:42.378388] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.800 [2024-07-12 17:13:42.378755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.800 [2024-07-12 17:13:42.378781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.800 [2024-07-12 17:13:42.378795] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.800 [2024-07-12 17:13:42.378979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.800 [2024-07-12 17:13:42.379166] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.800 [2024-07-12 17:13:42.379184] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.800 [2024-07-12 17:13:42.379196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.800 [2024-07-12 17:13:42.382064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.800 [2024-07-12 17:13:42.391478] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.800 [2024-07-12 17:13:42.391876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.800 [2024-07-12 17:13:42.391902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.800 [2024-07-12 17:13:42.391916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.800 [2024-07-12 17:13:42.392118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.800 [2024-07-12 17:13:42.392305] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.800 [2024-07-12 17:13:42.392323] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.800 [2024-07-12 17:13:42.392335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.800 [2024-07-12 17:13:42.395290] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.800 [2024-07-12 17:13:42.405052] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.800 [2024-07-12 17:13:42.405497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.800 [2024-07-12 17:13:42.405546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.800 [2024-07-12 17:13:42.405565] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.800 [2024-07-12 17:13:42.405788] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.800 [2024-07-12 17:13:42.406006] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.800 [2024-07-12 17:13:42.406046] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.800 [2024-07-12 17:13:42.406061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.800 [2024-07-12 17:13:42.409263] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.800 [2024-07-12 17:13:42.418347] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.800 [2024-07-12 17:13:42.418670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.800 [2024-07-12 17:13:42.418695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.800 [2024-07-12 17:13:42.418710] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.800 [2024-07-12 17:13:42.418945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.800 [2024-07-12 17:13:42.419177] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.800 [2024-07-12 17:13:42.419197] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.800 [2024-07-12 17:13:42.419209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.800 [2024-07-12 17:13:42.422288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.800 [2024-07-12 17:13:42.431936] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.800 [2024-07-12 17:13:42.432344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.800 [2024-07-12 17:13:42.432369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.800 [2024-07-12 17:13:42.432383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.800 [2024-07-12 17:13:42.432567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.800 [2024-07-12 17:13:42.432798] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.800 [2024-07-12 17:13:42.432820] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.800 [2024-07-12 17:13:42.432834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.800 [2024-07-12 17:13:42.435833] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.800 [2024-07-12 17:13:42.445128] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.800 [2024-07-12 17:13:42.445524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.800 [2024-07-12 17:13:42.445575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.800 [2024-07-12 17:13:42.445589] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.800 [2024-07-12 17:13:42.445787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.800 [2024-07-12 17:13:42.445985] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.800 [2024-07-12 17:13:42.446005] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.800 [2024-07-12 17:13:42.446018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.800 [2024-07-12 17:13:42.448885] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.800 [2024-07-12 17:13:42.458339] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.800 [2024-07-12 17:13:42.458804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.800 [2024-07-12 17:13:42.458831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.800 [2024-07-12 17:13:42.458846] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.800 [2024-07-12 17:13:42.459035] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.800 [2024-07-12 17:13:42.459246] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.800 [2024-07-12 17:13:42.459266] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.800 [2024-07-12 17:13:42.459278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.800 [2024-07-12 17:13:42.462145] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.800 [2024-07-12 17:13:42.471512] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.800 [2024-07-12 17:13:42.471866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.800 [2024-07-12 17:13:42.471892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.800 [2024-07-12 17:13:42.471907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.800 [2024-07-12 17:13:42.472128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.800 [2024-07-12 17:13:42.472331] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.800 [2024-07-12 17:13:42.472351] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.801 [2024-07-12 17:13:42.472365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.801 [2024-07-12 17:13:42.475235] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.801 [2024-07-12 17:13:42.484689] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.801 [2024-07-12 17:13:42.485063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.801 [2024-07-12 17:13:42.485088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:42.801 [2024-07-12 17:13:42.485103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:42.801 [2024-07-12 17:13:42.485287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:42.801 [2024-07-12 17:13:42.485474] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.801 [2024-07-12 17:13:42.485493] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.801 [2024-07-12 17:13:42.485505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.801 [2024-07-12 17:13:42.488379] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.059 [2024-07-12 17:13:42.498264] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.059 [2024-07-12 17:13:42.498668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.059 [2024-07-12 17:13:42.498694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.059 [2024-07-12 17:13:42.498709] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.059 [2024-07-12 17:13:42.498928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.059 [2024-07-12 17:13:42.499141] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.059 [2024-07-12 17:13:42.499177] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.059 [2024-07-12 17:13:42.499190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.059 [2024-07-12 17:13:42.502566] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.059 [2024-07-12 17:13:42.511925] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.059 [2024-07-12 17:13:42.512365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.059 [2024-07-12 17:13:42.512391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.059 [2024-07-12 17:13:42.512406] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.059 [2024-07-12 17:13:42.512600] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.059 [2024-07-12 17:13:42.512834] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.059 [2024-07-12 17:13:42.512857] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.059 [2024-07-12 17:13:42.512871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.059 [2024-07-12 17:13:42.516189] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.059 [2024-07-12 17:13:42.525620] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.059 [2024-07-12 17:13:42.526072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.059 [2024-07-12 17:13:42.526125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.059 [2024-07-12 17:13:42.526140] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.059 [2024-07-12 17:13:42.526335] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.059 [2024-07-12 17:13:42.526567] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.059 [2024-07-12 17:13:42.526586] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.059 [2024-07-12 17:13:42.526600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.059 [2024-07-12 17:13:42.529829] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.059 [2024-07-12 17:13:42.538948] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.059 [2024-07-12 17:13:42.539386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.059 [2024-07-12 17:13:42.539415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.059 [2024-07-12 17:13:42.539433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.059 [2024-07-12 17:13:42.539618] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.059 [2024-07-12 17:13:42.539847] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.059 [2024-07-12 17:13:42.539869] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.060 [2024-07-12 17:13:42.539884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.060 [2024-07-12 17:13:42.542863] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.060 [2024-07-12 17:13:42.552259] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.060 [2024-07-12 17:13:42.552656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.060 [2024-07-12 17:13:42.552708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.060 [2024-07-12 17:13:42.552723] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.060 [2024-07-12 17:13:42.552960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.060 [2024-07-12 17:13:42.553177] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.060 [2024-07-12 17:13:42.553198] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.060 [2024-07-12 17:13:42.553211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.060 [2024-07-12 17:13:42.556191] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.060 [2024-07-12 17:13:42.565552] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.060 [2024-07-12 17:13:42.565961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.060 [2024-07-12 17:13:42.565987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.060 [2024-07-12 17:13:42.566001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.060 [2024-07-12 17:13:42.566199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.060 [2024-07-12 17:13:42.566386] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.060 [2024-07-12 17:13:42.566405] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.060 [2024-07-12 17:13:42.566417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.060 [2024-07-12 17:13:42.569284] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.060 [2024-07-12 17:13:42.578600] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.060 [2024-07-12 17:13:42.578979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.060 [2024-07-12 17:13:42.579003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.060 [2024-07-12 17:13:42.579018] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.060 [2024-07-12 17:13:42.579201] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.060 [2024-07-12 17:13:42.579388] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.060 [2024-07-12 17:13:42.579411] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.060 [2024-07-12 17:13:42.579424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.060 [2024-07-12 17:13:42.582292] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.060 [2024-07-12 17:13:42.591706] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.060 [2024-07-12 17:13:42.592119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.060 [2024-07-12 17:13:42.592143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.060 [2024-07-12 17:13:42.592157] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.060 [2024-07-12 17:13:42.592340] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.060 [2024-07-12 17:13:42.592528] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.060 [2024-07-12 17:13:42.592546] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.060 [2024-07-12 17:13:42.592558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.060 [2024-07-12 17:13:42.595384] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.060 [2024-07-12 17:13:42.604779] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.060 [2024-07-12 17:13:42.605125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.060 [2024-07-12 17:13:42.605149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.060 [2024-07-12 17:13:42.605163] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.060 [2024-07-12 17:13:42.605347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.060 [2024-07-12 17:13:42.605533] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.060 [2024-07-12 17:13:42.605551] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.060 [2024-07-12 17:13:42.605564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.060 [2024-07-12 17:13:42.608392] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.060 [2024-07-12 17:13:42.617891] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.060 [2024-07-12 17:13:42.618289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.060 [2024-07-12 17:13:42.618324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.060 [2024-07-12 17:13:42.618337] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.060 [2024-07-12 17:13:42.618521] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.060 [2024-07-12 17:13:42.618708] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.060 [2024-07-12 17:13:42.618752] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.060 [2024-07-12 17:13:42.618767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.060 [2024-07-12 17:13:42.621608] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.060 [2024-07-12 17:13:42.631033] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.060 [2024-07-12 17:13:42.631434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.060 [2024-07-12 17:13:42.631459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.060 [2024-07-12 17:13:42.631479] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.060 [2024-07-12 17:13:42.631662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.060 [2024-07-12 17:13:42.631880] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.060 [2024-07-12 17:13:42.631901] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.060 [2024-07-12 17:13:42.631913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.060 [2024-07-12 17:13:42.634772] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.060 [2024-07-12 17:13:42.644092] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.060 [2024-07-12 17:13:42.644434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.060 [2024-07-12 17:13:42.644458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.060 [2024-07-12 17:13:42.644472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.060 [2024-07-12 17:13:42.644655] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.060 [2024-07-12 17:13:42.644852] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.060 [2024-07-12 17:13:42.644871] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.060 [2024-07-12 17:13:42.644884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.060 [2024-07-12 17:13:42.647702] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.060 [2024-07-12 17:13:42.657161] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.060 [2024-07-12 17:13:42.657545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.060 [2024-07-12 17:13:42.657570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.060 [2024-07-12 17:13:42.657584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.060 [2024-07-12 17:13:42.657802] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.060 [2024-07-12 17:13:42.657995] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.060 [2024-07-12 17:13:42.658030] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.060 [2024-07-12 17:13:42.658043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.060 [2024-07-12 17:13:42.660871] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.060 [2024-07-12 17:13:42.670260] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.060 [2024-07-12 17:13:42.670599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.061 [2024-07-12 17:13:42.670623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.061 [2024-07-12 17:13:42.670637] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.061 [2024-07-12 17:13:42.670836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.061 [2024-07-12 17:13:42.671023] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.061 [2024-07-12 17:13:42.671042] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.061 [2024-07-12 17:13:42.671055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.061 [2024-07-12 17:13:42.673907] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.061 [2024-07-12 17:13:42.683346] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.061 [2024-07-12 17:13:42.683754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.061 [2024-07-12 17:13:42.683781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.061 [2024-07-12 17:13:42.683795] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.061 [2024-07-12 17:13:42.683984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.061 [2024-07-12 17:13:42.684187] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.061 [2024-07-12 17:13:42.684207] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.061 [2024-07-12 17:13:42.684219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.061 [2024-07-12 17:13:42.687047] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.061 [2024-07-12 17:13:42.696496] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.061 [2024-07-12 17:13:42.696874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.061 [2024-07-12 17:13:42.696899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.061 [2024-07-12 17:13:42.696913] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.061 [2024-07-12 17:13:42.697096] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.061 [2024-07-12 17:13:42.697283] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.061 [2024-07-12 17:13:42.697303] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.061 [2024-07-12 17:13:42.697315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.061 [2024-07-12 17:13:42.700066] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.061 [2024-07-12 17:13:42.709527] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.061 [2024-07-12 17:13:42.709890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.061 [2024-07-12 17:13:42.709915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.061 [2024-07-12 17:13:42.709929] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.061 [2024-07-12 17:13:42.710130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.061 [2024-07-12 17:13:42.710317] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.061 [2024-07-12 17:13:42.710336] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.061 [2024-07-12 17:13:42.710352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.061 [2024-07-12 17:13:42.713349] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.061 [2024-07-12 17:13:42.722762] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.061 [2024-07-12 17:13:42.723085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.061 [2024-07-12 17:13:42.723110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.061 [2024-07-12 17:13:42.723124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.061 [2024-07-12 17:13:42.723308] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.061 [2024-07-12 17:13:42.723495] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.061 [2024-07-12 17:13:42.723514] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.061 [2024-07-12 17:13:42.723526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.061 [2024-07-12 17:13:42.726386] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.061 [2024-07-12 17:13:42.735875] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.061 [2024-07-12 17:13:42.736210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.061 [2024-07-12 17:13:42.736261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.061 [2024-07-12 17:13:42.736275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.061 [2024-07-12 17:13:42.736459] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.061 [2024-07-12 17:13:42.736646] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.061 [2024-07-12 17:13:42.736665] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.061 [2024-07-12 17:13:42.736678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.061 [2024-07-12 17:13:42.739546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.061 [2024-07-12 17:13:42.749061] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.061 [2024-07-12 17:13:42.749483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.061 [2024-07-12 17:13:42.749524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.061 [2024-07-12 17:13:42.749540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.061 [2024-07-12 17:13:42.749770] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.061 [2024-07-12 17:13:42.749997] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.061 [2024-07-12 17:13:42.750033] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.061 [2024-07-12 17:13:42.750048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.319 [2024-07-12 17:13:42.753510] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.319 [2024-07-12 17:13:42.762302] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.319 [2024-07-12 17:13:42.762681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.319 [2024-07-12 17:13:42.762709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.319 [2024-07-12 17:13:42.762724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.319 [2024-07-12 17:13:42.762933] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.319 [2024-07-12 17:13:42.763140] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.319 [2024-07-12 17:13:42.763160] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.319 [2024-07-12 17:13:42.763173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.319 [2024-07-12 17:13:42.766045] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.319 [2024-07-12 17:13:42.775476] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.319 [2024-07-12 17:13:42.775831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.319 [2024-07-12 17:13:42.775857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.319 [2024-07-12 17:13:42.775872] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.319 [2024-07-12 17:13:42.776056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.319 [2024-07-12 17:13:42.776244] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.319 [2024-07-12 17:13:42.776264] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.319 [2024-07-12 17:13:42.776277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.319 [2024-07-12 17:13:42.779147] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.319 [2024-07-12 17:13:42.788609] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.319 [2024-07-12 17:13:42.788943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.319 [2024-07-12 17:13:42.788969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.319 [2024-07-12 17:13:42.788984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.319 [2024-07-12 17:13:42.789185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.319 [2024-07-12 17:13:42.789373] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.319 [2024-07-12 17:13:42.789391] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.319 [2024-07-12 17:13:42.789404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.319 [2024-07-12 17:13:42.792345] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.319 [2024-07-12 17:13:42.801678] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.319 [2024-07-12 17:13:42.802031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.319 [2024-07-12 17:13:42.802070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.319 [2024-07-12 17:13:42.802084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.319 [2024-07-12 17:13:42.802267] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.319 [2024-07-12 17:13:42.802459] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.319 [2024-07-12 17:13:42.802479] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.319 [2024-07-12 17:13:42.802491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.319 [2024-07-12 17:13:42.805357] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.319 [2024-07-12 17:13:42.814680] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.319 [2024-07-12 17:13:42.815040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.319 [2024-07-12 17:13:42.815080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.319 [2024-07-12 17:13:42.815094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.319 [2024-07-12 17:13:42.815278] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.319 [2024-07-12 17:13:42.815465] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.319 [2024-07-12 17:13:42.815484] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.319 [2024-07-12 17:13:42.815497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.319 [2024-07-12 17:13:42.818359] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.319 [2024-07-12 17:13:42.827805] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.319 [2024-07-12 17:13:42.828254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.319 [2024-07-12 17:13:42.828303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.319 [2024-07-12 17:13:42.828316] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.319 [2024-07-12 17:13:42.828499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.319 [2024-07-12 17:13:42.828686] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.319 [2024-07-12 17:13:42.828705] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.319 [2024-07-12 17:13:42.828732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.319 [2024-07-12 17:13:42.831753] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.319 [2024-07-12 17:13:42.841190] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.319 [2024-07-12 17:13:42.841510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.319 [2024-07-12 17:13:42.841535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.319 [2024-07-12 17:13:42.841550] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.319 [2024-07-12 17:13:42.841758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.319 [2024-07-12 17:13:42.841986] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.319 [2024-07-12 17:13:42.842012] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.319 [2024-07-12 17:13:42.842026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.319 [2024-07-12 17:13:42.845050] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.319 [2024-07-12 17:13:42.854564] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.319 [2024-07-12 17:13:42.854923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.319 [2024-07-12 17:13:42.854951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.319 [2024-07-12 17:13:42.854967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.319 [2024-07-12 17:13:42.855189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.319 [2024-07-12 17:13:42.855376] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.319 [2024-07-12 17:13:42.855395] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.319 [2024-07-12 17:13:42.855408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.319 [2024-07-12 17:13:42.858459] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.319 [2024-07-12 17:13:42.867982] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.319 [2024-07-12 17:13:42.868391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.319 [2024-07-12 17:13:42.868417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.319 [2024-07-12 17:13:42.868432] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.319 [2024-07-12 17:13:42.868621] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.319 [2024-07-12 17:13:42.868872] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.319 [2024-07-12 17:13:42.868895] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.319 [2024-07-12 17:13:42.868910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.319 [2024-07-12 17:13:42.871960] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.319 [2024-07-12 17:13:42.881175] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.319 [2024-07-12 17:13:42.881631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.319 [2024-07-12 17:13:42.881656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.319 [2024-07-12 17:13:42.881671] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.319 [2024-07-12 17:13:42.881890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.319 [2024-07-12 17:13:42.882104] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.319 [2024-07-12 17:13:42.882125] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.319 [2024-07-12 17:13:42.882138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.319 [2024-07-12 17:13:42.885086] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.319 [2024-07-12 17:13:42.894451] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.319 [2024-07-12 17:13:42.894853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.319 [2024-07-12 17:13:42.894879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.319 [2024-07-12 17:13:42.894897] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.319 [2024-07-12 17:13:42.895087] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.319 [2024-07-12 17:13:42.895279] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.319 [2024-07-12 17:13:42.895298] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.319 [2024-07-12 17:13:42.895310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.319 [2024-07-12 17:13:42.898259] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.319 [2024-07-12 17:13:42.907618] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.319 [2024-07-12 17:13:42.908053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.319 [2024-07-12 17:13:42.908079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.319 [2024-07-12 17:13:42.908093] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.319 [2024-07-12 17:13:42.908282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.319 [2024-07-12 17:13:42.908475] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.319 [2024-07-12 17:13:42.908493] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.319 [2024-07-12 17:13:42.908506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.319 [2024-07-12 17:13:42.911456] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.319 [2024-07-12 17:13:42.920832] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.319 [2024-07-12 17:13:42.921230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.319 [2024-07-12 17:13:42.921255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.319 [2024-07-12 17:13:42.921269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.319 [2024-07-12 17:13:42.921459] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.319 [2024-07-12 17:13:42.921651] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.319 [2024-07-12 17:13:42.921672] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.319 [2024-07-12 17:13:42.921685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.319 [2024-07-12 17:13:42.924643] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.319 [2024-07-12 17:13:42.934083] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.319 [2024-07-12 17:13:42.934474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.319 [2024-07-12 17:13:42.934511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.319 [2024-07-12 17:13:42.934525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.319 [2024-07-12 17:13:42.934714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.319 [2024-07-12 17:13:42.934938] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.319 [2024-07-12 17:13:42.934962] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.320 [2024-07-12 17:13:42.934976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.320 [2024-07-12 17:13:42.937922] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.320 [2024-07-12 17:13:42.947293] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.320 [2024-07-12 17:13:42.947629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.320 [2024-07-12 17:13:42.947656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.320 [2024-07-12 17:13:42.947671] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.320 [2024-07-12 17:13:42.947891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.320 [2024-07-12 17:13:42.948105] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.320 [2024-07-12 17:13:42.948126] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.320 [2024-07-12 17:13:42.948140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.320 [2024-07-12 17:13:42.951084] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.320 [2024-07-12 17:13:42.960601] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.320 [2024-07-12 17:13:42.961059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.320 [2024-07-12 17:13:42.961095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.320 [2024-07-12 17:13:42.961109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.320 [2024-07-12 17:13:42.961298] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.320 [2024-07-12 17:13:42.961491] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.320 [2024-07-12 17:13:42.961511] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.320 [2024-07-12 17:13:42.961524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.320 [2024-07-12 17:13:42.964473] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.320 [2024-07-12 17:13:42.973880] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.320 [2024-07-12 17:13:42.974323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.320 [2024-07-12 17:13:42.974348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.320 [2024-07-12 17:13:42.974363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.320 [2024-07-12 17:13:42.974563] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.320 [2024-07-12 17:13:42.974783] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.320 [2024-07-12 17:13:42.974804] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.320 [2024-07-12 17:13:42.974818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.320 [2024-07-12 17:13:42.977768] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.320 [2024-07-12 17:13:42.987146] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.320 [2024-07-12 17:13:42.987561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.320 [2024-07-12 17:13:42.987586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.320 [2024-07-12 17:13:42.987601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.320 [2024-07-12 17:13:42.987819] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.320 [2024-07-12 17:13:42.988018] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.320 [2024-07-12 17:13:42.988053] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.320 [2024-07-12 17:13:42.988066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.320 [2024-07-12 17:13:42.990996] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.320 [2024-07-12 17:13:43.000505] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.320 [2024-07-12 17:13:43.000887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.320 [2024-07-12 17:13:43.000914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.320 [2024-07-12 17:13:43.000929] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.320 [2024-07-12 17:13:43.001156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.320 [2024-07-12 17:13:43.001351] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.320 [2024-07-12 17:13:43.001371] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.320 [2024-07-12 17:13:43.001385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.320 [2024-07-12 17:13:43.004663] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.578 [2024-07-12 17:13:43.014243] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.578 [2024-07-12 17:13:43.014652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.578 [2024-07-12 17:13:43.014677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.578 [2024-07-12 17:13:43.014692] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.578 [2024-07-12 17:13:43.014915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.578 [2024-07-12 17:13:43.015150] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.578 [2024-07-12 17:13:43.015170] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.578 [2024-07-12 17:13:43.015183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.578 [2024-07-12 17:13:43.018498] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.578 [2024-07-12 17:13:43.027474] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.578 [2024-07-12 17:13:43.027843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.578 [2024-07-12 17:13:43.027870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.578 [2024-07-12 17:13:43.027892] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.578 [2024-07-12 17:13:43.028103] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.578 [2024-07-12 17:13:43.028297] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.578 [2024-07-12 17:13:43.028318] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.578 [2024-07-12 17:13:43.028331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.578 [2024-07-12 17:13:43.031279] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.578 [2024-07-12 17:13:43.040634] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.578 [2024-07-12 17:13:43.041049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.578 [2024-07-12 17:13:43.041075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.578 [2024-07-12 17:13:43.041090] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.578 [2024-07-12 17:13:43.041280] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.578 [2024-07-12 17:13:43.041472] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.578 [2024-07-12 17:13:43.041491] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.578 [2024-07-12 17:13:43.041504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.578 [2024-07-12 17:13:43.044453] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.578 [2024-07-12 17:13:43.053889] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.578 [2024-07-12 17:13:43.054280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.578 [2024-07-12 17:13:43.054305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.578 [2024-07-12 17:13:43.054327] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.578 [2024-07-12 17:13:43.054516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.578 [2024-07-12 17:13:43.054708] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.578 [2024-07-12 17:13:43.054761] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.578 [2024-07-12 17:13:43.054778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.578 [2024-07-12 17:13:43.057715] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.578 [2024-07-12 17:13:43.067119] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.578 [2024-07-12 17:13:43.067484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.578 [2024-07-12 17:13:43.067510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.578 [2024-07-12 17:13:43.067525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.578 [2024-07-12 17:13:43.067729] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.578 [2024-07-12 17:13:43.067938] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.578 [2024-07-12 17:13:43.067969] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.578 [2024-07-12 17:13:43.067987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.578 [2024-07-12 17:13:43.070933] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.578 [2024-07-12 17:13:43.080320] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.578 [2024-07-12 17:13:43.080714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.578 [2024-07-12 17:13:43.080761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.578 [2024-07-12 17:13:43.080779] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.578 [2024-07-12 17:13:43.080974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.578 [2024-07-12 17:13:43.081182] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.578 [2024-07-12 17:13:43.081202] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.579 [2024-07-12 17:13:43.081215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.579 [2024-07-12 17:13:43.084163] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.579 [2024-07-12 17:13:43.093517] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.579 [2024-07-12 17:13:43.093946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.579 [2024-07-12 17:13:43.093973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.579 [2024-07-12 17:13:43.093989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.579 [2024-07-12 17:13:43.094196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.579 [2024-07-12 17:13:43.094388] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.579 [2024-07-12 17:13:43.094409] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.579 [2024-07-12 17:13:43.094422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.579 [2024-07-12 17:13:43.097374] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.579 [2024-07-12 17:13:43.106746] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.579 [2024-07-12 17:13:43.107177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.579 [2024-07-12 17:13:43.107202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.579 [2024-07-12 17:13:43.107216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.579 [2024-07-12 17:13:43.107406] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.579 [2024-07-12 17:13:43.107597] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.579 [2024-07-12 17:13:43.107616] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.579 [2024-07-12 17:13:43.107629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.579 [2024-07-12 17:13:43.110620] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.579 [2024-07-12 17:13:43.120008] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.579 [2024-07-12 17:13:43.120399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.579 [2024-07-12 17:13:43.120425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.579 [2024-07-12 17:13:43.120440] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.579 [2024-07-12 17:13:43.120630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.579 [2024-07-12 17:13:43.120851] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.579 [2024-07-12 17:13:43.120873] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.579 [2024-07-12 17:13:43.120887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.579 [2024-07-12 17:13:43.123843] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.579 [2024-07-12 17:13:43.133229] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.579 [2024-07-12 17:13:43.133598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.579 [2024-07-12 17:13:43.133633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.579 [2024-07-12 17:13:43.133648] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.579 [2024-07-12 17:13:43.133867] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.579 [2024-07-12 17:13:43.134080] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.579 [2024-07-12 17:13:43.134101] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.579 [2024-07-12 17:13:43.134113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.579 [2024-07-12 17:13:43.137060] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.579 [2024-07-12 17:13:43.146417] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.579 [2024-07-12 17:13:43.146802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.579 [2024-07-12 17:13:43.146829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.579 [2024-07-12 17:13:43.146845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.579 [2024-07-12 17:13:43.147055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.579 [2024-07-12 17:13:43.147247] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.579 [2024-07-12 17:13:43.147268] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.579 [2024-07-12 17:13:43.147281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.579 [2024-07-12 17:13:43.150229] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.579 [2024-07-12 17:13:43.159583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.579 [2024-07-12 17:13:43.160030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.579 [2024-07-12 17:13:43.160056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.579 [2024-07-12 17:13:43.160070] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.579 [2024-07-12 17:13:43.160264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.579 [2024-07-12 17:13:43.160457] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.579 [2024-07-12 17:13:43.160476] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.579 [2024-07-12 17:13:43.160489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.579 [2024-07-12 17:13:43.163439] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.579 [2024-07-12 17:13:43.172822] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.579 [2024-07-12 17:13:43.173225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.579 [2024-07-12 17:13:43.173250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.579 [2024-07-12 17:13:43.173265] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.579 [2024-07-12 17:13:43.173468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.579 [2024-07-12 17:13:43.173663] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.579 [2024-07-12 17:13:43.173684] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.579 [2024-07-12 17:13:43.173697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.579 [2024-07-12 17:13:43.176822] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.579 [2024-07-12 17:13:43.186006] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.579 [2024-07-12 17:13:43.186395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.579 [2024-07-12 17:13:43.186421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.579 [2024-07-12 17:13:43.186436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.579 [2024-07-12 17:13:43.186625] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.579 [2024-07-12 17:13:43.186847] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.579 [2024-07-12 17:13:43.186867] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.579 [2024-07-12 17:13:43.186881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.579 [2024-07-12 17:13:43.189825] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.579 [2024-07-12 17:13:43.199240] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.579 [2024-07-12 17:13:43.199640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.579 [2024-07-12 17:13:43.199666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.579 [2024-07-12 17:13:43.199681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.579 [2024-07-12 17:13:43.199899] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.579 [2024-07-12 17:13:43.200111] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.579 [2024-07-12 17:13:43.200132] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.579 [2024-07-12 17:13:43.200150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.579 [2024-07-12 17:13:43.203097] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.579 [2024-07-12 17:13:43.212519] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.579 [2024-07-12 17:13:43.212952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.579 [2024-07-12 17:13:43.212979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.579 [2024-07-12 17:13:43.212994] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.579 [2024-07-12 17:13:43.213199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.579 [2024-07-12 17:13:43.213392] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.579 [2024-07-12 17:13:43.213410] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.579 [2024-07-12 17:13:43.213423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.579 [2024-07-12 17:13:43.216374] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.579 [2024-07-12 17:13:43.225812] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.579 [2024-07-12 17:13:43.226222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.579 [2024-07-12 17:13:43.226256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.579 [2024-07-12 17:13:43.226271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.579 [2024-07-12 17:13:43.226460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.579 [2024-07-12 17:13:43.226652] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.580 [2024-07-12 17:13:43.226672] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.580 [2024-07-12 17:13:43.226684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.580 [2024-07-12 17:13:43.229634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.580 [2024-07-12 17:13:43.239070] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.580 [2024-07-12 17:13:43.239477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.580 [2024-07-12 17:13:43.239502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.580 [2024-07-12 17:13:43.239517] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.580 [2024-07-12 17:13:43.239708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.580 [2024-07-12 17:13:43.239930] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.580 [2024-07-12 17:13:43.239950] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.580 [2024-07-12 17:13:43.239963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.580 [2024-07-12 17:13:43.242908] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.580 [2024-07-12 17:13:43.252277] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.580 [2024-07-12 17:13:43.252702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.580 [2024-07-12 17:13:43.252732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.580 [2024-07-12 17:13:43.252772] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.580 [2024-07-12 17:13:43.252987] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.580 [2024-07-12 17:13:43.253216] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.580 [2024-07-12 17:13:43.253238] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.580 [2024-07-12 17:13:43.253252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.580 [2024-07-12 17:13:43.256703] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.580 [2024-07-12 17:13:43.265567] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.580 [2024-07-12 17:13:43.266029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.580 [2024-07-12 17:13:43.266055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.580 [2024-07-12 17:13:43.266070] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.580 [2024-07-12 17:13:43.266282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.580 [2024-07-12 17:13:43.266475] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.580 [2024-07-12 17:13:43.266494] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.580 [2024-07-12 17:13:43.266506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.580 [2024-07-12 17:13:43.269783] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.839 [2024-07-12 17:13:43.279127] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.839 [2024-07-12 17:13:43.279493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.839 [2024-07-12 17:13:43.279519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.839 [2024-07-12 17:13:43.279534] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.839 [2024-07-12 17:13:43.279749] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.839 [2024-07-12 17:13:43.279969] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.839 [2024-07-12 17:13:43.279991] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.839 [2024-07-12 17:13:43.280005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.839 [2024-07-12 17:13:43.283204] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.839 [2024-07-12 17:13:43.292395] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.839 [2024-07-12 17:13:43.292781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.839 [2024-07-12 17:13:43.292807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.839 [2024-07-12 17:13:43.292822] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.839 [2024-07-12 17:13:43.293031] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.839 [2024-07-12 17:13:43.293229] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.839 [2024-07-12 17:13:43.293250] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.839 [2024-07-12 17:13:43.293263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.839 [2024-07-12 17:13:43.296214] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.839 [2024-07-12 17:13:43.305563] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.839 [2024-07-12 17:13:43.306010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.839 [2024-07-12 17:13:43.306051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.839 [2024-07-12 17:13:43.306066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.839 [2024-07-12 17:13:43.306255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.839 [2024-07-12 17:13:43.306448] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.839 [2024-07-12 17:13:43.306467] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.839 [2024-07-12 17:13:43.306479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.839 [2024-07-12 17:13:43.309465] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.839 [2024-07-12 17:13:43.318844] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.839 [2024-07-12 17:13:43.319257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.839 [2024-07-12 17:13:43.319293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.839 [2024-07-12 17:13:43.319307] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.840 [2024-07-12 17:13:43.319497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.840 [2024-07-12 17:13:43.319689] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.840 [2024-07-12 17:13:43.319707] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.840 [2024-07-12 17:13:43.319735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.840 [2024-07-12 17:13:43.322672] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.840 [2024-07-12 17:13:43.332083] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.840 [2024-07-12 17:13:43.332485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.840 [2024-07-12 17:13:43.332511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.840 [2024-07-12 17:13:43.332538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.840 [2024-07-12 17:13:43.332753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.840 [2024-07-12 17:13:43.332952] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.840 [2024-07-12 17:13:43.332974] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.840 [2024-07-12 17:13:43.332987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.840 [2024-07-12 17:13:43.335934] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.840 [2024-07-12 17:13:43.345306] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.840 [2024-07-12 17:13:43.345706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.840 [2024-07-12 17:13:43.345752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.840 [2024-07-12 17:13:43.345769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.840 [2024-07-12 17:13:43.345966] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.840 [2024-07-12 17:13:43.346175] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.840 [2024-07-12 17:13:43.346195] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.840 [2024-07-12 17:13:43.346208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.840 [2024-07-12 17:13:43.349156] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.840 [2024-07-12 17:13:43.358511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.840 [2024-07-12 17:13:43.358950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.840 [2024-07-12 17:13:43.358977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.840 [2024-07-12 17:13:43.358992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.840 [2024-07-12 17:13:43.359199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.840 [2024-07-12 17:13:43.359392] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.840 [2024-07-12 17:13:43.359412] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.840 [2024-07-12 17:13:43.359425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.840 [2024-07-12 17:13:43.362375] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.840 [2024-07-12 17:13:43.371789] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.840 [2024-07-12 17:13:43.372143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.840 [2024-07-12 17:13:43.372169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.840 [2024-07-12 17:13:43.372184] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.840 [2024-07-12 17:13:43.372374] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.840 [2024-07-12 17:13:43.372565] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.840 [2024-07-12 17:13:43.372586] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.840 [2024-07-12 17:13:43.372599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.840 [2024-07-12 17:13:43.375556] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.840 [2024-07-12 17:13:43.384945] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.840 [2024-07-12 17:13:43.385337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.840 [2024-07-12 17:13:43.385363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.840 [2024-07-12 17:13:43.385382] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.840 [2024-07-12 17:13:43.385572] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.840 [2024-07-12 17:13:43.385794] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.840 [2024-07-12 17:13:43.385814] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.840 [2024-07-12 17:13:43.385827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.840 [2024-07-12 17:13:43.388767] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.840 [2024-07-12 17:13:43.398140] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.840 [2024-07-12 17:13:43.398527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.840 [2024-07-12 17:13:43.398553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.840 [2024-07-12 17:13:43.398568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.840 [2024-07-12 17:13:43.398784] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.840 [2024-07-12 17:13:43.398983] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.840 [2024-07-12 17:13:43.399004] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.840 [2024-07-12 17:13:43.399017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.840 [2024-07-12 17:13:43.401959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.840 [2024-07-12 17:13:43.411395] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.840 [2024-07-12 17:13:43.411799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.840 [2024-07-12 17:13:43.411826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.840 [2024-07-12 17:13:43.411841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.840 [2024-07-12 17:13:43.412051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.840 [2024-07-12 17:13:43.412243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.840 [2024-07-12 17:13:43.412263] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.840 [2024-07-12 17:13:43.412276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.840 [2024-07-12 17:13:43.415226] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.840 [2024-07-12 17:13:43.424588] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.840 [2024-07-12 17:13:43.424987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.840 [2024-07-12 17:13:43.425014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.840 [2024-07-12 17:13:43.425029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.840 [2024-07-12 17:13:43.425235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.840 [2024-07-12 17:13:43.425427] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.840 [2024-07-12 17:13:43.425452] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.840 [2024-07-12 17:13:43.425466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.840 [2024-07-12 17:13:43.428415] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.840 [2024-07-12 17:13:43.437981] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.840 [2024-07-12 17:13:43.438386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.840 [2024-07-12 17:13:43.438412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.840 [2024-07-12 17:13:43.438427] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.840 [2024-07-12 17:13:43.438615] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.840 [2024-07-12 17:13:43.438841] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.840 [2024-07-12 17:13:43.438862] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.840 [2024-07-12 17:13:43.438876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.840 [2024-07-12 17:13:43.441841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.840 [2024-07-12 17:13:43.451221] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.840 [2024-07-12 17:13:43.451649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.840 [2024-07-12 17:13:43.451674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.840 [2024-07-12 17:13:43.451688] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.840 [2024-07-12 17:13:43.451913] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.840 [2024-07-12 17:13:43.452125] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.840 [2024-07-12 17:13:43.452147] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.840 [2024-07-12 17:13:43.452159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.840 [2024-07-12 17:13:43.455106] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.840 [2024-07-12 17:13:43.464477] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.840 [2024-07-12 17:13:43.464866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.841 [2024-07-12 17:13:43.464893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.841 [2024-07-12 17:13:43.464909] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.841 [2024-07-12 17:13:43.465117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.841 [2024-07-12 17:13:43.465309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.841 [2024-07-12 17:13:43.465330] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.841 [2024-07-12 17:13:43.465343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.841 [2024-07-12 17:13:43.468288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.841 [2024-07-12 17:13:43.477660] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.841 [2024-07-12 17:13:43.478142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.841 [2024-07-12 17:13:43.478168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.841 [2024-07-12 17:13:43.478186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.841 [2024-07-12 17:13:43.478376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.841 [2024-07-12 17:13:43.478568] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.841 [2024-07-12 17:13:43.478587] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.841 [2024-07-12 17:13:43.478599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.841 [2024-07-12 17:13:43.481549] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.841 [2024-07-12 17:13:43.490934] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.841 [2024-07-12 17:13:43.491373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.841 [2024-07-12 17:13:43.491399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.841 [2024-07-12 17:13:43.491414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.841 [2024-07-12 17:13:43.491603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.841 [2024-07-12 17:13:43.491824] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.841 [2024-07-12 17:13:43.491844] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.841 [2024-07-12 17:13:43.491858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.841 [2024-07-12 17:13:43.494800] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.841 [2024-07-12 17:13:43.504177] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.841 [2024-07-12 17:13:43.504549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.841 [2024-07-12 17:13:43.504575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.841 [2024-07-12 17:13:43.504590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.841 [2024-07-12 17:13:43.504806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.841 [2024-07-12 17:13:43.505005] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.841 [2024-07-12 17:13:43.505025] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.841 [2024-07-12 17:13:43.505038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.841 [2024-07-12 17:13:43.508195] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.841 [2024-07-12 17:13:43.517537] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.841 [2024-07-12 17:13:43.517987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.841 [2024-07-12 17:13:43.518015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.841 [2024-07-12 17:13:43.518046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:43.841 [2024-07-12 17:13:43.518247] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:43.841 [2024-07-12 17:13:43.518445] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.841 [2024-07-12 17:13:43.518466] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.841 [2024-07-12 17:13:43.518480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.841 [2024-07-12 17:13:43.521460] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.841 [2024-07-12 17:13:43.531306] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.841 [2024-07-12 17:13:43.531729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.841 [2024-07-12 17:13:43.531779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:43.841 [2024-07-12 17:13:43.531797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.099 [2024-07-12 17:13:43.532011] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.099 [2024-07-12 17:13:43.532234] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.099 [2024-07-12 17:13:43.532271] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.099 [2024-07-12 17:13:43.532286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.099 [2024-07-12 17:13:43.535420] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.099 [2024-07-12 17:13:43.544698] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.099 [2024-07-12 17:13:43.545067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.099 [2024-07-12 17:13:43.545093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.099 [2024-07-12 17:13:43.545123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.099 [2024-07-12 17:13:43.545312] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.099 [2024-07-12 17:13:43.545508] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.099 [2024-07-12 17:13:43.545528] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.099 [2024-07-12 17:13:43.545540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.099 [2024-07-12 17:13:43.548592] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.099 [2024-07-12 17:13:43.558096] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.099 [2024-07-12 17:13:43.558484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.099 [2024-07-12 17:13:43.558511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.099 [2024-07-12 17:13:43.558526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.099 [2024-07-12 17:13:43.558750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.099 [2024-07-12 17:13:43.558970] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.099 [2024-07-12 17:13:43.558991] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.099 [2024-07-12 17:13:43.559010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.099 [2024-07-12 17:13:43.562097] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.099 [2024-07-12 17:13:43.571365] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.099 [2024-07-12 17:13:43.571783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.099 [2024-07-12 17:13:43.571810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.099 [2024-07-12 17:13:43.571827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.099 [2024-07-12 17:13:43.572042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.099 [2024-07-12 17:13:43.572247] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.099 [2024-07-12 17:13:43.572268] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.100 [2024-07-12 17:13:43.572281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.100 [2024-07-12 17:13:43.575474] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.100 [2024-07-12 17:13:43.584775] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.100 [2024-07-12 17:13:43.585113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.100 [2024-07-12 17:13:43.585138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.100 [2024-07-12 17:13:43.585152] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.100 [2024-07-12 17:13:43.585341] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.100 [2024-07-12 17:13:43.585535] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.100 [2024-07-12 17:13:43.585554] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.100 [2024-07-12 17:13:43.585567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.100 [2024-07-12 17:13:43.588519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.100 [2024-07-12 17:13:43.598102] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.100 [2024-07-12 17:13:43.598442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.100 [2024-07-12 17:13:43.598468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.100 [2024-07-12 17:13:43.598483] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.100 [2024-07-12 17:13:43.598672] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.100 [2024-07-12 17:13:43.598892] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.100 [2024-07-12 17:13:43.598913] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.100 [2024-07-12 17:13:43.598926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.100 [2024-07-12 17:13:43.601876] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.100 [2024-07-12 17:13:43.611366] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.100 [2024-07-12 17:13:43.611742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.100 [2024-07-12 17:13:43.611768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.100 [2024-07-12 17:13:43.611783] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.100 [2024-07-12 17:13:43.611977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.100 [2024-07-12 17:13:43.612187] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.100 [2024-07-12 17:13:43.612207] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.100 [2024-07-12 17:13:43.612220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.100 [2024-07-12 17:13:43.615172] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.100 [2024-07-12 17:13:43.624563] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.100 [2024-07-12 17:13:43.624889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.100 [2024-07-12 17:13:43.624916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.100 [2024-07-12 17:13:43.624931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.100 [2024-07-12 17:13:43.625138] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.100 [2024-07-12 17:13:43.625331] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.100 [2024-07-12 17:13:43.625351] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.100 [2024-07-12 17:13:43.625364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.100 [2024-07-12 17:13:43.628314] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.100 [2024-07-12 17:13:43.637872] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.100 [2024-07-12 17:13:43.638206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.100 [2024-07-12 17:13:43.638231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.100 [2024-07-12 17:13:43.638246] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.100 [2024-07-12 17:13:43.638435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.100 [2024-07-12 17:13:43.638633] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.100 [2024-07-12 17:13:43.638654] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.100 [2024-07-12 17:13:43.638668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.100 [2024-07-12 17:13:43.641620] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.100 [2024-07-12 17:13:43.651201] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.100 [2024-07-12 17:13:43.651530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.100 [2024-07-12 17:13:43.651556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.100 [2024-07-12 17:13:43.651571] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.100 [2024-07-12 17:13:43.651790] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.100 [2024-07-12 17:13:43.651990] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.100 [2024-07-12 17:13:43.652010] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.100 [2024-07-12 17:13:43.652023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.100 [2024-07-12 17:13:43.654972] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.100 [2024-07-12 17:13:43.664523] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.100 [2024-07-12 17:13:43.664855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.100 [2024-07-12 17:13:43.664881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.100 [2024-07-12 17:13:43.664897] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.100 [2024-07-12 17:13:43.665107] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.100 [2024-07-12 17:13:43.665300] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.100 [2024-07-12 17:13:43.665319] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.100 [2024-07-12 17:13:43.665332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.100 [2024-07-12 17:13:43.668284] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.100 [2024-07-12 17:13:43.677853] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.100 [2024-07-12 17:13:43.678194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.100 [2024-07-12 17:13:43.678220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.100 [2024-07-12 17:13:43.678234] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.100 [2024-07-12 17:13:43.678423] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.100 [2024-07-12 17:13:43.678617] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.100 [2024-07-12 17:13:43.678636] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.100 [2024-07-12 17:13:43.678649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.100 [2024-07-12 17:13:43.681600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.100 [2024-07-12 17:13:43.691171] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.100 [2024-07-12 17:13:43.691629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.100 [2024-07-12 17:13:43.691654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.100 [2024-07-12 17:13:43.691668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.100 [2024-07-12 17:13:43.691884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.100 [2024-07-12 17:13:43.692097] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.100 [2024-07-12 17:13:43.692118] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.100 [2024-07-12 17:13:43.692138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.100 [2024-07-12 17:13:43.695086] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.100 [2024-07-12 17:13:43.704491] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.100 [2024-07-12 17:13:43.704862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.100 [2024-07-12 17:13:43.704888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.100 [2024-07-12 17:13:43.704903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.100 [2024-07-12 17:13:43.705112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.100 [2024-07-12 17:13:43.705305] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.100 [2024-07-12 17:13:43.705326] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.100 [2024-07-12 17:13:43.705340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.100 [2024-07-12 17:13:43.708285] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.100 [2024-07-12 17:13:43.717648] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.100 [2024-07-12 17:13:43.718099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.100 [2024-07-12 17:13:43.718151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.100 [2024-07-12 17:13:43.718166] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.100 [2024-07-12 17:13:43.718355] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.100 [2024-07-12 17:13:43.718546] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.101 [2024-07-12 17:13:43.718565] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.101 [2024-07-12 17:13:43.718577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.101 [2024-07-12 17:13:43.721527] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.101 [2024-07-12 17:13:43.730915] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.101 [2024-07-12 17:13:43.731316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.101 [2024-07-12 17:13:43.731340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.101 [2024-07-12 17:13:43.731355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.101 [2024-07-12 17:13:43.731544] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.101 [2024-07-12 17:13:43.731762] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.101 [2024-07-12 17:13:43.731783] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.101 [2024-07-12 17:13:43.731796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.101 [2024-07-12 17:13:43.734766] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.101 [2024-07-12 17:13:43.744099] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.101 [2024-07-12 17:13:43.744515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.101 [2024-07-12 17:13:43.744544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.101 [2024-07-12 17:13:43.744559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.101 [2024-07-12 17:13:43.744778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.101 [2024-07-12 17:13:43.744974] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.101 [2024-07-12 17:13:43.744994] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.101 [2024-07-12 17:13:43.745008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.101 [2024-07-12 17:13:43.747868] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.101 [2024-07-12 17:13:43.757143] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.101 [2024-07-12 17:13:43.757578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.101 [2024-07-12 17:13:43.757603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.101 [2024-07-12 17:13:43.757618] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.101 [2024-07-12 17:13:43.757837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.101 [2024-07-12 17:13:43.758048] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.101 [2024-07-12 17:13:43.758070] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.101 [2024-07-12 17:13:43.758084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.101 [2024-07-12 17:13:43.761120] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.101 [2024-07-12 17:13:43.770435] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.101 [2024-07-12 17:13:43.770816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.101 [2024-07-12 17:13:43.770843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.101 [2024-07-12 17:13:43.770859] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.101 [2024-07-12 17:13:43.771069] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.101 [2024-07-12 17:13:43.771256] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.101 [2024-07-12 17:13:43.771276] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.101 [2024-07-12 17:13:43.771288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.101 [2024-07-12 17:13:43.774189] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.101 [2024-07-12 17:13:43.783526] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.101 [2024-07-12 17:13:43.783967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.101 [2024-07-12 17:13:43.784019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.101 [2024-07-12 17:13:43.784034] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.101 [2024-07-12 17:13:43.784233] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.101 [2024-07-12 17:13:43.784426] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.101 [2024-07-12 17:13:43.784445] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.101 [2024-07-12 17:13:43.784457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.101 [2024-07-12 17:13:43.787326] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.359 [2024-07-12 17:13:43.796790] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.359 [2024-07-12 17:13:43.797286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.359 [2024-07-12 17:13:43.797311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.359 [2024-07-12 17:13:43.797342] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.359 [2024-07-12 17:13:43.797543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.359 [2024-07-12 17:13:43.797794] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.359 [2024-07-12 17:13:43.797831] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.359 [2024-07-12 17:13:43.797845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.359 [2024-07-12 17:13:43.800812] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.359 [2024-07-12 17:13:43.809944] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.359 [2024-07-12 17:13:43.810345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.359 [2024-07-12 17:13:43.810395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.359 [2024-07-12 17:13:43.810409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.359 [2024-07-12 17:13:43.810592] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.359 [2024-07-12 17:13:43.810808] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.359 [2024-07-12 17:13:43.810828] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.359 [2024-07-12 17:13:43.810841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.359 [2024-07-12 17:13:43.813683] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.359 [2024-07-12 17:13:43.823029] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.359 [2024-07-12 17:13:43.823400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.360 [2024-07-12 17:13:43.823424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.360 [2024-07-12 17:13:43.823438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.360 [2024-07-12 17:13:43.823621] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.360 [2024-07-12 17:13:43.823838] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.360 [2024-07-12 17:13:43.823858] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.360 [2024-07-12 17:13:43.823871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.360 [2024-07-12 17:13:43.826745] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.360 [2024-07-12 17:13:43.836204] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.360 [2024-07-12 17:13:43.836580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.360 [2024-07-12 17:13:43.836610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.360 [2024-07-12 17:13:43.836625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.360 [2024-07-12 17:13:43.836820] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.360 [2024-07-12 17:13:43.837007] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.360 [2024-07-12 17:13:43.837027] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.360 [2024-07-12 17:13:43.837040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.360 [2024-07-12 17:13:43.840026] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.360 [2024-07-12 17:13:43.849425] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.360 [2024-07-12 17:13:43.849779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.360 [2024-07-12 17:13:43.849806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.360 [2024-07-12 17:13:43.849821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.360 [2024-07-12 17:13:43.850030] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.360 [2024-07-12 17:13:43.850219] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.360 [2024-07-12 17:13:43.850238] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.360 [2024-07-12 17:13:43.850250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.360 [2024-07-12 17:13:43.853176] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.360 [2024-07-12 17:13:43.862622] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.360 [2024-07-12 17:13:43.862959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.360 [2024-07-12 17:13:43.862985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.360 [2024-07-12 17:13:43.863000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.360 [2024-07-12 17:13:43.863201] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.360 [2024-07-12 17:13:43.863389] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.360 [2024-07-12 17:13:43.863408] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.360 [2024-07-12 17:13:43.863420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.360 [2024-07-12 17:13:43.866283] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.360 [2024-07-12 17:13:43.875854] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.360 [2024-07-12 17:13:43.876221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.360 [2024-07-12 17:13:43.876274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.360 [2024-07-12 17:13:43.876292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.360 [2024-07-12 17:13:43.876478] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.360 [2024-07-12 17:13:43.876666] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.360 [2024-07-12 17:13:43.876685] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.360 [2024-07-12 17:13:43.876698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.360 [2024-07-12 17:13:43.879603] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.360 [2024-07-12 17:13:43.889086] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.360 [2024-07-12 17:13:43.889434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.360 [2024-07-12 17:13:43.889459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.360 [2024-07-12 17:13:43.889487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.360 [2024-07-12 17:13:43.889671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.360 [2024-07-12 17:13:43.889885] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.360 [2024-07-12 17:13:43.889905] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.360 [2024-07-12 17:13:43.889918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.360 [2024-07-12 17:13:43.892774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.360 [2024-07-12 17:13:43.902213] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.360 [2024-07-12 17:13:43.902586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.360 [2024-07-12 17:13:43.902610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.360 [2024-07-12 17:13:43.902625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.360 [2024-07-12 17:13:43.902845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.360 [2024-07-12 17:13:43.903038] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.360 [2024-07-12 17:13:43.903072] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.360 [2024-07-12 17:13:43.903085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.360 [2024-07-12 17:13:43.905933] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.360 [2024-07-12 17:13:43.915257] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.360 [2024-07-12 17:13:43.915651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.360 [2024-07-12 17:13:43.915675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.360 [2024-07-12 17:13:43.915688] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.360 [2024-07-12 17:13:43.915909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.360 [2024-07-12 17:13:43.916116] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.360 [2024-07-12 17:13:43.916138] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.360 [2024-07-12 17:13:43.916152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.360 [2024-07-12 17:13:43.919005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.360 [2024-07-12 17:13:43.928512] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.360 [2024-07-12 17:13:43.928898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.360 [2024-07-12 17:13:43.928947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.360 [2024-07-12 17:13:43.928962] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.360 [2024-07-12 17:13:43.929163] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.360 [2024-07-12 17:13:43.929352] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.360 [2024-07-12 17:13:43.929372] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.360 [2024-07-12 17:13:43.929385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.360 [2024-07-12 17:13:43.932218] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.360 [2024-07-12 17:13:43.941645] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.360 [2024-07-12 17:13:43.942034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.360 [2024-07-12 17:13:43.942060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.360 [2024-07-12 17:13:43.942074] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.360 [2024-07-12 17:13:43.942272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.360 [2024-07-12 17:13:43.942460] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.360 [2024-07-12 17:13:43.942479] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.360 [2024-07-12 17:13:43.942492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.360 [2024-07-12 17:13:43.945361] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.360 [2024-07-12 17:13:43.954816] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.360 [2024-07-12 17:13:43.955186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.360 [2024-07-12 17:13:43.955239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.360 [2024-07-12 17:13:43.955253] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.360 [2024-07-12 17:13:43.955436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.360 [2024-07-12 17:13:43.955623] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.360 [2024-07-12 17:13:43.955643] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.360 [2024-07-12 17:13:43.955655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.360 [2024-07-12 17:13:43.958525] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.360 [2024-07-12 17:13:43.967984] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.361 [2024-07-12 17:13:43.968327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.361 [2024-07-12 17:13:43.968351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.361 [2024-07-12 17:13:43.968365] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.361 [2024-07-12 17:13:43.968549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.361 [2024-07-12 17:13:43.968762] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.361 [2024-07-12 17:13:43.968783] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.361 [2024-07-12 17:13:43.968796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.361 [2024-07-12 17:13:43.971636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.361 [2024-07-12 17:13:43.981011] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.361 [2024-07-12 17:13:43.981409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.361 [2024-07-12 17:13:43.981435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.361 [2024-07-12 17:13:43.981450] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.361 [2024-07-12 17:13:43.981634] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.361 [2024-07-12 17:13:43.981866] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.361 [2024-07-12 17:13:43.981887] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.361 [2024-07-12 17:13:43.981900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.361 [2024-07-12 17:13:43.984782] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.361 [2024-07-12 17:13:43.994232] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.361 [2024-07-12 17:13:43.994582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.361 [2024-07-12 17:13:43.994608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.361 [2024-07-12 17:13:43.994622] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.361 [2024-07-12 17:13:43.994837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.361 [2024-07-12 17:13:43.995045] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.361 [2024-07-12 17:13:43.995065] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.361 [2024-07-12 17:13:43.995078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.361 [2024-07-12 17:13:43.997932] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.361 [2024-07-12 17:13:44.007434] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.361 [2024-07-12 17:13:44.007821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.361 [2024-07-12 17:13:44.007848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.361 [2024-07-12 17:13:44.007875] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.361 [2024-07-12 17:13:44.008088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.361 [2024-07-12 17:13:44.008281] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.361 [2024-07-12 17:13:44.008302] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.361 [2024-07-12 17:13:44.008315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.361 [2024-07-12 17:13:44.011559] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.361 [2024-07-12 17:13:44.020683] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.361 [2024-07-12 17:13:44.021149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.361 [2024-07-12 17:13:44.021201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.361 [2024-07-12 17:13:44.021215] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.361 [2024-07-12 17:13:44.021399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.361 [2024-07-12 17:13:44.021586] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.361 [2024-07-12 17:13:44.021604] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.361 [2024-07-12 17:13:44.021617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.361 [2024-07-12 17:13:44.024486] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.361 [2024-07-12 17:13:44.033767] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.361 [2024-07-12 17:13:44.034150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.361 [2024-07-12 17:13:44.034175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.361 [2024-07-12 17:13:44.034190] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.361 [2024-07-12 17:13:44.034374] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.361 [2024-07-12 17:13:44.034563] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.361 [2024-07-12 17:13:44.034583] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.361 [2024-07-12 17:13:44.034596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.361 [2024-07-12 17:13:44.037465] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.361 [2024-07-12 17:13:44.046914] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.361 [2024-07-12 17:13:44.047330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.361 [2024-07-12 17:13:44.047355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.361 [2024-07-12 17:13:44.047370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.361 [2024-07-12 17:13:44.047555] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.361 [2024-07-12 17:13:44.047770] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.361 [2024-07-12 17:13:44.047792] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.361 [2024-07-12 17:13:44.047810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.361 [2024-07-12 17:13:44.050970] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.620 [2024-07-12 17:13:44.060366] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.620 [2024-07-12 17:13:44.060752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.620 [2024-07-12 17:13:44.060778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.620 [2024-07-12 17:13:44.060792] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.620 [2024-07-12 17:13:44.060982] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.620 [2024-07-12 17:13:44.061185] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.620 [2024-07-12 17:13:44.061206] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.620 [2024-07-12 17:13:44.061219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.620 [2024-07-12 17:13:44.064093] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.620 [2024-07-12 17:13:44.073452] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.620 [2024-07-12 17:13:44.073806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.620 [2024-07-12 17:13:44.073831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.620 [2024-07-12 17:13:44.073846] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.620 [2024-07-12 17:13:44.074029] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.620 [2024-07-12 17:13:44.074216] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.620 [2024-07-12 17:13:44.074234] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.620 [2024-07-12 17:13:44.074247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.620 [2024-07-12 17:13:44.077127] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.620 [2024-07-12 17:13:44.086550] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.620 [2024-07-12 17:13:44.086938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.620 [2024-07-12 17:13:44.086974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.620 [2024-07-12 17:13:44.086988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.620 [2024-07-12 17:13:44.087172] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.620 [2024-07-12 17:13:44.087359] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.620 [2024-07-12 17:13:44.087379] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.620 [2024-07-12 17:13:44.087391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.620 [2024-07-12 17:13:44.090258] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.620 [2024-07-12 17:13:44.099673] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.620 [2024-07-12 17:13:44.100119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.620 [2024-07-12 17:13:44.100144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.620 [2024-07-12 17:13:44.100159] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.621 [2024-07-12 17:13:44.100342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.621 [2024-07-12 17:13:44.100529] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.621 [2024-07-12 17:13:44.100548] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.621 [2024-07-12 17:13:44.100560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.621 [2024-07-12 17:13:44.103390] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.621 [2024-07-12 17:13:44.112824] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.621 [2024-07-12 17:13:44.113247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.621 [2024-07-12 17:13:44.113272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.621 [2024-07-12 17:13:44.113285] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.621 [2024-07-12 17:13:44.113469] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.621 [2024-07-12 17:13:44.113655] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.621 [2024-07-12 17:13:44.113674] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.621 [2024-07-12 17:13:44.113687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.621 [2024-07-12 17:13:44.116558] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.621 [2024-07-12 17:13:44.126033] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.621 [2024-07-12 17:13:44.126413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.621 [2024-07-12 17:13:44.126439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.621 [2024-07-12 17:13:44.126454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.621 [2024-07-12 17:13:44.126638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.621 [2024-07-12 17:13:44.126856] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.621 [2024-07-12 17:13:44.126878] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.621 [2024-07-12 17:13:44.126891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.621 [2024-07-12 17:13:44.129690] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.621 [2024-07-12 17:13:44.138985] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.621 [2024-07-12 17:13:44.139406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.621 [2024-07-12 17:13:44.139430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.621 [2024-07-12 17:13:44.139446] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.621 [2024-07-12 17:13:44.139630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.621 [2024-07-12 17:13:44.139832] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.621 [2024-07-12 17:13:44.139852] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.621 [2024-07-12 17:13:44.139864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.621 [2024-07-12 17:13:44.142691] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.621 [2024-07-12 17:13:44.152174] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.621 [2024-07-12 17:13:44.152538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.621 [2024-07-12 17:13:44.152563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.621 [2024-07-12 17:13:44.152577] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.621 [2024-07-12 17:13:44.152788] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.621 [2024-07-12 17:13:44.152983] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.621 [2024-07-12 17:13:44.153004] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.621 [2024-07-12 17:13:44.153031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.621 [2024-07-12 17:13:44.155869] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.621 [2024-07-12 17:13:44.165142] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.621 [2024-07-12 17:13:44.165552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.621 [2024-07-12 17:13:44.165577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.621 [2024-07-12 17:13:44.165590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.621 [2024-07-12 17:13:44.165804] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.621 [2024-07-12 17:13:44.165997] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.621 [2024-07-12 17:13:44.166017] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.621 [2024-07-12 17:13:44.166029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.621 [2024-07-12 17:13:44.168889] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.621 [2024-07-12 17:13:44.178286] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.621 [2024-07-12 17:13:44.178683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.621 [2024-07-12 17:13:44.178708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.621 [2024-07-12 17:13:44.178748] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.621 [2024-07-12 17:13:44.178941] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.621 [2024-07-12 17:13:44.179146] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.621 [2024-07-12 17:13:44.179166] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.621 [2024-07-12 17:13:44.179179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.621 [2024-07-12 17:13:44.181972] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.621 [2024-07-12 17:13:44.191373] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.621 [2024-07-12 17:13:44.191785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.621 [2024-07-12 17:13:44.191810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.621 [2024-07-12 17:13:44.191825] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.621 [2024-07-12 17:13:44.192008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.621 [2024-07-12 17:13:44.192195] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.621 [2024-07-12 17:13:44.192213] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.621 [2024-07-12 17:13:44.192225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.621 [2024-07-12 17:13:44.194976] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.621 [2024-07-12 17:13:44.204364] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.621 [2024-07-12 17:13:44.204769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.621 [2024-07-12 17:13:44.204795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.621 [2024-07-12 17:13:44.204810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.621 [2024-07-12 17:13:44.204995] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.621 [2024-07-12 17:13:44.205182] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.621 [2024-07-12 17:13:44.205202] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.621 [2024-07-12 17:13:44.205215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.621 [2024-07-12 17:13:44.208123] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.621 [2024-07-12 17:13:44.217490] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.621 [2024-07-12 17:13:44.217847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.621 [2024-07-12 17:13:44.217873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.621 [2024-07-12 17:13:44.217887] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.621 [2024-07-12 17:13:44.218090] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.621 [2024-07-12 17:13:44.218277] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.621 [2024-07-12 17:13:44.218296] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.621 [2024-07-12 17:13:44.218308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.621 [2024-07-12 17:13:44.221135] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.621 [2024-07-12 17:13:44.230571] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.621 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1224936 Killed "${NVMF_APP[@]}" "$@" 00:24:44.621 [2024-07-12 17:13:44.230955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.621 [2024-07-12 17:13:44.230981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.621 [2024-07-12 17:13:44.230996] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.621 17:13:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:24:44.621 [2024-07-12 17:13:44.231212] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.621 17:13:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:24:44.621 [2024-07-12 17:13:44.231404] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.621 [2024-07-12 17:13:44.231426] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.621 [2024-07-12 17:13:44.231439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.621 17:13:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:44.621 17:13:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:44.621 17:13:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:44.622 [2024-07-12 17:13:44.234580] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.622 17:13:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1225891 00:24:44.622 17:13:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:44.622 17:13:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1225891 00:24:44.622 17:13:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1225891 ']' 00:24:44.622 17:13:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:44.622 17:13:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:44.622 17:13:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:44.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:44.622 17:13:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:44.622 17:13:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:44.622 [2024-07-12 17:13:44.243930] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.622 [2024-07-12 17:13:44.244293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.622 [2024-07-12 17:13:44.244317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.622 [2024-07-12 17:13:44.244331] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.622 [2024-07-12 17:13:44.244519] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.622 [2024-07-12 17:13:44.244712] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.622 [2024-07-12 17:13:44.244758] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.622 [2024-07-12 17:13:44.244774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.622 [2024-07-12 17:13:44.247865] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.622 [2024-07-12 17:13:44.257221] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.622 [2024-07-12 17:13:44.257538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.622 [2024-07-12 17:13:44.257564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.622 [2024-07-12 17:13:44.257582] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.622 [2024-07-12 17:13:44.257804] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.622 [2024-07-12 17:13:44.258017] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.622 [2024-07-12 17:13:44.258052] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.622 [2024-07-12 17:13:44.258065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.622 [2024-07-12 17:13:44.261346] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.622 [2024-07-12 17:13:44.270560] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.622 [2024-07-12 17:13:44.270926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.622 [2024-07-12 17:13:44.270954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.622 [2024-07-12 17:13:44.270970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.622 [2024-07-12 17:13:44.271193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.622 [2024-07-12 17:13:44.271387] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.622 [2024-07-12 17:13:44.271407] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.622 [2024-07-12 17:13:44.271419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.622 [2024-07-12 17:13:44.274442] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.622 [2024-07-12 17:13:44.279119] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:24:44.622 [2024-07-12 17:13:44.279174] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:44.622 [2024-07-12 17:13:44.283849] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.622 [2024-07-12 17:13:44.284212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.622 [2024-07-12 17:13:44.284237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.622 [2024-07-12 17:13:44.284252] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.622 [2024-07-12 17:13:44.284442] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.622 [2024-07-12 17:13:44.284634] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.622 [2024-07-12 17:13:44.284654] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.622 [2024-07-12 17:13:44.284666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.622 [2024-07-12 17:13:44.287617] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.622 [2024-07-12 17:13:44.297078] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.622 [2024-07-12 17:13:44.297411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.622 [2024-07-12 17:13:44.297436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.622 [2024-07-12 17:13:44.297455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.622 [2024-07-12 17:13:44.297646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.622 [2024-07-12 17:13:44.297867] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.622 [2024-07-12 17:13:44.297888] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.622 [2024-07-12 17:13:44.297902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.622 [2024-07-12 17:13:44.301003] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.622 [2024-07-12 17:13:44.310625] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.622 [2024-07-12 17:13:44.310965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.622 [2024-07-12 17:13:44.310992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.622 [2024-07-12 17:13:44.311007] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.622 [2024-07-12 17:13:44.311231] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.622 [2024-07-12 17:13:44.311471] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.622 [2024-07-12 17:13:44.311492] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.622 [2024-07-12 17:13:44.311506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.881 EAL: No free 2048 kB hugepages reported on node 1 00:24:44.881 [2024-07-12 17:13:44.314849] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
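The EAL notice "No free 2048 kB hugepages reported on node 1" only says that NUMA node 1 has nothing left in its 2 MB hugepage pool; initialization continues below, so the allocations were evidently satisfied from the other node. On a typical Linux NUMA host the per-node pools can be inspected straight from sysfs, for example:

  grep -H '' /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages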
00:24:44.881 [2024-07-12 17:13:44.324113] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.881 [2024-07-12 17:13:44.324440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.881 [2024-07-12 17:13:44.324465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.881 [2024-07-12 17:13:44.324480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.881 [2024-07-12 17:13:44.324669] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.881 [2024-07-12 17:13:44.324913] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.881 [2024-07-12 17:13:44.324936] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.881 [2024-07-12 17:13:44.324949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.881 [2024-07-12 17:13:44.327975] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.881 [2024-07-12 17:13:44.337385] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.881 [2024-07-12 17:13:44.337743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.881 [2024-07-12 17:13:44.337770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.881 [2024-07-12 17:13:44.337785] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.881 [2024-07-12 17:13:44.337987] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.881 [2024-07-12 17:13:44.338196] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.881 [2024-07-12 17:13:44.338221] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.881 [2024-07-12 17:13:44.338235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.881 [2024-07-12 17:13:44.341224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.881 [2024-07-12 17:13:44.344013] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:44.881 [2024-07-12 17:13:44.350564] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.881 [2024-07-12 17:13:44.350988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.881 [2024-07-12 17:13:44.351017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.881 [2024-07-12 17:13:44.351034] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.881 [2024-07-12 17:13:44.351245] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.881 [2024-07-12 17:13:44.351441] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.881 [2024-07-12 17:13:44.351461] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.881 [2024-07-12 17:13:44.351476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.881 [2024-07-12 17:13:44.354429] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.881 [2024-07-12 17:13:44.363840] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.881 [2024-07-12 17:13:44.364221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.881 [2024-07-12 17:13:44.364252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.881 [2024-07-12 17:13:44.364268] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.881 [2024-07-12 17:13:44.364464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.881 [2024-07-12 17:13:44.364660] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.881 [2024-07-12 17:13:44.364680] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.881 [2024-07-12 17:13:44.364695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.881 [2024-07-12 17:13:44.367643] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.881 [2024-07-12 17:13:44.377082] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.881 [2024-07-12 17:13:44.377395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.881 [2024-07-12 17:13:44.377421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.881 [2024-07-12 17:13:44.377437] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.882 [2024-07-12 17:13:44.377626] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.882 [2024-07-12 17:13:44.377849] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.882 [2024-07-12 17:13:44.377871] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.882 [2024-07-12 17:13:44.377885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.882 [2024-07-12 17:13:44.380833] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.882 [2024-07-12 17:13:44.390408] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.882 [2024-07-12 17:13:44.390750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.882 [2024-07-12 17:13:44.390777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.882 [2024-07-12 17:13:44.390793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.882 [2024-07-12 17:13:44.390989] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.882 [2024-07-12 17:13:44.391199] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.882 [2024-07-12 17:13:44.391219] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.882 [2024-07-12 17:13:44.391232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.882 [2024-07-12 17:13:44.394182] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.882 [2024-07-12 17:13:44.403750] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.882 [2024-07-12 17:13:44.404192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.882 [2024-07-12 17:13:44.404225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.882 [2024-07-12 17:13:44.404243] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.882 [2024-07-12 17:13:44.404442] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.882 [2024-07-12 17:13:44.404639] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.882 [2024-07-12 17:13:44.404659] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.882 [2024-07-12 17:13:44.404675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.882 [2024-07-12 17:13:44.407628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.882 [2024-07-12 17:13:44.417111] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.882 [2024-07-12 17:13:44.417473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.882 [2024-07-12 17:13:44.417501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.882 [2024-07-12 17:13:44.417519] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.882 [2024-07-12 17:13:44.417711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.882 [2024-07-12 17:13:44.417937] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.882 [2024-07-12 17:13:44.417958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.882 [2024-07-12 17:13:44.417973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.882 [2024-07-12 17:13:44.420924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.882 [2024-07-12 17:13:44.430317] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.882 [2024-07-12 17:13:44.430627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.882 [2024-07-12 17:13:44.430653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.882 [2024-07-12 17:13:44.430679] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.882 [2024-07-12 17:13:44.430901] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.882 [2024-07-12 17:13:44.431115] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.882 [2024-07-12 17:13:44.431135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.882 [2024-07-12 17:13:44.431149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.882 [2024-07-12 17:13:44.434101] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.882 [2024-07-12 17:13:44.443631] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.882 [2024-07-12 17:13:44.443994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.882 [2024-07-12 17:13:44.444021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.882 [2024-07-12 17:13:44.444036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.882 [2024-07-12 17:13:44.444240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.882 [2024-07-12 17:13:44.444433] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.882 [2024-07-12 17:13:44.444453] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.882 [2024-07-12 17:13:44.444466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.882 [2024-07-12 17:13:44.447420] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.882 [2024-07-12 17:13:44.449776] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:44.882 [2024-07-12 17:13:44.449806] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:44.882 [2024-07-12 17:13:44.449819] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:44.882 [2024-07-12 17:13:44.449831] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:44.882 [2024-07-12 17:13:44.449842] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:44.882 [2024-07-12 17:13:44.449891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:44.882 [2024-07-12 17:13:44.449952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:44.882 [2024-07-12 17:13:44.449955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:44.882 [2024-07-12 17:13:44.457190] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.882 [2024-07-12 17:13:44.457600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.882 [2024-07-12 17:13:44.457632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.882 [2024-07-12 17:13:44.457652] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.882 [2024-07-12 17:13:44.457894] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.882 [2024-07-12 17:13:44.458124] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.882 [2024-07-12 17:13:44.458146] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.882 [2024-07-12 17:13:44.458162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.882 [2024-07-12 17:13:44.461296] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.882 [2024-07-12 17:13:44.470806] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.882 [2024-07-12 17:13:44.471287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.882 [2024-07-12 17:13:44.471324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.882 [2024-07-12 17:13:44.471344] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.882 [2024-07-12 17:13:44.471558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.882 [2024-07-12 17:13:44.471797] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.882 [2024-07-12 17:13:44.471821] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.882 [2024-07-12 17:13:44.471839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.882 [2024-07-12 17:13:44.474981] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
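The three "Reactor started on core" notices are consistent with the -m 0xE mask passed to nvmf_tgt above ("Total cores available: 3"): 0xE is binary 1110, i.e. cores 1-3 with core 0 excluded. A plain-bash sketch for decoding such a mask, with no SPDK dependency:

  mask=0xE; printf 'cores:'; for c in {0..31}; do (( (mask >> c) & 1 )) && printf ' %d' "$c"; done; echo
  # prints: cores: 1 2 3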
00:24:44.882 [2024-07-12 17:13:44.484416] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.883 [2024-07-12 17:13:44.484920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.883 [2024-07-12 17:13:44.484959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.883 [2024-07-12 17:13:44.484980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.883 [2024-07-12 17:13:44.485213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.883 [2024-07-12 17:13:44.485424] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.883 [2024-07-12 17:13:44.485445] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.883 [2024-07-12 17:13:44.485463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.883 [2024-07-12 17:13:44.488515] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.883 [2024-07-12 17:13:44.497937] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.883 [2024-07-12 17:13:44.498397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.883 [2024-07-12 17:13:44.498432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.883 [2024-07-12 17:13:44.498453] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.883 [2024-07-12 17:13:44.498664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.883 [2024-07-12 17:13:44.498906] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.883 [2024-07-12 17:13:44.498929] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.883 [2024-07-12 17:13:44.498948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.883 [2024-07-12 17:13:44.502095] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.883 [2024-07-12 17:13:44.511573] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.883 [2024-07-12 17:13:44.512116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.883 [2024-07-12 17:13:44.512154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.883 [2024-07-12 17:13:44.512184] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.883 [2024-07-12 17:13:44.512403] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.883 [2024-07-12 17:13:44.512644] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.883 [2024-07-12 17:13:44.512669] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.883 [2024-07-12 17:13:44.512688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.883 [2024-07-12 17:13:44.516034] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.883 [2024-07-12 17:13:44.525243] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.883 [2024-07-12 17:13:44.525796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.883 [2024-07-12 17:13:44.525845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.883 [2024-07-12 17:13:44.525867] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.883 [2024-07-12 17:13:44.526107] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.883 [2024-07-12 17:13:44.526319] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.883 [2024-07-12 17:13:44.526343] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.883 [2024-07-12 17:13:44.526361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.883 [2024-07-12 17:13:44.529495] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.883 [2024-07-12 17:13:44.538761] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.883 [2024-07-12 17:13:44.539142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.883 [2024-07-12 17:13:44.539170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.883 [2024-07-12 17:13:44.539186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.883 [2024-07-12 17:13:44.539387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.883 [2024-07-12 17:13:44.539593] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.883 [2024-07-12 17:13:44.539615] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.883 [2024-07-12 17:13:44.539629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.883 [2024-07-12 17:13:44.542782] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.883 [2024-07-12 17:13:44.552239] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.883 [2024-07-12 17:13:44.552668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.883 [2024-07-12 17:13:44.552696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.883 [2024-07-12 17:13:44.552728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.883 [2024-07-12 17:13:44.552951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.883 [2024-07-12 17:13:44.553183] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.883 [2024-07-12 17:13:44.553213] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.883 [2024-07-12 17:13:44.553229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.883 [2024-07-12 17:13:44.556461] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.883 17:13:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:44.883 17:13:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:24:44.883 17:13:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:44.883 17:13:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:44.883 17:13:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:44.883 [2024-07-12 17:13:44.565862] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.883 [2024-07-12 17:13:44.566220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.883 [2024-07-12 17:13:44.566248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:44.883 [2024-07-12 17:13:44.566265] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:44.883 [2024-07-12 17:13:44.566465] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:44.883 [2024-07-12 17:13:44.566671] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.883 [2024-07-12 17:13:44.566692] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.883 [2024-07-12 17:13:44.566706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.883 [2024-07-12 17:13:44.569952] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.142 [2024-07-12 17:13:44.579409] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.142 [2024-07-12 17:13:44.579794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.142 [2024-07-12 17:13:44.579824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:45.142 [2024-07-12 17:13:44.579840] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:45.142 [2024-07-12 17:13:44.580055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:45.142 [2024-07-12 17:13:44.580286] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.142 [2024-07-12 17:13:44.580308] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.142 [2024-07-12 17:13:44.580322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.142 [2024-07-12 17:13:44.583556] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.142 17:13:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:45.142 17:13:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:45.142 17:13:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.142 17:13:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:45.142 [2024-07-12 17:13:44.592882] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.142 [2024-07-12 17:13:44.593290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.142 [2024-07-12 17:13:44.593316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:45.142 [2024-07-12 17:13:44.593331] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:45.142 [2024-07-12 17:13:44.593546] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:45.142 [2024-07-12 17:13:44.593780] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.142 [2024-07-12 17:13:44.593803] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.142 [2024-07-12 17:13:44.593818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.142 [2024-07-12 17:13:44.594244] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:45.142 [2024-07-12 17:13:44.597106] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.142 17:13:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.142 17:13:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:45.142 17:13:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.142 17:13:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:45.142 [2024-07-12 17:13:44.606472] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.142 [2024-07-12 17:13:44.606845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.142 [2024-07-12 17:13:44.606874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:45.142 [2024-07-12 17:13:44.606890] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:45.142 [2024-07-12 17:13:44.607129] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:45.142 [2024-07-12 17:13:44.607328] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.142 [2024-07-12 17:13:44.607349] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.142 [2024-07-12 17:13:44.607363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:45.142 [2024-07-12 17:13:44.610508] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.142 [2024-07-12 17:13:44.620029] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.142 [2024-07-12 17:13:44.620454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.142 [2024-07-12 17:13:44.620481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:45.142 [2024-07-12 17:13:44.620496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:45.142 [2024-07-12 17:13:44.620706] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:45.142 [2024-07-12 17:13:44.620941] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.142 [2024-07-12 17:13:44.620963] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.142 [2024-07-12 17:13:44.620977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.142 [2024-07-12 17:13:44.624165] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.142 [2024-07-12 17:13:44.633493] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.142 [2024-07-12 17:13:44.634028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.142 [2024-07-12 17:13:44.634083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:45.142 [2024-07-12 17:13:44.634104] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:45.142 [2024-07-12 17:13:44.634343] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:45.142 [2024-07-12 17:13:44.634555] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.142 [2024-07-12 17:13:44.634578] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.142 [2024-07-12 17:13:44.634596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.142 [2024-07-12 17:13:44.637752] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.142 Malloc0 00:24:45.142 17:13:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.143 17:13:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:45.143 17:13:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.143 17:13:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:45.143 [2024-07-12 17:13:44.647149] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.143 [2024-07-12 17:13:44.647577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.143 [2024-07-12 17:13:44.647604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a78540 with addr=10.0.0.2, port=4420 00:24:45.143 [2024-07-12 17:13:44.647619] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78540 is same with the state(5) to be set 00:24:45.143 [2024-07-12 17:13:44.647868] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a78540 (9): Bad file descriptor 00:24:45.143 [2024-07-12 17:13:44.648088] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.143 [2024-07-12 17:13:44.648111] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.143 [2024-07-12 17:13:44.648126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.143 17:13:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.143 17:13:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:45.143 17:13:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.143 17:13:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:45.143 [2024-07-12 17:13:44.651377] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.143 17:13:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.143 17:13:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:45.143 17:13:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.143 17:13:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:45.143 [2024-07-12 17:13:44.660256] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:45.143 [2024-07-12 17:13:44.660735] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.143 17:13:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.143 17:13:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1225222 00:24:45.143 [2024-07-12 17:13:44.698535] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
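For reference, the target-side setup traced above (host/bdevperf.sh lines 15-21) reduces to the following RPC sequence. The subcommands and their arguments are taken verbatim from this run; the rpc.py path is the usual in-tree location and is an assumption here, as is invoking it directly instead of through the test framework's rpc_cmd wrapper (the target itself was started under 'ip netns exec cvl_0_0_ns_spdk' and listens on /var/tmp/spdk.sock, both visible earlier in the trace):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420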
00:24:55.127 00:24:55.127 Latency(us) 00:24:55.127 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:55.127 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:55.127 Verification LBA range: start 0x0 length 0x4000 00:24:55.127 Nvme1n1 : 15.01 6823.60 26.65 10269.60 0.00 7466.35 530.96 20971.52 00:24:55.127 =================================================================================================================== 00:24:55.127 Total : 6823.60 26.65 10269.60 0.00 7466.35 530.96 20971.52 00:24:55.127 17:13:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:24:55.127 17:13:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:55.127 17:13:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.127 17:13:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:55.127 17:13:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.127 17:13:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:24:55.127 17:13:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:24:55.127 17:13:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:55.127 17:13:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:24:55.127 17:13:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:55.127 17:13:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:24:55.127 17:13:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:55.127 17:13:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:55.127 rmmod nvme_tcp 00:24:55.127 rmmod nvme_fabrics 00:24:55.127 rmmod nvme_keyring 00:24:55.127 17:13:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:55.127 17:13:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:24:55.127 17:13:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:24:55.127 17:13:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1225891 ']' 00:24:55.127 17:13:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1225891 00:24:55.127 17:13:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 1225891 ']' 00:24:55.127 17:13:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 1225891 00:24:55.127 17:13:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:24:55.127 17:13:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:55.127 17:13:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1225891 00:24:55.127 17:13:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:55.127 17:13:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:55.127 17:13:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1225891' 00:24:55.127 killing process with pid 1225891 00:24:55.127 17:13:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 1225891 00:24:55.127 17:13:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 1225891 00:24:55.127 17:13:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:55.127 17:13:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
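Reading the flattened bdevperf summary above, the Nvme1n1 row is: runtime 15.01 s, 6823.60 IOPS, 26.65 MiB/s, 10269.60 failed I/Os per second, 0.00 timeouts per second, and latency (us) average 7466.35 with min 530.96 and max 20971.52. The high Fail/s figure is expected in this test, since the target is killed and restarted underneath the initiator. The throughput column is consistent with the 4096-byte I/O size:

  awk 'BEGIN { printf "%.2f MiB/s\n", 6823.60 * 4096 / (1024 * 1024) }'
  # prints: 26.65 MiB/s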
00:24:55.127 17:13:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:55.127 17:13:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:55.127 17:13:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:55.127 17:13:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:55.127 17:13:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:55.127 17:13:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.030 17:13:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:57.030 00:24:57.030 real 0m22.701s 00:24:57.030 user 1m0.712s 00:24:57.030 sys 0m4.518s 00:24:57.030 17:13:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:57.030 17:13:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:57.030 ************************************ 00:24:57.030 END TEST nvmf_bdevperf 00:24:57.030 ************************************ 00:24:57.030 17:13:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:57.030 17:13:56 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:24:57.030 17:13:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:57.030 17:13:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:57.030 17:13:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:57.030 ************************************ 00:24:57.030 START TEST nvmf_target_disconnect 00:24:57.030 ************************************ 00:24:57.030 17:13:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:24:57.030 * Looking for test storage... 
00:24:57.030 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:57.030 17:13:56 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:57.030 17:13:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:24:57.030 17:13:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:57.030 17:13:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:57.030 17:13:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:57.030 17:13:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:57.030 17:13:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:57.030 17:13:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:57.030 17:13:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:57.030 17:13:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:57.030 17:13:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:57.030 17:13:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:57.030 17:13:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:57.030 17:13:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:57.030 17:13:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:57.030 17:13:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:57.030 17:13:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:57.030 17:13:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:57.030 17:13:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:57.030 17:13:56 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:57.030 17:13:56 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:57.030 17:13:56 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:57.030 17:13:56 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.030 17:13:56 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.030 17:13:56 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.030 17:13:56 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:24:57.030 17:13:56 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.030 17:13:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:24:57.030 17:13:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:57.030 17:13:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:57.030 17:13:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:57.030 17:13:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:57.030 17:13:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:57.030 17:13:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:57.030 17:13:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:57.030 17:13:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:57.030 17:13:56 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:57.030 17:13:56 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:24:57.030 17:13:56 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:24:57.030 17:13:56 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:24:57.030 17:13:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:57.030 17:13:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:57.030 17:13:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:24:57.030 17:13:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:57.030 17:13:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:57.030 17:13:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.030 17:13:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:57.030 17:13:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.030 17:13:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:57.030 17:13:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:57.030 17:13:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:24:57.030 17:13:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:59.560 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:59.560 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:59.560 17:13:58 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:24:59.560 Found net devices under 0000:84:00.0: cvl_0_0 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:59.560 Found net devices under 0000:84:00.1: cvl_0_1 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:59.560 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:59.560 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:24:59.560 00:24:59.560 --- 10.0.0.2 ping statistics --- 00:24:59.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:59.560 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:59.560 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:59.560 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:24:59.560 00:24:59.560 --- 10.0.0.1 ping statistics --- 00:24:59.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:59.560 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:59.560 ************************************ 00:24:59.560 START TEST nvmf_target_disconnect_tc1 00:24:59.560 ************************************ 00:24:59.560 17:13:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:24:59.561 17:13:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:59.561 17:13:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:24:59.561 
17:13:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:59.561 17:13:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:59.561 17:13:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:59.561 17:13:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:59.561 17:13:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:59.561 17:13:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:59.561 17:13:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:59.561 17:13:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:59.561 17:13:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:24:59.561 17:13:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:59.561 EAL: No free 2048 kB hugepages reported on node 1 00:24:59.561 [2024-07-12 17:13:58.942582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.561 [2024-07-12 17:13:58.942662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd50790 with addr=10.0.0.2, port=4420 00:24:59.561 [2024-07-12 17:13:58.942698] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:59.561 [2024-07-12 17:13:58.942752] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:59.561 [2024-07-12 17:13:58.942767] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:24:59.561 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:24:59.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:24:59.561 Initializing NVMe Controllers 00:24:59.561 17:13:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:24:59.561 17:13:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:59.561 17:13:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:59.561 17:13:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:59.561 00:24:59.561 real 0m0.093s 00:24:59.561 user 0m0.040s 00:24:59.561 sys 0m0.053s 
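The tc1 case above only passes because the connect attempt is rejected: no nvmf target is listening on 10.0.0.2:4420 yet, so spdk_nvme_probe() fails, the reconnect example exits non-zero, and the NOT wrapper turns that into es=1, i.e. success. A minimal sketch of the same assertion, assuming the reconnect binary path shown in the log and not the actual autotest helper, would be:

# Assert that connecting to a port with no NVMe/TCP listener fails (sketch, not the test script itself).
if /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect \
     -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
     -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'; then
  echo "unexpected success: nothing should be listening on 10.0.0.2:4420 yet" >&2
  exit 1
fi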
00:24:59.561 17:13:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:59.561 17:13:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:59.561 ************************************ 00:24:59.561 END TEST nvmf_target_disconnect_tc1 00:24:59.561 ************************************ 00:24:59.561 17:13:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:24:59.561 17:13:58 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:24:59.561 17:13:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:59.561 17:13:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:59.561 17:13:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:59.561 ************************************ 00:24:59.561 START TEST nvmf_target_disconnect_tc2 00:24:59.561 ************************************ 00:24:59.561 17:13:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:24:59.561 17:13:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:24:59.561 17:13:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:24:59.561 17:13:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:59.561 17:13:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:59.561 17:13:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:59.561 17:13:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1229057 00:24:59.561 17:13:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:24:59.561 17:13:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1229057 00:24:59.561 17:13:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1229057 ']' 00:24:59.561 17:13:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:59.561 17:13:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:59.561 17:13:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:59.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
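At this point tc2 has launched nvmf_tgt inside the cvl_0_0_ns_spdk namespace (pid 1229057) and waitforlisten is polling its RPC socket before the Malloc0, transport, subsystem and listener RPCs below are issued. A rough equivalent of that start-and-wait step, assuming the paths and flags shown in the log, might look like:

# Start the target in the test namespace and wait for /var/tmp/spdk.sock to answer RPCs (sketch).
ip netns exec cvl_0_0_ns_spdk \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
nvmfpid=$!
for _ in $(seq 1 100); do
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
    rpc_get_methods >/dev/null 2>&1 && break
  sleep 0.1
done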
00:24:59.561 17:13:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:59.561 17:13:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:59.561 [2024-07-12 17:13:59.056986] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:24:59.561 [2024-07-12 17:13:59.057083] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:59.561 EAL: No free 2048 kB hugepages reported on node 1 00:24:59.561 [2024-07-12 17:13:59.124111] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:59.561 [2024-07-12 17:13:59.233111] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:59.561 [2024-07-12 17:13:59.233163] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:59.561 [2024-07-12 17:13:59.233186] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:59.561 [2024-07-12 17:13:59.233197] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:59.561 [2024-07-12 17:13:59.233206] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:59.561 [2024-07-12 17:13:59.233287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:24:59.561 [2024-07-12 17:13:59.233351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:24:59.561 [2024-07-12 17:13:59.233428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:24:59.561 [2024-07-12 17:13:59.233432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:25:00.498 17:14:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:00.498 17:14:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:25:00.498 17:14:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:00.498 17:14:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:00.498 17:14:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:00.498 17:14:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:00.498 17:14:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:00.499 17:14:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.499 17:14:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:00.499 Malloc0 00:25:00.499 17:14:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.499 17:14:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:00.499 17:14:00 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.499 17:14:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:00.499 [2024-07-12 17:14:00.078269] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:00.499 17:14:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.499 17:14:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:00.499 17:14:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.499 17:14:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:00.499 17:14:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.499 17:14:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:00.499 17:14:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.499 17:14:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:00.499 17:14:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.499 17:14:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:00.499 17:14:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.499 17:14:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:00.499 [2024-07-12 17:14:00.106511] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:00.499 17:14:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.499 17:14:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:00.499 17:14:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.499 17:14:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:00.499 17:14:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.499 17:14:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1229214 00:25:00.499 17:14:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:25:00.499 17:14:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:00.499 EAL: No free 2048 kB 
hugepages reported on node 1 00:25:03.049 17:14:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1229057 00:25:03.049 17:14:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:25:03.049 Write completed with error (sct=0, sc=8) 00:25:03.049 starting I/O failed 00:25:03.049 Write completed with error (sct=0, sc=8) 00:25:03.049 starting I/O failed 00:25:03.049 Read completed with error (sct=0, sc=8) 00:25:03.049 starting I/O failed 00:25:03.049 Write completed with error (sct=0, sc=8) 00:25:03.049 starting I/O failed 00:25:03.049 Read completed with error (sct=0, sc=8) 00:25:03.049 starting I/O failed 00:25:03.049 Read completed with error (sct=0, sc=8) 00:25:03.049 starting I/O failed 00:25:03.049 Write completed with error (sct=0, sc=8) 00:25:03.049 starting I/O failed 00:25:03.049 Write completed with error (sct=0, sc=8) 00:25:03.049 starting I/O failed 00:25:03.049 Write completed with error (sct=0, sc=8) 00:25:03.049 starting I/O failed 00:25:03.049 Write completed with error (sct=0, sc=8) 00:25:03.049 starting I/O failed 00:25:03.049 Read completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Write completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Read completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Write completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Read completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Write completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Read completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Read completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Read completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Read completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Read completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Write completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Read completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Read completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Write completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Write completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Read completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Write completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Read completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Write completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Write completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Write completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Read completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 [2024-07-12 17:14:02.132082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.050 Read completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Read completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Read completed with error (sct=0, sc=8) 00:25:03.050 
starting I/O failed 00:25:03.050 Read completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Read completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Read completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Read completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Read completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Read completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Read completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Write completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Read completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Read completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Write completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Read completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Read completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Write completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Write completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Read completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Write completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Read completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Write completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Read completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Write completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Write completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Write completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Write completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Write completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Write completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Write completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Read completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 [2024-07-12 17:14:02.132402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.050 Read completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Read completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Read completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Read completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Read completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Read completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Read completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Read completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Read completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Read completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Write completed with error (sct=0, sc=8) 00:25:03.050 starting I/O 
failed 00:25:03.050 Write completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Read completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Write completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Write completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Write completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Write completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Write completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Read completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Read completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Read completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Write completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Write completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Read completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Write completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Write completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Write completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Read completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Write completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Write completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Read completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.050 Read completed with error (sct=0, sc=8) 00:25:03.050 starting I/O failed 00:25:03.051 [2024-07-12 17:14:02.132751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.051 Read completed with error (sct=0, sc=8) 00:25:03.051 starting I/O failed 00:25:03.051 Read completed with error (sct=0, sc=8) 00:25:03.051 starting I/O failed 00:25:03.051 Read completed with error (sct=0, sc=8) 00:25:03.051 starting I/O failed 00:25:03.051 Read completed with error (sct=0, sc=8) 00:25:03.051 starting I/O failed 00:25:03.051 Read completed with error (sct=0, sc=8) 00:25:03.051 starting I/O failed 00:25:03.051 Read completed with error (sct=0, sc=8) 00:25:03.051 starting I/O failed 00:25:03.051 Read completed with error (sct=0, sc=8) 00:25:03.051 starting I/O failed 00:25:03.051 Read completed with error (sct=0, sc=8) 00:25:03.051 starting I/O failed 00:25:03.051 Read completed with error (sct=0, sc=8) 00:25:03.051 starting I/O failed 00:25:03.051 Read completed with error (sct=0, sc=8) 00:25:03.051 starting I/O failed 00:25:03.051 Read completed with error (sct=0, sc=8) 00:25:03.051 starting I/O failed 00:25:03.051 Read completed with error (sct=0, sc=8) 00:25:03.051 starting I/O failed 00:25:03.051 Read completed with error (sct=0, sc=8) 00:25:03.051 starting I/O failed 00:25:03.051 Read completed with error (sct=0, sc=8) 00:25:03.051 starting I/O failed 00:25:03.051 Write completed with error (sct=0, sc=8) 00:25:03.051 starting I/O failed 00:25:03.051 Read completed with error (sct=0, sc=8) 00:25:03.051 starting I/O failed 00:25:03.051 Read completed with error (sct=0, sc=8) 00:25:03.051 starting I/O failed 00:25:03.051 Write completed with error (sct=0, sc=8) 00:25:03.051 starting I/O failed 
00:25:03.051 Write completed with error (sct=0, sc=8) 00:25:03.051 starting I/O failed 00:25:03.051 Write completed with error (sct=0, sc=8) 00:25:03.051 starting I/O failed 00:25:03.051 Read completed with error (sct=0, sc=8) 00:25:03.051 starting I/O failed 00:25:03.051 Write completed with error (sct=0, sc=8) 00:25:03.051 starting I/O failed 00:25:03.051 Write completed with error (sct=0, sc=8) 00:25:03.051 starting I/O failed 00:25:03.051 Read completed with error (sct=0, sc=8) 00:25:03.051 starting I/O failed 00:25:03.051 Write completed with error (sct=0, sc=8) 00:25:03.051 starting I/O failed 00:25:03.051 Read completed with error (sct=0, sc=8) 00:25:03.051 starting I/O failed 00:25:03.051 Read completed with error (sct=0, sc=8) 00:25:03.051 starting I/O failed 00:25:03.051 Read completed with error (sct=0, sc=8) 00:25:03.051 starting I/O failed 00:25:03.051 Read completed with error (sct=0, sc=8) 00:25:03.051 starting I/O failed 00:25:03.051 Write completed with error (sct=0, sc=8) 00:25:03.051 starting I/O failed 00:25:03.051 Write completed with error (sct=0, sc=8) 00:25:03.051 starting I/O failed 00:25:03.051 Write completed with error (sct=0, sc=8) 00:25:03.051 starting I/O failed 00:25:03.051 [2024-07-12 17:14:02.133075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:03.051 [2024-07-12 17:14:02.133309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.051 [2024-07-12 17:14:02.133347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.051 qpair failed and we were unable to recover it. 00:25:03.051 [2024-07-12 17:14:02.133489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.051 [2024-07-12 17:14:02.133519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.051 qpair failed and we were unable to recover it. 00:25:03.051 [2024-07-12 17:14:02.133660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.051 [2024-07-12 17:14:02.133711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.051 qpair failed and we were unable to recover it. 00:25:03.051 [2024-07-12 17:14:02.133870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.051 [2024-07-12 17:14:02.133897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.051 qpair failed and we were unable to recover it. 00:25:03.051 [2024-07-12 17:14:02.134090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.051 [2024-07-12 17:14:02.134113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.051 qpair failed and we were unable to recover it. 00:25:03.051 [2024-07-12 17:14:02.134314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.051 [2024-07-12 17:14:02.134337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.051 qpair failed and we were unable to recover it. 
00:25:03.051 [2024-07-12 17:14:02.134491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.051 [2024-07-12 17:14:02.134551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.051 qpair failed and we were unable to recover it. 00:25:03.051 [2024-07-12 17:14:02.134819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.051 [2024-07-12 17:14:02.134846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.051 qpair failed and we were unable to recover it. 00:25:03.051 [2024-07-12 17:14:02.134985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.051 [2024-07-12 17:14:02.135012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.051 qpair failed and we were unable to recover it. 00:25:03.051 [2024-07-12 17:14:02.135230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.051 [2024-07-12 17:14:02.135254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.051 qpair failed and we were unable to recover it. 00:25:03.051 [2024-07-12 17:14:02.135414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.051 [2024-07-12 17:14:02.135438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.051 qpair failed and we were unable to recover it. 00:25:03.051 [2024-07-12 17:14:02.135653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.051 [2024-07-12 17:14:02.135676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.051 qpair failed and we were unable to recover it. 00:25:03.051 [2024-07-12 17:14:02.135876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.051 [2024-07-12 17:14:02.135903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.051 qpair failed and we were unable to recover it. 00:25:03.051 [2024-07-12 17:14:02.136013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.051 [2024-07-12 17:14:02.136040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.051 qpair failed and we were unable to recover it. 00:25:03.051 [2024-07-12 17:14:02.136188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.051 [2024-07-12 17:14:02.136226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.051 qpair failed and we were unable to recover it. 00:25:03.051 [2024-07-12 17:14:02.136408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.051 [2024-07-12 17:14:02.136433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.051 qpair failed and we were unable to recover it. 
00:25:03.051 [2024-07-12 17:14:02.136678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.051 [2024-07-12 17:14:02.136703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.051 qpair failed and we were unable to recover it. 00:25:03.051 [2024-07-12 17:14:02.136853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.051 [2024-07-12 17:14:02.136880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.051 qpair failed and we were unable to recover it. 00:25:03.051 [2024-07-12 17:14:02.137017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.051 [2024-07-12 17:14:02.137057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.051 qpair failed and we were unable to recover it. 00:25:03.051 [2024-07-12 17:14:02.137299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.051 [2024-07-12 17:14:02.137324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.051 qpair failed and we were unable to recover it. 00:25:03.051 [2024-07-12 17:14:02.137536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.051 [2024-07-12 17:14:02.137588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.051 qpair failed and we were unable to recover it. 00:25:03.051 [2024-07-12 17:14:02.137797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.051 [2024-07-12 17:14:02.137824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.051 qpair failed and we were unable to recover it. 00:25:03.051 [2024-07-12 17:14:02.137963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.051 [2024-07-12 17:14:02.137989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.051 qpair failed and we were unable to recover it. 00:25:03.051 [2024-07-12 17:14:02.138198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.051 [2024-07-12 17:14:02.138222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.051 qpair failed and we were unable to recover it. 00:25:03.051 [2024-07-12 17:14:02.138442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.051 [2024-07-12 17:14:02.138503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.051 qpair failed and we were unable to recover it. 00:25:03.051 [2024-07-12 17:14:02.138653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.051 [2024-07-12 17:14:02.138677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.052 qpair failed and we were unable to recover it. 
00:25:03.052 [2024-07-12 17:14:02.138809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.052 [2024-07-12 17:14:02.138836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.052 qpair failed and we were unable to recover it. 00:25:03.052 [2024-07-12 17:14:02.138967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.052 [2024-07-12 17:14:02.138994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.052 qpair failed and we were unable to recover it. 00:25:03.052 [2024-07-12 17:14:02.139238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.052 [2024-07-12 17:14:02.139275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.052 qpair failed and we were unable to recover it. 00:25:03.052 [2024-07-12 17:14:02.139455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.052 [2024-07-12 17:14:02.139480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.052 qpair failed and we were unable to recover it. 00:25:03.052 [2024-07-12 17:14:02.139656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.052 [2024-07-12 17:14:02.139680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.052 qpair failed and we were unable to recover it. 00:25:03.052 [2024-07-12 17:14:02.139864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.052 [2024-07-12 17:14:02.139890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.052 qpair failed and we were unable to recover it. 00:25:03.052 [2024-07-12 17:14:02.140040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.052 [2024-07-12 17:14:02.140063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.052 qpair failed and we were unable to recover it. 00:25:03.052 [2024-07-12 17:14:02.140239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.052 [2024-07-12 17:14:02.140262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.052 qpair failed and we were unable to recover it. 00:25:03.052 [2024-07-12 17:14:02.140425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.052 [2024-07-12 17:14:02.140480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.052 qpair failed and we were unable to recover it. 00:25:03.052 [2024-07-12 17:14:02.140650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.052 [2024-07-12 17:14:02.140674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.052 qpair failed and we were unable to recover it. 
00:25:03.052 [2024-07-12 17:14:02.140841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.052 [2024-07-12 17:14:02.140868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.052 qpair failed and we were unable to recover it. 00:25:03.052 [2024-07-12 17:14:02.141040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.052 [2024-07-12 17:14:02.141066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.052 qpair failed and we were unable to recover it. 00:25:03.052 [2024-07-12 17:14:02.141229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.052 [2024-07-12 17:14:02.141252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.052 qpair failed and we were unable to recover it. 00:25:03.052 [2024-07-12 17:14:02.141365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.052 [2024-07-12 17:14:02.141389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.052 qpair failed and we were unable to recover it. 00:25:03.052 [2024-07-12 17:14:02.141552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.052 [2024-07-12 17:14:02.141576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.052 qpair failed and we were unable to recover it. 00:25:03.052 [2024-07-12 17:14:02.141758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.052 [2024-07-12 17:14:02.141785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.052 qpair failed and we were unable to recover it. 00:25:03.052 [2024-07-12 17:14:02.141926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.052 [2024-07-12 17:14:02.141952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.052 qpair failed and we were unable to recover it. 00:25:03.052 [2024-07-12 17:14:02.142115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.052 [2024-07-12 17:14:02.142154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.052 qpair failed and we were unable to recover it. 00:25:03.052 [2024-07-12 17:14:02.142301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.052 [2024-07-12 17:14:02.142358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.052 qpair failed and we were unable to recover it. 00:25:03.052 [2024-07-12 17:14:02.142546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.052 [2024-07-12 17:14:02.142569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.052 qpair failed and we were unable to recover it. 
00:25:03.052 [2024-07-12 17:14:02.142705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.052 [2024-07-12 17:14:02.142752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.052 qpair failed and we were unable to recover it. 00:25:03.052 [2024-07-12 17:14:02.142885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.052 [2024-07-12 17:14:02.142911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.052 qpair failed and we were unable to recover it. 00:25:03.052 [2024-07-12 17:14:02.143046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.052 [2024-07-12 17:14:02.143072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.052 qpair failed and we were unable to recover it. 00:25:03.052 [2024-07-12 17:14:02.143251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.052 [2024-07-12 17:14:02.143306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.052 qpair failed and we were unable to recover it. 00:25:03.052 [2024-07-12 17:14:02.143494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.052 [2024-07-12 17:14:02.143518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.052 qpair failed and we were unable to recover it. 00:25:03.052 [2024-07-12 17:14:02.143656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.052 [2024-07-12 17:14:02.143681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.052 qpair failed and we were unable to recover it. 00:25:03.052 [2024-07-12 17:14:02.143851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.052 [2024-07-12 17:14:02.143892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.052 qpair failed and we were unable to recover it. 00:25:03.052 [2024-07-12 17:14:02.144074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.052 [2024-07-12 17:14:02.144101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.052 qpair failed and we were unable to recover it. 00:25:03.052 [2024-07-12 17:14:02.144329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.052 [2024-07-12 17:14:02.144356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.052 qpair failed and we were unable to recover it. 00:25:03.052 [2024-07-12 17:14:02.144590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.052 [2024-07-12 17:14:02.144616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.052 qpair failed and we were unable to recover it. 
00:25:03.052 [2024-07-12 17:14:02.144801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.052 [2024-07-12 17:14:02.144828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.052 qpair failed and we were unable to recover it. 00:25:03.052 [2024-07-12 17:14:02.144974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.052 [2024-07-12 17:14:02.144999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.052 qpair failed and we were unable to recover it. 00:25:03.052 [2024-07-12 17:14:02.145141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.052 [2024-07-12 17:14:02.145180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.052 qpair failed and we were unable to recover it. 00:25:03.052 [2024-07-12 17:14:02.145311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.052 [2024-07-12 17:14:02.145335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.052 qpair failed and we were unable to recover it. 00:25:03.052 [2024-07-12 17:14:02.145504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.052 [2024-07-12 17:14:02.145542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.052 qpair failed and we were unable to recover it. 00:25:03.052 [2024-07-12 17:14:02.145708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.052 [2024-07-12 17:14:02.145754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.052 qpair failed and we were unable to recover it. 00:25:03.052 [2024-07-12 17:14:02.145919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.052 [2024-07-12 17:14:02.145946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.052 qpair failed and we were unable to recover it. 00:25:03.052 [2024-07-12 17:14:02.146144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.052 [2024-07-12 17:14:02.146182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.052 qpair failed and we were unable to recover it. 00:25:03.052 [2024-07-12 17:14:02.146405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.052 [2024-07-12 17:14:02.146453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.052 qpair failed and we were unable to recover it. 00:25:03.053 [2024-07-12 17:14:02.146666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.053 [2024-07-12 17:14:02.146691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.053 qpair failed and we were unable to recover it. 
00:25:03.053 [2024-07-12 17:14:02.146916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.053 [2024-07-12 17:14:02.146943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.053 qpair failed and we were unable to recover it. 00:25:03.053 [2024-07-12 17:14:02.147078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.053 [2024-07-12 17:14:02.147120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.053 qpair failed and we were unable to recover it. 00:25:03.053 [2024-07-12 17:14:02.147306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.053 [2024-07-12 17:14:02.147333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.053 qpair failed and we were unable to recover it. 00:25:03.053 [2024-07-12 17:14:02.147534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.053 [2024-07-12 17:14:02.147558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.053 qpair failed and we were unable to recover it. 00:25:03.053 [2024-07-12 17:14:02.147773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.053 [2024-07-12 17:14:02.147799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.053 qpair failed and we were unable to recover it. 00:25:03.053 [2024-07-12 17:14:02.148000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.053 [2024-07-12 17:14:02.148040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.053 qpair failed and we were unable to recover it. 00:25:03.053 [2024-07-12 17:14:02.148268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.053 [2024-07-12 17:14:02.148293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.053 qpair failed and we were unable to recover it. 00:25:03.053 [2024-07-12 17:14:02.148548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.053 [2024-07-12 17:14:02.148595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.053 qpair failed and we were unable to recover it. 00:25:03.053 [2024-07-12 17:14:02.148746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.053 [2024-07-12 17:14:02.148770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.053 qpair failed and we were unable to recover it. 00:25:03.053 [2024-07-12 17:14:02.148956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.053 [2024-07-12 17:14:02.148982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.053 qpair failed and we were unable to recover it. 
00:25:03.053 [2024-07-12 17:14:02.149159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.053 [2024-07-12 17:14:02.149183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.053 qpair failed and we were unable to recover it. 00:25:03.053 [2024-07-12 17:14:02.149381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.053 [2024-07-12 17:14:02.149405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.053 qpair failed and we were unable to recover it. 00:25:03.053 [2024-07-12 17:14:02.149656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.053 [2024-07-12 17:14:02.149679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.053 qpair failed and we were unable to recover it. 00:25:03.053 [2024-07-12 17:14:02.149858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.053 [2024-07-12 17:14:02.149883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.053 qpair failed and we were unable to recover it. 00:25:03.053 [2024-07-12 17:14:02.149985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.053 [2024-07-12 17:14:02.150011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.053 qpair failed and we were unable to recover it. 00:25:03.053 [2024-07-12 17:14:02.150231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.053 [2024-07-12 17:14:02.150255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.053 qpair failed and we were unable to recover it. 00:25:03.053 [2024-07-12 17:14:02.150498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.053 [2024-07-12 17:14:02.150551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.053 qpair failed and we were unable to recover it. 00:25:03.053 [2024-07-12 17:14:02.150781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.053 [2024-07-12 17:14:02.150807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.053 qpair failed and we were unable to recover it. 00:25:03.053 [2024-07-12 17:14:02.150990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.053 [2024-07-12 17:14:02.151016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.053 qpair failed and we were unable to recover it. 00:25:03.053 [2024-07-12 17:14:02.151220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.053 [2024-07-12 17:14:02.151242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.053 qpair failed and we were unable to recover it. 
00:25:03.053 [2024-07-12 17:14:02.151384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.053 [2024-07-12 17:14:02.151421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.053 qpair failed and we were unable to recover it. 00:25:03.053 [2024-07-12 17:14:02.151564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.053 [2024-07-12 17:14:02.151601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.053 qpair failed and we were unable to recover it. 00:25:03.053 [2024-07-12 17:14:02.151698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.053 [2024-07-12 17:14:02.151748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.053 qpair failed and we were unable to recover it. 00:25:03.053 [2024-07-12 17:14:02.151888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.053 [2024-07-12 17:14:02.151914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.053 qpair failed and we were unable to recover it. 00:25:03.053 [2024-07-12 17:14:02.152070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.053 [2024-07-12 17:14:02.152108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.053 qpair failed and we were unable to recover it. 00:25:03.053 [2024-07-12 17:14:02.152194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.053 [2024-07-12 17:14:02.152231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.053 qpair failed and we were unable to recover it. 00:25:03.053 [2024-07-12 17:14:02.152364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.053 [2024-07-12 17:14:02.152388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.053 qpair failed and we were unable to recover it. 00:25:03.053 [2024-07-12 17:14:02.152499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.053 [2024-07-12 17:14:02.152523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.053 qpair failed and we were unable to recover it. 00:25:03.053 [2024-07-12 17:14:02.152698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.053 [2024-07-12 17:14:02.152745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.053 qpair failed and we were unable to recover it. 00:25:03.053 [2024-07-12 17:14:02.152953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.053 [2024-07-12 17:14:02.152980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.053 qpair failed and we were unable to recover it. 
00:25:03.053 [2024-07-12 17:14:02.153227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.053 [2024-07-12 17:14:02.153266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.053 qpair failed and we were unable to recover it. 00:25:03.053 [2024-07-12 17:14:02.153472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.053 [2024-07-12 17:14:02.153521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.053 qpair failed and we were unable to recover it. 00:25:03.053 [2024-07-12 17:14:02.153699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.053 [2024-07-12 17:14:02.153745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.053 qpair failed and we were unable to recover it. 00:25:03.053 [2024-07-12 17:14:02.153886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.053 [2024-07-12 17:14:02.153911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.053 qpair failed and we were unable to recover it. 00:25:03.053 [2024-07-12 17:14:02.154041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.053 [2024-07-12 17:14:02.154065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.053 qpair failed and we were unable to recover it. 00:25:03.053 [2024-07-12 17:14:02.154197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.053 [2024-07-12 17:14:02.154234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.053 qpair failed and we were unable to recover it. 00:25:03.053 [2024-07-12 17:14:02.154409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.053 [2024-07-12 17:14:02.154451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.053 qpair failed and we were unable to recover it. 00:25:03.053 [2024-07-12 17:14:02.154551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.053 [2024-07-12 17:14:02.154588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.053 qpair failed and we were unable to recover it. 00:25:03.053 [2024-07-12 17:14:02.154734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.053 [2024-07-12 17:14:02.154767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.053 qpair failed and we were unable to recover it. 00:25:03.054 [2024-07-12 17:14:02.154884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.054 [2024-07-12 17:14:02.154909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.054 qpair failed and we were unable to recover it. 
00:25:03.054 [2024-07-12 17:14:02.155101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.054 [2024-07-12 17:14:02.155125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.054 qpair failed and we were unable to recover it. 00:25:03.054 [2024-07-12 17:14:02.155298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.054 [2024-07-12 17:14:02.155321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.054 qpair failed and we were unable to recover it. 00:25:03.054 [2024-07-12 17:14:02.155422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.054 [2024-07-12 17:14:02.155445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.054 qpair failed and we were unable to recover it. 00:25:03.054 [2024-07-12 17:14:02.155623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.054 [2024-07-12 17:14:02.155664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.054 qpair failed and we were unable to recover it. 00:25:03.054 [2024-07-12 17:14:02.155806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.054 [2024-07-12 17:14:02.155832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.054 qpair failed and we were unable to recover it. 00:25:03.054 [2024-07-12 17:14:02.155963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.054 [2024-07-12 17:14:02.155988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.054 qpair failed and we were unable to recover it. 00:25:03.054 [2024-07-12 17:14:02.156153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.054 [2024-07-12 17:14:02.156192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.054 qpair failed and we were unable to recover it. 00:25:03.054 [2024-07-12 17:14:02.156352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.054 [2024-07-12 17:14:02.156390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.054 qpair failed and we were unable to recover it. 00:25:03.054 [2024-07-12 17:14:02.156539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.054 [2024-07-12 17:14:02.156576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.054 qpair failed and we were unable to recover it. 00:25:03.054 [2024-07-12 17:14:02.156709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.054 [2024-07-12 17:14:02.156754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.054 qpair failed and we were unable to recover it. 
00:25:03.054 [2024-07-12 17:14:02.156860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.054 [2024-07-12 17:14:02.156885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.054 qpair failed and we were unable to recover it. 00:25:03.054 [2024-07-12 17:14:02.157026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.054 [2024-07-12 17:14:02.157066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.054 qpair failed and we were unable to recover it. 00:25:03.054 [2024-07-12 17:14:02.157222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.054 [2024-07-12 17:14:02.157260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.054 qpair failed and we were unable to recover it. 00:25:03.054 [2024-07-12 17:14:02.157401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.054 [2024-07-12 17:14:02.157426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.054 qpair failed and we were unable to recover it. 00:25:03.054 [2024-07-12 17:14:02.157560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.054 [2024-07-12 17:14:02.157583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.054 qpair failed and we were unable to recover it. 00:25:03.054 [2024-07-12 17:14:02.157723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.054 [2024-07-12 17:14:02.157769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.054 qpair failed and we were unable to recover it. 00:25:03.054 [2024-07-12 17:14:02.157868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.054 [2024-07-12 17:14:02.157893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.054 qpair failed and we were unable to recover it. 00:25:03.054 [2024-07-12 17:14:02.158028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.054 [2024-07-12 17:14:02.158066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.054 qpair failed and we were unable to recover it. 00:25:03.054 [2024-07-12 17:14:02.158188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.054 [2024-07-12 17:14:02.158226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.054 qpair failed and we were unable to recover it. 00:25:03.054 [2024-07-12 17:14:02.158337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.054 [2024-07-12 17:14:02.158360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.054 qpair failed and we were unable to recover it. 
00:25:03.054 [2024-07-12 17:14:02.158537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.054 [2024-07-12 17:14:02.158561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.054 qpair failed and we were unable to recover it. 00:25:03.054 [2024-07-12 17:14:02.158698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.054 [2024-07-12 17:14:02.158721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.054 qpair failed and we were unable to recover it. 00:25:03.054 [2024-07-12 17:14:02.158882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.054 [2024-07-12 17:14:02.158908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.054 qpair failed and we were unable to recover it. 00:25:03.054 [2024-07-12 17:14:02.159064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.054 [2024-07-12 17:14:02.159089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.054 qpair failed and we were unable to recover it. 00:25:03.054 [2024-07-12 17:14:02.159262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.054 [2024-07-12 17:14:02.159285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.054 qpair failed and we were unable to recover it. 00:25:03.054 [2024-07-12 17:14:02.159419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.054 [2024-07-12 17:14:02.159456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.054 qpair failed and we were unable to recover it. 00:25:03.054 [2024-07-12 17:14:02.159633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.054 [2024-07-12 17:14:02.159656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.054 qpair failed and we were unable to recover it. 00:25:03.054 [2024-07-12 17:14:02.159826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.054 [2024-07-12 17:14:02.159853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.054 qpair failed and we were unable to recover it. 00:25:03.054 [2024-07-12 17:14:02.160034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.054 [2024-07-12 17:14:02.160062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.054 qpair failed and we were unable to recover it. 00:25:03.054 [2024-07-12 17:14:02.160309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.054 [2024-07-12 17:14:02.160351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.054 qpair failed and we were unable to recover it. 
00:25:03.054 [2024-07-12 17:14:02.160501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.054 [2024-07-12 17:14:02.160528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.054 qpair failed and we were unable to recover it. 00:25:03.054 [2024-07-12 17:14:02.160697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.054 [2024-07-12 17:14:02.160735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.054 qpair failed and we were unable to recover it. 00:25:03.055 [2024-07-12 17:14:02.160876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.055 [2024-07-12 17:14:02.160916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.055 qpair failed and we were unable to recover it. 00:25:03.055 [2024-07-12 17:14:02.161037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.055 [2024-07-12 17:14:02.161079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.055 qpair failed and we were unable to recover it. 00:25:03.055 [2024-07-12 17:14:02.161251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.055 [2024-07-12 17:14:02.161294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.055 qpair failed and we were unable to recover it. 00:25:03.055 [2024-07-12 17:14:02.161462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.055 [2024-07-12 17:14:02.161503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.055 qpair failed and we were unable to recover it. 00:25:03.055 [2024-07-12 17:14:02.161632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.055 [2024-07-12 17:14:02.161670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.055 qpair failed and we were unable to recover it. 00:25:03.055 [2024-07-12 17:14:02.161860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.055 [2024-07-12 17:14:02.161888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.055 qpair failed and we were unable to recover it. 00:25:03.055 [2024-07-12 17:14:02.162116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.055 [2024-07-12 17:14:02.162158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.055 qpair failed and we were unable to recover it. 00:25:03.055 [2024-07-12 17:14:02.162255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.055 [2024-07-12 17:14:02.162296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.055 qpair failed and we were unable to recover it. 
00:25:03.055 [2024-07-12 17:14:02.162413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.055 [2024-07-12 17:14:02.162436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.055 qpair failed and we were unable to recover it. 00:25:03.055 [2024-07-12 17:14:02.162588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.055 [2024-07-12 17:14:02.162611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.055 qpair failed and we were unable to recover it. 00:25:03.055 [2024-07-12 17:14:02.162794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.055 [2024-07-12 17:14:02.162818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.055 qpair failed and we were unable to recover it. 00:25:03.055 [2024-07-12 17:14:02.162961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.055 [2024-07-12 17:14:02.163002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.055 qpair failed and we were unable to recover it. 00:25:03.055 [2024-07-12 17:14:02.163173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.055 [2024-07-12 17:14:02.163197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.055 qpair failed and we were unable to recover it. 00:25:03.055 [2024-07-12 17:14:02.163363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.055 [2024-07-12 17:14:02.163386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.055 qpair failed and we were unable to recover it. 00:25:03.055 [2024-07-12 17:14:02.163568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.055 [2024-07-12 17:14:02.163591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.055 qpair failed and we were unable to recover it. 00:25:03.055 [2024-07-12 17:14:02.163820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.055 [2024-07-12 17:14:02.163864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.055 qpair failed and we were unable to recover it. 00:25:03.055 [2024-07-12 17:14:02.164068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.055 [2024-07-12 17:14:02.164096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.055 qpair failed and we were unable to recover it. 00:25:03.055 [2024-07-12 17:14:02.164278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.055 [2024-07-12 17:14:02.164318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.055 qpair failed and we were unable to recover it. 
00:25:03.055 [2024-07-12 17:14:02.164492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.055 [2024-07-12 17:14:02.164515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.055 qpair failed and we were unable to recover it. 00:25:03.055 [2024-07-12 17:14:02.164648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.055 [2024-07-12 17:14:02.164686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.055 qpair failed and we were unable to recover it. 00:25:03.055 [2024-07-12 17:14:02.164823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.055 [2024-07-12 17:14:02.164848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.055 qpair failed and we were unable to recover it. 00:25:03.055 [2024-07-12 17:14:02.164967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.055 [2024-07-12 17:14:02.165009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.055 qpair failed and we were unable to recover it. 00:25:03.055 [2024-07-12 17:14:02.165110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.055 [2024-07-12 17:14:02.165153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.055 qpair failed and we were unable to recover it. 00:25:03.055 [2024-07-12 17:14:02.165318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.055 [2024-07-12 17:14:02.165359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.055 qpair failed and we were unable to recover it. 00:25:03.055 [2024-07-12 17:14:02.165542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.055 [2024-07-12 17:14:02.165567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.055 qpair failed and we were unable to recover it. 00:25:03.055 [2024-07-12 17:14:02.165701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.055 [2024-07-12 17:14:02.165744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.055 qpair failed and we were unable to recover it. 00:25:03.055 [2024-07-12 17:14:02.166046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.055 [2024-07-12 17:14:02.166092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.055 qpair failed and we were unable to recover it. 00:25:03.055 [2024-07-12 17:14:02.166326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.055 [2024-07-12 17:14:02.166368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.055 qpair failed and we were unable to recover it. 
00:25:03.055 [2024-07-12 17:14:02.166526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.055 [2024-07-12 17:14:02.166548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.055 qpair failed and we were unable to recover it. 00:25:03.055 [2024-07-12 17:14:02.166771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.055 [2024-07-12 17:14:02.166796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.055 qpair failed and we were unable to recover it. 00:25:03.055 [2024-07-12 17:14:02.166911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.055 [2024-07-12 17:14:02.166954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.055 qpair failed and we were unable to recover it. 00:25:03.055 [2024-07-12 17:14:02.167076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.055 [2024-07-12 17:14:02.167120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.055 qpair failed and we were unable to recover it. 00:25:03.055 [2024-07-12 17:14:02.167280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.055 [2024-07-12 17:14:02.167321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.055 qpair failed and we were unable to recover it. 00:25:03.055 [2024-07-12 17:14:02.167458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.055 [2024-07-12 17:14:02.167491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.055 qpair failed and we were unable to recover it. 00:25:03.055 [2024-07-12 17:14:02.167659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.055 [2024-07-12 17:14:02.167697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.055 qpair failed and we were unable to recover it. 00:25:03.055 [2024-07-12 17:14:02.167900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.055 [2024-07-12 17:14:02.167947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.055 qpair failed and we were unable to recover it. 00:25:03.055 [2024-07-12 17:14:02.168113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.055 [2024-07-12 17:14:02.168157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.055 qpair failed and we were unable to recover it. 00:25:03.055 [2024-07-12 17:14:02.168312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.055 [2024-07-12 17:14:02.168367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.055 qpair failed and we were unable to recover it. 
00:25:03.055 [2024-07-12 17:14:02.168594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.055 [2024-07-12 17:14:02.168617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.055 qpair failed and we were unable to recover it. 00:25:03.055 [2024-07-12 17:14:02.168860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.055 [2024-07-12 17:14:02.168909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.055 qpair failed and we were unable to recover it. 00:25:03.055 [2024-07-12 17:14:02.169118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.055 [2024-07-12 17:14:02.169161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.056 qpair failed and we were unable to recover it. 00:25:03.056 [2024-07-12 17:14:02.169374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.056 [2024-07-12 17:14:02.169418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.056 qpair failed and we were unable to recover it. 00:25:03.056 [2024-07-12 17:14:02.169576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.056 [2024-07-12 17:14:02.169600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.056 qpair failed and we were unable to recover it. 00:25:03.056 [2024-07-12 17:14:02.169849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.056 [2024-07-12 17:14:02.169894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.056 qpair failed and we were unable to recover it. 00:25:03.056 [2024-07-12 17:14:02.170132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.056 [2024-07-12 17:14:02.170176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.056 qpair failed and we were unable to recover it. 00:25:03.056 [2024-07-12 17:14:02.170338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.056 [2024-07-12 17:14:02.170383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.056 qpair failed and we were unable to recover it. 00:25:03.056 [2024-07-12 17:14:02.170526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.056 [2024-07-12 17:14:02.170549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.056 qpair failed and we were unable to recover it. 00:25:03.056 [2024-07-12 17:14:02.170684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.056 [2024-07-12 17:14:02.170708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.056 qpair failed and we were unable to recover it. 
00:25:03.056 [2024-07-12 17:14:02.170891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.056 [2024-07-12 17:14:02.170935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.056 qpair failed and we were unable to recover it. 00:25:03.056 [2024-07-12 17:14:02.171114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.056 [2024-07-12 17:14:02.171157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.056 qpair failed and we were unable to recover it. 00:25:03.056 [2024-07-12 17:14:02.171363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.056 [2024-07-12 17:14:02.171411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.056 qpair failed and we were unable to recover it. 00:25:03.056 [2024-07-12 17:14:02.171635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.056 [2024-07-12 17:14:02.171657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.056 qpair failed and we were unable to recover it. 00:25:03.056 [2024-07-12 17:14:02.171797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.056 [2024-07-12 17:14:02.171822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.056 qpair failed and we were unable to recover it. 00:25:03.056 [2024-07-12 17:14:02.171995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.056 [2024-07-12 17:14:02.172048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.056 qpair failed and we were unable to recover it. 00:25:03.056 [2024-07-12 17:14:02.172243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.056 [2024-07-12 17:14:02.172289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.056 qpair failed and we were unable to recover it. 00:25:03.056 [2024-07-12 17:14:02.172442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.056 [2024-07-12 17:14:02.172484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.056 qpair failed and we were unable to recover it. 00:25:03.056 [2024-07-12 17:14:02.172656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.056 [2024-07-12 17:14:02.172679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.056 qpair failed and we were unable to recover it. 00:25:03.056 [2024-07-12 17:14:02.172865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.056 [2024-07-12 17:14:02.172913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.056 qpair failed and we were unable to recover it. 
00:25:03.056 [2024-07-12 17:14:02.173076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.056 [2024-07-12 17:14:02.173125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.056 qpair failed and we were unable to recover it. 00:25:03.056 [2024-07-12 17:14:02.173325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.056 [2024-07-12 17:14:02.173369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.056 qpair failed and we were unable to recover it. 00:25:03.056 [2024-07-12 17:14:02.173607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.056 [2024-07-12 17:14:02.173630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.056 qpair failed and we were unable to recover it. 00:25:03.056 [2024-07-12 17:14:02.173916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.056 [2024-07-12 17:14:02.173959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.056 qpair failed and we were unable to recover it. 00:25:03.056 [2024-07-12 17:14:02.174165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.056 [2024-07-12 17:14:02.174210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.056 qpair failed and we were unable to recover it. 00:25:03.056 [2024-07-12 17:14:02.174395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.056 [2024-07-12 17:14:02.174441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.056 qpair failed and we were unable to recover it. 00:25:03.056 [2024-07-12 17:14:02.174583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.056 [2024-07-12 17:14:02.174606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.056 qpair failed and we were unable to recover it. 00:25:03.056 [2024-07-12 17:14:02.174856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.056 [2024-07-12 17:14:02.174903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.056 qpair failed and we were unable to recover it. 00:25:03.056 [2024-07-12 17:14:02.175062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.056 [2024-07-12 17:14:02.175119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.056 qpair failed and we were unable to recover it. 00:25:03.056 [2024-07-12 17:14:02.175364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.056 [2024-07-12 17:14:02.175387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.056 qpair failed and we were unable to recover it. 
00:25:03.056 [2024-07-12 17:14:02.175541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.056 [2024-07-12 17:14:02.175564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.056 qpair failed and we were unable to recover it. 00:25:03.056 [2024-07-12 17:14:02.175698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.056 [2024-07-12 17:14:02.175735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.056 qpair failed and we were unable to recover it. 00:25:03.056 [2024-07-12 17:14:02.175859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.056 [2024-07-12 17:14:02.175884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.056 qpair failed and we were unable to recover it. 00:25:03.056 [2024-07-12 17:14:02.176058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.056 [2024-07-12 17:14:02.176082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.056 qpair failed and we were unable to recover it. 00:25:03.056 [2024-07-12 17:14:02.176201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.056 [2024-07-12 17:14:02.176239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.056 qpair failed and we were unable to recover it. 00:25:03.056 [2024-07-12 17:14:02.176408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.056 [2024-07-12 17:14:02.176432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.056 qpair failed and we were unable to recover it. 00:25:03.056 [2024-07-12 17:14:02.176542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.056 [2024-07-12 17:14:02.176566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.056 qpair failed and we were unable to recover it. 00:25:03.056 [2024-07-12 17:14:02.176714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.056 [2024-07-12 17:14:02.176745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.056 qpair failed and we were unable to recover it. 00:25:03.056 [2024-07-12 17:14:02.176930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.056 [2024-07-12 17:14:02.176955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.056 qpair failed and we were unable to recover it. 00:25:03.056 [2024-07-12 17:14:02.177136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.056 [2024-07-12 17:14:02.177188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.056 qpair failed and we were unable to recover it. 
00:25:03.056 [2024-07-12 17:14:02.177377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.056 [2024-07-12 17:14:02.177423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.056 qpair failed and we were unable to recover it. 00:25:03.056 [2024-07-12 17:14:02.177667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.056 [2024-07-12 17:14:02.177691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.056 qpair failed and we were unable to recover it. 00:25:03.056 [2024-07-12 17:14:02.177909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.056 [2024-07-12 17:14:02.177957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.056 qpair failed and we were unable to recover it. 00:25:03.057 [2024-07-12 17:14:02.178139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.057 [2024-07-12 17:14:02.178186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.057 qpair failed and we were unable to recover it. 00:25:03.057 [2024-07-12 17:14:02.178386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.057 [2024-07-12 17:14:02.178433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.057 qpair failed and we were unable to recover it. 00:25:03.057 [2024-07-12 17:14:02.178663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.057 [2024-07-12 17:14:02.178687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.057 qpair failed and we were unable to recover it. 00:25:03.057 [2024-07-12 17:14:02.178906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.057 [2024-07-12 17:14:02.178953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.057 qpair failed and we were unable to recover it. 00:25:03.057 [2024-07-12 17:14:02.179132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.057 [2024-07-12 17:14:02.179178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.057 qpair failed and we were unable to recover it. 00:25:03.057 [2024-07-12 17:14:02.179384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.057 [2024-07-12 17:14:02.179436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.057 qpair failed and we were unable to recover it. 00:25:03.057 [2024-07-12 17:14:02.179622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.057 [2024-07-12 17:14:02.179646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.057 qpair failed and we were unable to recover it. 
00:25:03.057 [2024-07-12 17:14:02.179904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.057 [2024-07-12 17:14:02.179953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.057 qpair failed and we were unable to recover it. 00:25:03.057 [2024-07-12 17:14:02.180156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.057 [2024-07-12 17:14:02.180203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.057 qpair failed and we were unable to recover it. 00:25:03.057 [2024-07-12 17:14:02.180370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.057 [2024-07-12 17:14:02.180418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.057 qpair failed and we were unable to recover it. 00:25:03.057 [2024-07-12 17:14:02.180665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.057 [2024-07-12 17:14:02.180689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.057 qpair failed and we were unable to recover it. 00:25:03.057 [2024-07-12 17:14:02.180884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.057 [2024-07-12 17:14:02.180932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.057 qpair failed and we were unable to recover it. 00:25:03.057 [2024-07-12 17:14:02.181080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.057 [2024-07-12 17:14:02.181131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.057 qpair failed and we were unable to recover it. 00:25:03.057 [2024-07-12 17:14:02.181282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.057 [2024-07-12 17:14:02.181327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.057 qpair failed and we were unable to recover it. 00:25:03.057 [2024-07-12 17:14:02.181480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.057 [2024-07-12 17:14:02.181524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.057 qpair failed and we were unable to recover it. 00:25:03.057 [2024-07-12 17:14:02.181671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.057 [2024-07-12 17:14:02.181708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.057 qpair failed and we were unable to recover it. 00:25:03.057 [2024-07-12 17:14:02.181927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.057 [2024-07-12 17:14:02.181973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.057 qpair failed and we were unable to recover it. 
00:25:03.057 [2024-07-12 17:14:02.182159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.057 [2024-07-12 17:14:02.182210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.057 qpair failed and we were unable to recover it. 00:25:03.057 [2024-07-12 17:14:02.182456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.057 [2024-07-12 17:14:02.182502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.057 qpair failed and we were unable to recover it. 00:25:03.057 [2024-07-12 17:14:02.182710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.057 [2024-07-12 17:14:02.182734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.057 qpair failed and we were unable to recover it. 00:25:03.057 [2024-07-12 17:14:02.182988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.057 [2024-07-12 17:14:02.183037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.057 qpair failed and we were unable to recover it. 00:25:03.057 [2024-07-12 17:14:02.183287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.057 [2024-07-12 17:14:02.183335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.057 qpair failed and we were unable to recover it. 00:25:03.057 [2024-07-12 17:14:02.183532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.057 [2024-07-12 17:14:02.183580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.057 qpair failed and we were unable to recover it. 00:25:03.057 [2024-07-12 17:14:02.183790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.057 [2024-07-12 17:14:02.183838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.057 qpair failed and we were unable to recover it. 00:25:03.057 [2024-07-12 17:14:02.184007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.057 [2024-07-12 17:14:02.184053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.057 qpair failed and we were unable to recover it. 00:25:03.057 [2024-07-12 17:14:02.184293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.057 [2024-07-12 17:14:02.184338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.057 qpair failed and we were unable to recover it. 00:25:03.057 [2024-07-12 17:14:02.184588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.057 [2024-07-12 17:14:02.184635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.057 qpair failed and we were unable to recover it. 
00:25:03.057 [2024-07-12 17:14:02.184780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.057 [2024-07-12 17:14:02.184803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.057 qpair failed and we were unable to recover it. 00:25:03.057 [2024-07-12 17:14:02.184997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.057 [2024-07-12 17:14:02.185049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.057 qpair failed and we were unable to recover it. 00:25:03.057 [2024-07-12 17:14:02.185249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.057 [2024-07-12 17:14:02.185298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.057 qpair failed and we were unable to recover it. 00:25:03.057 [2024-07-12 17:14:02.185513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.057 [2024-07-12 17:14:02.185563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.057 qpair failed and we were unable to recover it. 00:25:03.057 [2024-07-12 17:14:02.185764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.057 [2024-07-12 17:14:02.185789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.057 qpair failed and we were unable to recover it. 00:25:03.057 [2024-07-12 17:14:02.186018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.057 [2024-07-12 17:14:02.186069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.057 qpair failed and we were unable to recover it. 00:25:03.057 [2024-07-12 17:14:02.186312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.057 [2024-07-12 17:14:02.186360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.057 qpair failed and we were unable to recover it. 00:25:03.057 [2024-07-12 17:14:02.186527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.057 [2024-07-12 17:14:02.186574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.057 qpair failed and we were unable to recover it. 00:25:03.057 [2024-07-12 17:14:02.186771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.057 [2024-07-12 17:14:02.186811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.057 qpair failed and we were unable to recover it. 00:25:03.057 [2024-07-12 17:14:02.186998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.057 [2024-07-12 17:14:02.187053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.057 qpair failed and we were unable to recover it. 
00:25:03.057 [2024-07-12 17:14:02.187248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.057 [2024-07-12 17:14:02.187296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.057 qpair failed and we were unable to recover it. 00:25:03.057 [2024-07-12 17:14:02.187498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.057 [2024-07-12 17:14:02.187551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.057 qpair failed and we were unable to recover it. 00:25:03.057 [2024-07-12 17:14:02.187789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.057 [2024-07-12 17:14:02.187814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.057 qpair failed and we were unable to recover it. 00:25:03.057 [2024-07-12 17:14:02.188070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.057 [2024-07-12 17:14:02.188119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.057 qpair failed and we were unable to recover it. 00:25:03.058 [2024-07-12 17:14:02.188338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.058 [2024-07-12 17:14:02.188387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.058 qpair failed and we were unable to recover it. 00:25:03.058 [2024-07-12 17:14:02.188637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.058 [2024-07-12 17:14:02.188685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.058 qpair failed and we were unable to recover it. 00:25:03.058 [2024-07-12 17:14:02.188887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.058 [2024-07-12 17:14:02.188913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.058 qpair failed and we were unable to recover it. 00:25:03.058 [2024-07-12 17:14:02.189127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.058 [2024-07-12 17:14:02.189186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.058 qpair failed and we were unable to recover it. 00:25:03.058 [2024-07-12 17:14:02.189332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.058 [2024-07-12 17:14:02.189379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.058 qpair failed and we were unable to recover it. 00:25:03.058 [2024-07-12 17:14:02.189622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.058 [2024-07-12 17:14:02.189670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.058 qpair failed and we were unable to recover it. 
00:25:03.058 [2024-07-12 17:14:02.189902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.058 [2024-07-12 17:14:02.189927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.058 qpair failed and we were unable to recover it. 00:25:03.058 [2024-07-12 17:14:02.190131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.058 [2024-07-12 17:14:02.190179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.058 qpair failed and we were unable to recover it. 00:25:03.058 [2024-07-12 17:14:02.190438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.058 [2024-07-12 17:14:02.190485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.058 qpair failed and we were unable to recover it. 00:25:03.058 [2024-07-12 17:14:02.190685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.058 [2024-07-12 17:14:02.190709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.058 qpair failed and we were unable to recover it. 00:25:03.058 [2024-07-12 17:14:02.190871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.058 [2024-07-12 17:14:02.190894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.058 qpair failed and we were unable to recover it. 00:25:03.058 [2024-07-12 17:14:02.191057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.058 [2024-07-12 17:14:02.191112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.058 qpair failed and we were unable to recover it. 00:25:03.058 [2024-07-12 17:14:02.191290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.058 [2024-07-12 17:14:02.191343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.058 qpair failed and we were unable to recover it. 00:25:03.058 [2024-07-12 17:14:02.191547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.058 [2024-07-12 17:14:02.191595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.058 qpair failed and we were unable to recover it. 00:25:03.058 [2024-07-12 17:14:02.191783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.058 [2024-07-12 17:14:02.191809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.058 qpair failed and we were unable to recover it. 00:25:03.058 [2024-07-12 17:14:02.192006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.058 [2024-07-12 17:14:02.192057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.058 qpair failed and we were unable to recover it. 
00:25:03.058 [2024-07-12 17:14:02.192237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.058 [2024-07-12 17:14:02.192285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.058 qpair failed and we were unable to recover it. 00:25:03.058 [2024-07-12 17:14:02.192535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.058 [2024-07-12 17:14:02.192581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.058 qpair failed and we were unable to recover it. 00:25:03.058 [2024-07-12 17:14:02.192782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.058 [2024-07-12 17:14:02.192835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.058 qpair failed and we were unable to recover it. 00:25:03.058 [2024-07-12 17:14:02.192968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.058 [2024-07-12 17:14:02.193021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.058 qpair failed and we were unable to recover it. 00:25:03.058 [2024-07-12 17:14:02.193175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.058 [2024-07-12 17:14:02.193224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.058 qpair failed and we were unable to recover it. 00:25:03.058 [2024-07-12 17:14:02.193465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.058 [2024-07-12 17:14:02.193516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.058 qpair failed and we were unable to recover it. 00:25:03.058 [2024-07-12 17:14:02.193703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.058 [2024-07-12 17:14:02.193746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.058 qpair failed and we were unable to recover it. 00:25:03.058 [2024-07-12 17:14:02.193982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.058 [2024-07-12 17:14:02.194034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.058 qpair failed and we were unable to recover it. 00:25:03.058 [2024-07-12 17:14:02.194248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.058 [2024-07-12 17:14:02.194297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.058 qpair failed and we were unable to recover it. 00:25:03.058 [2024-07-12 17:14:02.194548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.058 [2024-07-12 17:14:02.194594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.058 qpair failed and we were unable to recover it. 
00:25:03.058 [2024-07-12 17:14:02.194903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.058 [2024-07-12 17:14:02.194950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.058 qpair failed and we were unable to recover it. 00:25:03.058 [2024-07-12 17:14:02.195203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.058 [2024-07-12 17:14:02.195254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.058 qpair failed and we were unable to recover it. 00:25:03.058 [2024-07-12 17:14:02.195440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.058 [2024-07-12 17:14:02.195488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.058 qpair failed and we were unable to recover it. 00:25:03.058 [2024-07-12 17:14:02.195757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.058 [2024-07-12 17:14:02.195796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.058 qpair failed and we were unable to recover it. 00:25:03.058 [2024-07-12 17:14:02.196004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.058 [2024-07-12 17:14:02.196053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.058 qpair failed and we were unable to recover it. 00:25:03.058 [2024-07-12 17:14:02.196302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.058 [2024-07-12 17:14:02.196352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.058 qpair failed and we were unable to recover it. 00:25:03.058 [2024-07-12 17:14:02.196538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.058 [2024-07-12 17:14:02.196587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.058 qpair failed and we were unable to recover it. 00:25:03.058 [2024-07-12 17:14:02.196809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.058 [2024-07-12 17:14:02.196834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.058 qpair failed and we were unable to recover it. 00:25:03.058 [2024-07-12 17:14:02.197034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.058 [2024-07-12 17:14:02.197082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.058 qpair failed and we were unable to recover it. 00:25:03.058 [2024-07-12 17:14:02.197309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.058 [2024-07-12 17:14:02.197359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.058 qpair failed and we were unable to recover it. 
00:25:03.058 [2024-07-12 17:14:02.197509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.058 [2024-07-12 17:14:02.197557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.058 qpair failed and we were unable to recover it. 00:25:03.058 [2024-07-12 17:14:02.197787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.058 [2024-07-12 17:14:02.197811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.058 qpair failed and we were unable to recover it. 00:25:03.058 [2024-07-12 17:14:02.197984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.058 [2024-07-12 17:14:02.198045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.058 qpair failed and we were unable to recover it. 00:25:03.058 [2024-07-12 17:14:02.198237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.058 [2024-07-12 17:14:02.198289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.058 qpair failed and we were unable to recover it. 00:25:03.059 [2024-07-12 17:14:02.198495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.059 [2024-07-12 17:14:02.198544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.059 qpair failed and we were unable to recover it. 00:25:03.059 [2024-07-12 17:14:02.198696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.059 [2024-07-12 17:14:02.198731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.059 qpair failed and we were unable to recover it. 00:25:03.059 [2024-07-12 17:14:02.198963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.059 [2024-07-12 17:14:02.199012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.059 qpair failed and we were unable to recover it. 00:25:03.059 [2024-07-12 17:14:02.199220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.059 [2024-07-12 17:14:02.199268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.059 qpair failed and we were unable to recover it. 00:25:03.059 [2024-07-12 17:14:02.199414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.059 [2024-07-12 17:14:02.199463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.059 qpair failed and we were unable to recover it. 00:25:03.059 [2024-07-12 17:14:02.199698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.059 [2024-07-12 17:14:02.199722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.059 qpair failed and we were unable to recover it. 
00:25:03.059 [2024-07-12 17:14:02.199917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.059 [2024-07-12 17:14:02.199940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.059 qpair failed and we were unable to recover it. 00:25:03.059 [2024-07-12 17:14:02.200177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.059 [2024-07-12 17:14:02.200228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.059 qpair failed and we were unable to recover it. 00:25:03.059 [2024-07-12 17:14:02.200361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.059 [2024-07-12 17:14:02.200384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.059 qpair failed and we were unable to recover it. 00:25:03.059 [2024-07-12 17:14:02.200615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.059 [2024-07-12 17:14:02.200654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.059 qpair failed and we were unable to recover it. 00:25:03.059 [2024-07-12 17:14:02.200883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.059 [2024-07-12 17:14:02.200930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.059 qpair failed and we were unable to recover it. 00:25:03.059 [2024-07-12 17:14:02.201066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.059 [2024-07-12 17:14:02.201119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.059 qpair failed and we were unable to recover it. 00:25:03.059 [2024-07-12 17:14:02.201262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.059 [2024-07-12 17:14:02.201315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.059 qpair failed and we were unable to recover it. 00:25:03.059 [2024-07-12 17:14:02.201551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.059 [2024-07-12 17:14:02.201601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.059 qpair failed and we were unable to recover it. 00:25:03.059 [2024-07-12 17:14:02.201803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.059 [2024-07-12 17:14:02.201828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.059 qpair failed and we were unable to recover it. 00:25:03.059 [2024-07-12 17:14:02.202021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.059 [2024-07-12 17:14:02.202067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.059 qpair failed and we were unable to recover it. 
00:25:03.059 [2024-07-12 17:14:02.202302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.059 [2024-07-12 17:14:02.202349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.059 qpair failed and we were unable to recover it. 00:25:03.059 [2024-07-12 17:14:02.202568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.059 [2024-07-12 17:14:02.202592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.059 qpair failed and we were unable to recover it. 00:25:03.059 [2024-07-12 17:14:02.202782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.059 [2024-07-12 17:14:02.202807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.059 qpair failed and we were unable to recover it. 00:25:03.059 [2024-07-12 17:14:02.203035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.059 [2024-07-12 17:14:02.203093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.059 qpair failed and we were unable to recover it. 00:25:03.059 [2024-07-12 17:14:02.203292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.059 [2024-07-12 17:14:02.203341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.059 qpair failed and we were unable to recover it. 00:25:03.059 [2024-07-12 17:14:02.203584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.059 [2024-07-12 17:14:02.203609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.059 qpair failed and we were unable to recover it. 00:25:03.059 [2024-07-12 17:14:02.203792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.059 [2024-07-12 17:14:02.203817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.059 qpair failed and we were unable to recover it. 00:25:03.059 [2024-07-12 17:14:02.204031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.059 [2024-07-12 17:14:02.204086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.059 qpair failed and we were unable to recover it. 00:25:03.059 [2024-07-12 17:14:02.204323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.059 [2024-07-12 17:14:02.204374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.059 qpair failed and we were unable to recover it. 00:25:03.059 [2024-07-12 17:14:02.204550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.059 [2024-07-12 17:14:02.204574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.059 qpair failed and we were unable to recover it. 
00:25:03.059 [2024-07-12 17:14:02.204753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.059 [2024-07-12 17:14:02.204796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.059 qpair failed and we were unable to recover it. 00:25:03.059 [2024-07-12 17:14:02.204982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.059 [2024-07-12 17:14:02.205038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.059 qpair failed and we were unable to recover it. 00:25:03.059 [2024-07-12 17:14:02.205254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.059 [2024-07-12 17:14:02.205304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.059 qpair failed and we were unable to recover it. 00:25:03.059 [2024-07-12 17:14:02.205552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.059 [2024-07-12 17:14:02.205602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.059 qpair failed and we were unable to recover it. 00:25:03.059 [2024-07-12 17:14:02.205793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.059 [2024-07-12 17:14:02.205853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.059 qpair failed and we were unable to recover it. 00:25:03.059 [2024-07-12 17:14:02.206102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.059 [2024-07-12 17:14:02.206150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.059 qpair failed and we were unable to recover it. 00:25:03.059 [2024-07-12 17:14:02.206357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.059 [2024-07-12 17:14:02.206408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.059 qpair failed and we were unable to recover it. 00:25:03.059 [2024-07-12 17:14:02.206585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.059 [2024-07-12 17:14:02.206615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.059 qpair failed and we were unable to recover it. 00:25:03.059 [2024-07-12 17:14:02.206820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.059 [2024-07-12 17:14:02.206875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.059 qpair failed and we were unable to recover it. 00:25:03.059 [2024-07-12 17:14:02.207047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.059 [2024-07-12 17:14:02.207095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.059 qpair failed and we were unable to recover it. 
00:25:03.059 [2024-07-12 17:14:02.207288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.060 [2024-07-12 17:14:02.207334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.060 qpair failed and we were unable to recover it. 00:25:03.060 [2024-07-12 17:14:02.207568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.060 [2024-07-12 17:14:02.207592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.060 qpair failed and we were unable to recover it. 00:25:03.060 [2024-07-12 17:14:02.207858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.060 [2024-07-12 17:14:02.207907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.060 qpair failed and we were unable to recover it. 00:25:03.060 [2024-07-12 17:14:02.208078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.060 [2024-07-12 17:14:02.208102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.060 qpair failed and we were unable to recover it. 00:25:03.060 [2024-07-12 17:14:02.208303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.060 [2024-07-12 17:14:02.208328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.060 qpair failed and we were unable to recover it. 00:25:03.060 [2024-07-12 17:14:02.208552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.060 [2024-07-12 17:14:02.208576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.060 qpair failed and we were unable to recover it. 00:25:03.060 [2024-07-12 17:14:02.208834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.060 [2024-07-12 17:14:02.208895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.060 qpair failed and we were unable to recover it. 00:25:03.060 [2024-07-12 17:14:02.209103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.060 [2024-07-12 17:14:02.209154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.060 qpair failed and we were unable to recover it. 00:25:03.060 [2024-07-12 17:14:02.209401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.060 [2024-07-12 17:14:02.209451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.060 qpair failed and we were unable to recover it. 00:25:03.060 [2024-07-12 17:14:02.209640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.060 [2024-07-12 17:14:02.209664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.060 qpair failed and we were unable to recover it. 
00:25:03.060 [2024-07-12 17:14:02.209859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.060 [2024-07-12 17:14:02.209885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.060 qpair failed and we were unable to recover it. 00:25:03.060 [2024-07-12 17:14:02.210129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.060 [2024-07-12 17:14:02.210177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.060 qpair failed and we were unable to recover it. 00:25:03.060 [2024-07-12 17:14:02.210419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.060 [2024-07-12 17:14:02.210468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.060 qpair failed and we were unable to recover it. 00:25:03.060 [2024-07-12 17:14:02.210672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.060 [2024-07-12 17:14:02.210696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.060 qpair failed and we were unable to recover it. 00:25:03.060 [2024-07-12 17:14:02.210936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.060 [2024-07-12 17:14:02.210962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.060 qpair failed and we were unable to recover it. 00:25:03.060 [2024-07-12 17:14:02.211096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.060 [2024-07-12 17:14:02.211155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.060 qpair failed and we were unable to recover it. 00:25:03.060 [2024-07-12 17:14:02.211401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.060 [2024-07-12 17:14:02.211450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.060 qpair failed and we were unable to recover it. 00:25:03.060 [2024-07-12 17:14:02.211642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.060 [2024-07-12 17:14:02.211669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.060 qpair failed and we were unable to recover it. 00:25:03.060 [2024-07-12 17:14:02.211814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.060 [2024-07-12 17:14:02.211837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.060 qpair failed and we were unable to recover it. 00:25:03.060 [2024-07-12 17:14:02.212030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.060 [2024-07-12 17:14:02.212087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.060 qpair failed and we were unable to recover it. 
00:25:03.060 [2024-07-12 17:14:02.212338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.060 [2024-07-12 17:14:02.212388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.060 qpair failed and we were unable to recover it. 00:25:03.060 [2024-07-12 17:14:02.212614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.060 [2024-07-12 17:14:02.212638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.060 qpair failed and we were unable to recover it. 00:25:03.060 [2024-07-12 17:14:02.212891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.060 [2024-07-12 17:14:02.212950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.060 qpair failed and we were unable to recover it. 00:25:03.060 [2024-07-12 17:14:02.213191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.060 [2024-07-12 17:14:02.213241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.060 qpair failed and we were unable to recover it. 00:25:03.060 [2024-07-12 17:14:02.213425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.060 [2024-07-12 17:14:02.213475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.060 qpair failed and we were unable to recover it. 00:25:03.060 [2024-07-12 17:14:02.213620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.060 [2024-07-12 17:14:02.213642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.060 qpair failed and we were unable to recover it. 00:25:03.060 [2024-07-12 17:14:02.213843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.060 [2024-07-12 17:14:02.213906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.060 qpair failed and we were unable to recover it. 00:25:03.060 [2024-07-12 17:14:02.214076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.060 [2024-07-12 17:14:02.214130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.060 qpair failed and we were unable to recover it. 00:25:03.060 [2024-07-12 17:14:02.214323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.060 [2024-07-12 17:14:02.214371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.060 qpair failed and we were unable to recover it. 00:25:03.060 [2024-07-12 17:14:02.214561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.060 [2024-07-12 17:14:02.214584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.060 qpair failed and we were unable to recover it. 
00:25:03.060 [2024-07-12 17:14:02.214770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.060 [2024-07-12 17:14:02.214811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.060 qpair failed and we were unable to recover it. 00:25:03.060 [2024-07-12 17:14:02.215074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.060 [2024-07-12 17:14:02.215126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.060 qpair failed and we were unable to recover it. 00:25:03.060 [2024-07-12 17:14:02.215284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.060 [2024-07-12 17:14:02.215331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.060 qpair failed and we were unable to recover it. 00:25:03.060 [2024-07-12 17:14:02.215564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.060 [2024-07-12 17:14:02.215587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.060 qpair failed and we were unable to recover it. 00:25:03.060 [2024-07-12 17:14:02.215808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.060 [2024-07-12 17:14:02.215860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.060 qpair failed and we were unable to recover it. 00:25:03.060 [2024-07-12 17:14:02.216073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.060 [2024-07-12 17:14:02.216120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.060 qpair failed and we were unable to recover it. 00:25:03.060 [2024-07-12 17:14:02.216315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.060 [2024-07-12 17:14:02.216364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.060 qpair failed and we were unable to recover it. 00:25:03.060 [2024-07-12 17:14:02.216615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.060 [2024-07-12 17:14:02.216640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.060 qpair failed and we were unable to recover it. 00:25:03.060 [2024-07-12 17:14:02.216906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.060 [2024-07-12 17:14:02.216956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.060 qpair failed and we were unable to recover it. 00:25:03.060 [2024-07-12 17:14:02.217166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.060 [2024-07-12 17:14:02.217217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.060 qpair failed and we were unable to recover it. 
00:25:03.060 [2024-07-12 17:14:02.217453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.060 [2024-07-12 17:14:02.217502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.060 qpair failed and we were unable to recover it. 00:25:03.060 [2024-07-12 17:14:02.217734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.060 [2024-07-12 17:14:02.217764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.060 qpair failed and we were unable to recover it. 00:25:03.060 [2024-07-12 17:14:02.217980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.060 [2024-07-12 17:14:02.218043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.060 qpair failed and we were unable to recover it. 00:25:03.060 [2024-07-12 17:14:02.218241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.060 [2024-07-12 17:14:02.218288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.060 qpair failed and we were unable to recover it. 00:25:03.060 [2024-07-12 17:14:02.218483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.060 [2024-07-12 17:14:02.218533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.060 qpair failed and we were unable to recover it. 00:25:03.061 [2024-07-12 17:14:02.218777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.061 [2024-07-12 17:14:02.218862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.061 qpair failed and we were unable to recover it. 00:25:03.061 [2024-07-12 17:14:02.219062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.061 [2024-07-12 17:14:02.219109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.061 qpair failed and we were unable to recover it. 00:25:03.061 [2024-07-12 17:14:02.219314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.061 [2024-07-12 17:14:02.219364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.061 qpair failed and we were unable to recover it. 00:25:03.061 [2024-07-12 17:14:02.219546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.061 [2024-07-12 17:14:02.219596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.061 qpair failed and we were unable to recover it. 00:25:03.061 [2024-07-12 17:14:02.219787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.061 [2024-07-12 17:14:02.219853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.061 qpair failed and we were unable to recover it. 
00:25:03.061 [2024-07-12 17:14:02.220093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.061 [2024-07-12 17:14:02.220145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.061 qpair failed and we were unable to recover it. 00:25:03.061 [2024-07-12 17:14:02.220344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.061 [2024-07-12 17:14:02.220395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.061 qpair failed and we were unable to recover it. 00:25:03.061 [2024-07-12 17:14:02.220620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.061 [2024-07-12 17:14:02.220644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.061 qpair failed and we were unable to recover it. 00:25:03.061 [2024-07-12 17:14:02.220912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.061 [2024-07-12 17:14:02.220963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.061 qpair failed and we were unable to recover it. 00:25:03.061 [2024-07-12 17:14:02.221221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.061 [2024-07-12 17:14:02.221269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.061 qpair failed and we were unable to recover it. 00:25:03.061 [2024-07-12 17:14:02.221507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.061 [2024-07-12 17:14:02.221557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.061 qpair failed and we were unable to recover it. 00:25:03.061 [2024-07-12 17:14:02.221786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.061 [2024-07-12 17:14:02.221811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.061 qpair failed and we were unable to recover it. 00:25:03.061 [2024-07-12 17:14:02.222038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.061 [2024-07-12 17:14:02.222084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.061 qpair failed and we were unable to recover it. 00:25:03.061 [2024-07-12 17:14:02.222297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.061 [2024-07-12 17:14:02.222351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.061 qpair failed and we were unable to recover it. 00:25:03.061 [2024-07-12 17:14:02.222626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.061 [2024-07-12 17:14:02.222674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.061 qpair failed and we were unable to recover it. 
00:25:03.061 [2024-07-12 17:14:02.222848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.061 [2024-07-12 17:14:02.222872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.061 qpair failed and we were unable to recover it. 00:25:03.061 [2024-07-12 17:14:02.223127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.061 [2024-07-12 17:14:02.223174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.061 qpair failed and we were unable to recover it. 00:25:03.061 [2024-07-12 17:14:02.223374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.061 [2024-07-12 17:14:02.223425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.061 qpair failed and we were unable to recover it. 00:25:03.061 [2024-07-12 17:14:02.223646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.061 [2024-07-12 17:14:02.223669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.061 qpair failed and we were unable to recover it. 00:25:03.061 [2024-07-12 17:14:02.223842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.061 [2024-07-12 17:14:02.223867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.061 qpair failed and we were unable to recover it. 00:25:03.061 [2024-07-12 17:14:02.224061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.061 [2024-07-12 17:14:02.224107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.061 qpair failed and we were unable to recover it. 00:25:03.061 [2024-07-12 17:14:02.224272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.061 [2024-07-12 17:14:02.224319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.061 qpair failed and we were unable to recover it. 00:25:03.061 [2024-07-12 17:14:02.224489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.061 [2024-07-12 17:14:02.224539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.061 qpair failed and we were unable to recover it. 00:25:03.061 [2024-07-12 17:14:02.224768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.061 [2024-07-12 17:14:02.224793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.061 qpair failed and we were unable to recover it. 00:25:03.061 [2024-07-12 17:14:02.224986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.061 [2024-07-12 17:14:02.225037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.061 qpair failed and we were unable to recover it. 
00:25:03.061 [2024-07-12 17:14:02.225231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.061 [2024-07-12 17:14:02.225279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.061 qpair failed and we were unable to recover it. 00:25:03.061 [2024-07-12 17:14:02.225480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.061 [2024-07-12 17:14:02.225530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.061 qpair failed and we were unable to recover it. 00:25:03.061 [2024-07-12 17:14:02.225781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.061 [2024-07-12 17:14:02.225806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.061 qpair failed and we were unable to recover it. 00:25:03.061 [2024-07-12 17:14:02.226017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.061 [2024-07-12 17:14:02.226066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.061 qpair failed and we were unable to recover it. 00:25:03.061 [2024-07-12 17:14:02.226265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.061 [2024-07-12 17:14:02.226311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.061 qpair failed and we were unable to recover it. 00:25:03.061 [2024-07-12 17:14:02.226567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.061 [2024-07-12 17:14:02.226618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.061 qpair failed and we were unable to recover it. 00:25:03.061 [2024-07-12 17:14:02.226867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.061 [2024-07-12 17:14:02.226933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.061 qpair failed and we were unable to recover it. 00:25:03.061 [2024-07-12 17:14:02.227133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.061 [2024-07-12 17:14:02.227182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.061 qpair failed and we were unable to recover it. 00:25:03.061 [2024-07-12 17:14:02.227435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.061 [2024-07-12 17:14:02.227484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.061 qpair failed and we were unable to recover it. 00:25:03.061 [2024-07-12 17:14:02.227700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.061 [2024-07-12 17:14:02.227724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.061 qpair failed and we were unable to recover it. 
00:25:03.061 [2024-07-12 17:14:02.227961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.061 [2024-07-12 17:14:02.227986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.061 qpair failed and we were unable to recover it. 00:25:03.061 [2024-07-12 17:14:02.228179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.061 [2024-07-12 17:14:02.228229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.061 qpair failed and we were unable to recover it. 00:25:03.061 [2024-07-12 17:14:02.228372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.061 [2024-07-12 17:14:02.228423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.061 qpair failed and we were unable to recover it. 00:25:03.061 [2024-07-12 17:14:02.228640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.061 [2024-07-12 17:14:02.228664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.061 qpair failed and we were unable to recover it. 00:25:03.061 [2024-07-12 17:14:02.228909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.061 [2024-07-12 17:14:02.228934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.061 qpair failed and we were unable to recover it. 00:25:03.061 [2024-07-12 17:14:02.229090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.061 [2024-07-12 17:14:02.229143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.061 qpair failed and we were unable to recover it. 00:25:03.061 [2024-07-12 17:14:02.229305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.061 [2024-07-12 17:14:02.229355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.061 qpair failed and we were unable to recover it. 00:25:03.061 [2024-07-12 17:14:02.229555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.061 [2024-07-12 17:14:02.229579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.061 qpair failed and we were unable to recover it. 00:25:03.061 [2024-07-12 17:14:02.229832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.062 [2024-07-12 17:14:02.229883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.062 qpair failed and we were unable to recover it. 00:25:03.062 [2024-07-12 17:14:02.230123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.062 [2024-07-12 17:14:02.230177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.062 qpair failed and we were unable to recover it. 
00:25:03.062 [2024-07-12 17:14:02.230347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.062 [2024-07-12 17:14:02.230398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.062 qpair failed and we were unable to recover it. 00:25:03.062 [2024-07-12 17:14:02.230602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.062 [2024-07-12 17:14:02.230626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.062 qpair failed and we were unable to recover it. 00:25:03.062 [2024-07-12 17:14:02.230858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.062 [2024-07-12 17:14:02.230909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.062 qpair failed and we were unable to recover it. 00:25:03.062 [2024-07-12 17:14:02.231151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.062 [2024-07-12 17:14:02.231202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.062 qpair failed and we were unable to recover it. 00:25:03.062 [2024-07-12 17:14:02.231397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.062 [2024-07-12 17:14:02.231445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.062 qpair failed and we were unable to recover it. 00:25:03.062 [2024-07-12 17:14:02.231675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.062 [2024-07-12 17:14:02.231698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.062 qpair failed and we were unable to recover it. 00:25:03.062 [2024-07-12 17:14:02.231994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.062 [2024-07-12 17:14:02.232063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.062 qpair failed and we were unable to recover it. 00:25:03.062 [2024-07-12 17:14:02.232223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.062 [2024-07-12 17:14:02.232275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.062 qpair failed and we were unable to recover it. 00:25:03.062 [2024-07-12 17:14:02.232453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.062 [2024-07-12 17:14:02.232500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.062 qpair failed and we were unable to recover it. 00:25:03.062 [2024-07-12 17:14:02.232656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.062 [2024-07-12 17:14:02.232679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.062 qpair failed and we were unable to recover it. 
00:25:03.062 [2024-07-12 17:14:02.232832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.062 [2024-07-12 17:14:02.232881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.062 qpair failed and we were unable to recover it. 00:25:03.062 [2024-07-12 17:14:02.233043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.062 [2024-07-12 17:14:02.233089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.062 qpair failed and we were unable to recover it. 00:25:03.062 [2024-07-12 17:14:02.233243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.062 [2024-07-12 17:14:02.233301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.062 qpair failed and we were unable to recover it. 00:25:03.062 [2024-07-12 17:14:02.233448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.062 [2024-07-12 17:14:02.233471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.062 qpair failed and we were unable to recover it. 00:25:03.062 [2024-07-12 17:14:02.233639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.062 [2024-07-12 17:14:02.233676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.062 qpair failed and we were unable to recover it. 00:25:03.062 [2024-07-12 17:14:02.233851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.062 [2024-07-12 17:14:02.233902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.062 qpair failed and we were unable to recover it. 00:25:03.062 [2024-07-12 17:14:02.234051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.062 [2024-07-12 17:14:02.234105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.062 qpair failed and we were unable to recover it. 00:25:03.062 [2024-07-12 17:14:02.234237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.062 [2024-07-12 17:14:02.234298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.062 qpair failed and we were unable to recover it. 00:25:03.062 [2024-07-12 17:14:02.234424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.062 [2024-07-12 17:14:02.234462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.062 qpair failed and we were unable to recover it. 00:25:03.062 [2024-07-12 17:14:02.234614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.062 [2024-07-12 17:14:02.234638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.062 qpair failed and we were unable to recover it. 
00:25:03.062 [2024-07-12 17:14:02.234762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.062 [2024-07-12 17:14:02.234802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.062 qpair failed and we were unable to recover it. 00:25:03.062 [2024-07-12 17:14:02.234979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.062 [2024-07-12 17:14:02.235027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.062 qpair failed and we were unable to recover it. 00:25:03.062 [2024-07-12 17:14:02.235163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.062 [2024-07-12 17:14:02.235223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.062 qpair failed and we were unable to recover it. 00:25:03.062 [2024-07-12 17:14:02.235386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.062 [2024-07-12 17:14:02.235408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.062 qpair failed and we were unable to recover it. 00:25:03.062 [2024-07-12 17:14:02.235587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.062 [2024-07-12 17:14:02.235611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.062 qpair failed and we were unable to recover it. 00:25:03.062 [2024-07-12 17:14:02.235723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.062 [2024-07-12 17:14:02.235777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.062 qpair failed and we were unable to recover it. 00:25:03.062 [2024-07-12 17:14:02.235930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.062 [2024-07-12 17:14:02.235996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.062 qpair failed and we were unable to recover it. 00:25:03.062 [2024-07-12 17:14:02.236146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.062 [2024-07-12 17:14:02.236198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.062 qpair failed and we were unable to recover it. 00:25:03.062 [2024-07-12 17:14:02.236349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.062 [2024-07-12 17:14:02.236398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.062 qpair failed and we were unable to recover it. 00:25:03.062 [2024-07-12 17:14:02.236539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.062 [2024-07-12 17:14:02.236577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.062 qpair failed and we were unable to recover it. 
00:25:03.062 [2024-07-12 17:14:02.236699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.062 [2024-07-12 17:14:02.236723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.062 qpair failed and we were unable to recover it. 00:25:03.062 [2024-07-12 17:14:02.236859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.062 [2024-07-12 17:14:02.236884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.062 qpair failed and we were unable to recover it. 00:25:03.062 [2024-07-12 17:14:02.237050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.062 [2024-07-12 17:14:02.237085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.062 qpair failed and we were unable to recover it. 00:25:03.062 [2024-07-12 17:14:02.237278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.062 [2024-07-12 17:14:02.237301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.062 qpair failed and we were unable to recover it. 00:25:03.062 [2024-07-12 17:14:02.237446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.062 [2024-07-12 17:14:02.237470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.062 qpair failed and we were unable to recover it. 00:25:03.062 [2024-07-12 17:14:02.237614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.062 [2024-07-12 17:14:02.237651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.062 qpair failed and we were unable to recover it. 00:25:03.062 [2024-07-12 17:14:02.237862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.062 [2024-07-12 17:14:02.237887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.062 qpair failed and we were unable to recover it. 00:25:03.062 [2024-07-12 17:14:02.238042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.062 [2024-07-12 17:14:02.238093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.062 qpair failed and we were unable to recover it. 00:25:03.062 [2024-07-12 17:14:02.238235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.062 [2024-07-12 17:14:02.238285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.062 qpair failed and we were unable to recover it. 00:25:03.062 [2024-07-12 17:14:02.238417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.062 [2024-07-12 17:14:02.238454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.062 qpair failed and we were unable to recover it. 
00:25:03.062 [2024-07-12 17:14:02.238581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.062 [2024-07-12 17:14:02.238605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.062 qpair failed and we were unable to recover it. 00:25:03.062 [2024-07-12 17:14:02.238749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.062 [2024-07-12 17:14:02.238788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.062 qpair failed and we were unable to recover it. 00:25:03.062 [2024-07-12 17:14:02.238928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.062 [2024-07-12 17:14:02.238953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.062 qpair failed and we were unable to recover it. 00:25:03.062 [2024-07-12 17:14:02.239094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.062 [2024-07-12 17:14:02.239133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.062 qpair failed and we were unable to recover it. 00:25:03.062 [2024-07-12 17:14:02.239263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.063 [2024-07-12 17:14:02.239301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.063 qpair failed and we were unable to recover it. 00:25:03.063 [2024-07-12 17:14:02.239457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.063 [2024-07-12 17:14:02.239496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.063 qpair failed and we were unable to recover it. 00:25:03.063 [2024-07-12 17:14:02.239670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.063 [2024-07-12 17:14:02.239693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.063 qpair failed and we were unable to recover it. 00:25:03.063 [2024-07-12 17:14:02.239832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.063 [2024-07-12 17:14:02.239871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.063 qpair failed and we were unable to recover it. 00:25:03.063 [2024-07-12 17:14:02.239964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.063 [2024-07-12 17:14:02.239997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.063 qpair failed and we were unable to recover it. 00:25:03.063 [2024-07-12 17:14:02.240127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.063 [2024-07-12 17:14:02.240150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.063 qpair failed and we were unable to recover it. 
00:25:03.063 [2024-07-12 17:14:02.240336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.063 [2024-07-12 17:14:02.240359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.063 qpair failed and we were unable to recover it. 00:25:03.063 [2024-07-12 17:14:02.240492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.063 [2024-07-12 17:14:02.240516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.063 qpair failed and we were unable to recover it. 00:25:03.063 [2024-07-12 17:14:02.240664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.063 [2024-07-12 17:14:02.240702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.063 qpair failed and we were unable to recover it. 00:25:03.063 [2024-07-12 17:14:02.240848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.063 [2024-07-12 17:14:02.240873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.063 qpair failed and we were unable to recover it. 00:25:03.063 [2024-07-12 17:14:02.241018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.063 [2024-07-12 17:14:02.241042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.063 qpair failed and we were unable to recover it. 00:25:03.063 [2024-07-12 17:14:02.241177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.063 [2024-07-12 17:14:02.241215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.063 qpair failed and we were unable to recover it. 00:25:03.063 [2024-07-12 17:14:02.241317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.063 [2024-07-12 17:14:02.241340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.063 qpair failed and we were unable to recover it. 00:25:03.063 [2024-07-12 17:14:02.241478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.063 [2024-07-12 17:14:02.241502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.063 qpair failed and we were unable to recover it. 00:25:03.063 [2024-07-12 17:14:02.241653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.063 [2024-07-12 17:14:02.241691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.063 qpair failed and we were unable to recover it. 00:25:03.063 [2024-07-12 17:14:02.241845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.063 [2024-07-12 17:14:02.241884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.063 qpair failed and we were unable to recover it. 
00:25:03.063 [2024-07-12 17:14:02.242008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.063 [2024-07-12 17:14:02.242032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.063 qpair failed and we were unable to recover it. 00:25:03.063 [2024-07-12 17:14:02.242145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.063 [2024-07-12 17:14:02.242169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.063 qpair failed and we were unable to recover it. 00:25:03.063 [2024-07-12 17:14:02.242334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.063 [2024-07-12 17:14:02.242358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.063 qpair failed and we were unable to recover it. 00:25:03.063 [2024-07-12 17:14:02.242493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.063 [2024-07-12 17:14:02.242517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.063 qpair failed and we were unable to recover it. 00:25:03.063 [2024-07-12 17:14:02.242702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.063 [2024-07-12 17:14:02.242724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.063 qpair failed and we were unable to recover it. 00:25:03.063 [2024-07-12 17:14:02.242844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.063 [2024-07-12 17:14:02.242869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.063 qpair failed and we were unable to recover it. 00:25:03.063 [2024-07-12 17:14:02.243041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.063 [2024-07-12 17:14:02.243066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.063 qpair failed and we were unable to recover it. 00:25:03.063 [2024-07-12 17:14:02.243186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.063 [2024-07-12 17:14:02.243224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.063 qpair failed and we were unable to recover it. 00:25:03.063 [2024-07-12 17:14:02.243342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.063 [2024-07-12 17:14:02.243365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.063 qpair failed and we were unable to recover it. 00:25:03.063 [2024-07-12 17:14:02.243536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.063 [2024-07-12 17:14:02.243560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.063 qpair failed and we were unable to recover it. 
00:25:03.063 [2024-07-12 17:14:02.243697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.063 [2024-07-12 17:14:02.243734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.063 qpair failed and we were unable to recover it. 00:25:03.063 [2024-07-12 17:14:02.243849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.063 [2024-07-12 17:14:02.243873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.063 qpair failed and we were unable to recover it. 00:25:03.063 [2024-07-12 17:14:02.243989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.063 [2024-07-12 17:14:02.244047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.063 qpair failed and we were unable to recover it. 00:25:03.063 [2024-07-12 17:14:02.244164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.063 [2024-07-12 17:14:02.244202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.063 qpair failed and we were unable to recover it. 00:25:03.063 [2024-07-12 17:14:02.244361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.063 [2024-07-12 17:14:02.244399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.063 qpair failed and we were unable to recover it. 00:25:03.063 [2024-07-12 17:14:02.244497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.063 [2024-07-12 17:14:02.244521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.063 qpair failed and we were unable to recover it. 00:25:03.063 [2024-07-12 17:14:02.244659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.063 [2024-07-12 17:14:02.244682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.063 qpair failed and we were unable to recover it. 00:25:03.063 [2024-07-12 17:14:02.244820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.063 [2024-07-12 17:14:02.244845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.063 qpair failed and we were unable to recover it. 00:25:03.063 [2024-07-12 17:14:02.244986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.063 [2024-07-12 17:14:02.245009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.063 qpair failed and we were unable to recover it. 00:25:03.063 [2024-07-12 17:14:02.245163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.063 [2024-07-12 17:14:02.245185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.063 qpair failed and we were unable to recover it. 
00:25:03.063 [2024-07-12 17:14:02.245320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.063 [2024-07-12 17:14:02.245344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.063 qpair failed and we were unable to recover it. 00:25:03.063 [2024-07-12 17:14:02.245476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.063 [2024-07-12 17:14:02.245499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.063 qpair failed and we were unable to recover it. 00:25:03.063 [2024-07-12 17:14:02.245631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.063 [2024-07-12 17:14:02.245656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.063 qpair failed and we were unable to recover it. 00:25:03.063 [2024-07-12 17:14:02.245802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.063 [2024-07-12 17:14:02.245850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.063 qpair failed and we were unable to recover it. 00:25:03.063 [2024-07-12 17:14:02.245970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.063 [2024-07-12 17:14:02.246000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.063 qpair failed and we were unable to recover it. 00:25:03.063 [2024-07-12 17:14:02.246177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.063 [2024-07-12 17:14:02.246228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.063 qpair failed and we were unable to recover it. 00:25:03.063 [2024-07-12 17:14:02.246341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.063 [2024-07-12 17:14:02.246365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.063 qpair failed and we were unable to recover it. 00:25:03.064 [2024-07-12 17:14:02.246510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.064 [2024-07-12 17:14:02.246534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.064 qpair failed and we were unable to recover it. 00:25:03.064 [2024-07-12 17:14:02.246651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.064 [2024-07-12 17:14:02.246676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.064 qpair failed and we were unable to recover it. 00:25:03.064 [2024-07-12 17:14:02.246799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.064 [2024-07-12 17:14:02.246824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.064 qpair failed and we were unable to recover it. 
00:25:03.064 [2024-07-12 17:14:02.246916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.064 [2024-07-12 17:14:02.246960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.064 qpair failed and we were unable to recover it. 00:25:03.064 [2024-07-12 17:14:02.247083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.064 [2024-07-12 17:14:02.247107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.064 qpair failed and we were unable to recover it. 00:25:03.064 [2024-07-12 17:14:02.247234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.064 [2024-07-12 17:14:02.247258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.064 qpair failed and we were unable to recover it. 00:25:03.064 [2024-07-12 17:14:02.247431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.064 [2024-07-12 17:14:02.247455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.064 qpair failed and we were unable to recover it. 00:25:03.064 [2024-07-12 17:14:02.247574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.064 [2024-07-12 17:14:02.247598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.064 qpair failed and we were unable to recover it. 00:25:03.064 [2024-07-12 17:14:02.247778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.064 [2024-07-12 17:14:02.247804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.064 qpair failed and we were unable to recover it. 00:25:03.064 [2024-07-12 17:14:02.247899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.064 [2024-07-12 17:14:02.247923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.064 qpair failed and we were unable to recover it. 00:25:03.064 [2024-07-12 17:14:02.248069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.064 [2024-07-12 17:14:02.248093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.064 qpair failed and we were unable to recover it. 00:25:03.064 [2024-07-12 17:14:02.248261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.064 [2024-07-12 17:14:02.248286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.064 qpair failed and we were unable to recover it. 00:25:03.064 [2024-07-12 17:14:02.248450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.064 [2024-07-12 17:14:02.248474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.064 qpair failed and we were unable to recover it. 
00:25:03.064 [2024-07-12 17:14:02.248586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.064 [2024-07-12 17:14:02.248610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.064 qpair failed and we were unable to recover it. 00:25:03.064 [2024-07-12 17:14:02.248722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.064 [2024-07-12 17:14:02.248756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.064 qpair failed and we were unable to recover it. 00:25:03.064 [2024-07-12 17:14:02.248900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.064 [2024-07-12 17:14:02.248924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.064 qpair failed and we were unable to recover it. 00:25:03.064 [2024-07-12 17:14:02.249041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.064 [2024-07-12 17:14:02.249064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.064 qpair failed and we were unable to recover it. 00:25:03.064 [2024-07-12 17:14:02.249182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.064 [2024-07-12 17:14:02.249206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.064 qpair failed and we were unable to recover it. 00:25:03.064 [2024-07-12 17:14:02.249365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.064 [2024-07-12 17:14:02.249389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.064 qpair failed and we were unable to recover it. 00:25:03.064 [2024-07-12 17:14:02.249526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.064 [2024-07-12 17:14:02.249564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.064 qpair failed and we were unable to recover it. 00:25:03.064 [2024-07-12 17:14:02.249662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.064 [2024-07-12 17:14:02.249686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.064 qpair failed and we were unable to recover it. 00:25:03.064 [2024-07-12 17:14:02.249818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.064 [2024-07-12 17:14:02.249843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.064 qpair failed and we were unable to recover it. 00:25:03.064 [2024-07-12 17:14:02.249948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.064 [2024-07-12 17:14:02.249972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.064 qpair failed and we were unable to recover it. 
00:25:03.064 [2024-07-12 17:14:02.250070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.064 [2024-07-12 17:14:02.250109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420
00:25:03.064 qpair failed and we were unable to recover it.
[... the same error sequence — posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats for every reconnect attempt between 2024-07-12 17:14:02.250070 and 17:14:02.285688; only the timestamps differ ...]
00:25:03.068 [2024-07-12 17:14:02.285664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.068 [2024-07-12 17:14:02.285688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420
00:25:03.068 qpair failed and we were unable to recover it.
00:25:03.068 [2024-07-12 17:14:02.285807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.068 [2024-07-12 17:14:02.285831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.068 qpair failed and we were unable to recover it. 00:25:03.068 [2024-07-12 17:14:02.285971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.068 [2024-07-12 17:14:02.285995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.068 qpair failed and we were unable to recover it. 00:25:03.068 [2024-07-12 17:14:02.286148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.068 [2024-07-12 17:14:02.286186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.068 qpair failed and we were unable to recover it. 00:25:03.068 [2024-07-12 17:14:02.286281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.068 [2024-07-12 17:14:02.286318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.068 qpair failed and we were unable to recover it. 00:25:03.068 [2024-07-12 17:14:02.286497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.068 [2024-07-12 17:14:02.286521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.068 qpair failed and we were unable to recover it. 00:25:03.068 [2024-07-12 17:14:02.286625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.068 [2024-07-12 17:14:02.286649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.068 qpair failed and we were unable to recover it. 00:25:03.068 [2024-07-12 17:14:02.286776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.068 [2024-07-12 17:14:02.286802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.068 qpair failed and we were unable to recover it. 00:25:03.068 [2024-07-12 17:14:02.286948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.068 [2024-07-12 17:14:02.286973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.068 qpair failed and we were unable to recover it. 00:25:03.068 [2024-07-12 17:14:02.287149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.068 [2024-07-12 17:14:02.287205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.068 qpair failed and we were unable to recover it. 00:25:03.068 [2024-07-12 17:14:02.287337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.068 [2024-07-12 17:14:02.287360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.068 qpair failed and we were unable to recover it. 
00:25:03.068 [2024-07-12 17:14:02.287541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.068 [2024-07-12 17:14:02.287579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.068 qpair failed and we were unable to recover it. 00:25:03.068 [2024-07-12 17:14:02.287714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.068 [2024-07-12 17:14:02.287742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.068 qpair failed and we were unable to recover it. 00:25:03.068 [2024-07-12 17:14:02.287879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.068 [2024-07-12 17:14:02.287937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.068 qpair failed and we were unable to recover it. 00:25:03.069 [2024-07-12 17:14:02.288089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.069 [2024-07-12 17:14:02.288139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.069 qpair failed and we were unable to recover it. 00:25:03.069 [2024-07-12 17:14:02.288290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.069 [2024-07-12 17:14:02.288342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.069 qpair failed and we were unable to recover it. 00:25:03.069 [2024-07-12 17:14:02.288472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.069 [2024-07-12 17:14:02.288509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.069 qpair failed and we were unable to recover it. 00:25:03.069 [2024-07-12 17:14:02.288675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.069 [2024-07-12 17:14:02.288713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.069 qpair failed and we were unable to recover it. 00:25:03.069 [2024-07-12 17:14:02.288889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.069 [2024-07-12 17:14:02.288912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.069 qpair failed and we were unable to recover it. 00:25:03.069 [2024-07-12 17:14:02.289033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.069 [2024-07-12 17:14:02.289057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.069 qpair failed and we were unable to recover it. 00:25:03.069 [2024-07-12 17:14:02.289191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.069 [2024-07-12 17:14:02.289229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.069 qpair failed and we were unable to recover it. 
00:25:03.069 [2024-07-12 17:14:02.289329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.069 [2024-07-12 17:14:02.289352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.069 qpair failed and we were unable to recover it. 00:25:03.069 [2024-07-12 17:14:02.289431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.069 [2024-07-12 17:14:02.289456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.069 qpair failed and we were unable to recover it. 00:25:03.069 [2024-07-12 17:14:02.289554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.069 [2024-07-12 17:14:02.289579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.069 qpair failed and we were unable to recover it. 00:25:03.069 [2024-07-12 17:14:02.289749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.069 [2024-07-12 17:14:02.289788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.069 qpair failed and we were unable to recover it. 00:25:03.069 [2024-07-12 17:14:02.289910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.069 [2024-07-12 17:14:02.289935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.069 qpair failed and we were unable to recover it. 00:25:03.069 [2024-07-12 17:14:02.290030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.069 [2024-07-12 17:14:02.290054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.069 qpair failed and we were unable to recover it. 00:25:03.069 [2024-07-12 17:14:02.290163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.069 [2024-07-12 17:14:02.290187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.069 qpair failed and we were unable to recover it. 00:25:03.069 [2024-07-12 17:14:02.290322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.069 [2024-07-12 17:14:02.290346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.069 qpair failed and we were unable to recover it. 00:25:03.069 [2024-07-12 17:14:02.290461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.069 [2024-07-12 17:14:02.290485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.069 qpair failed and we were unable to recover it. 00:25:03.069 [2024-07-12 17:14:02.290593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.069 [2024-07-12 17:14:02.290617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.069 qpair failed and we were unable to recover it. 
00:25:03.069 [2024-07-12 17:14:02.290713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.069 [2024-07-12 17:14:02.290762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.069 qpair failed and we were unable to recover it. 00:25:03.069 [2024-07-12 17:14:02.290864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.069 [2024-07-12 17:14:02.290889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.069 qpair failed and we were unable to recover it. 00:25:03.069 [2024-07-12 17:14:02.291048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.069 [2024-07-12 17:14:02.291072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.069 qpair failed and we were unable to recover it. 00:25:03.069 [2024-07-12 17:14:02.291193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.069 [2024-07-12 17:14:02.291231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.069 qpair failed and we were unable to recover it. 00:25:03.069 [2024-07-12 17:14:02.291318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.069 [2024-07-12 17:14:02.291342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.069 qpair failed and we were unable to recover it. 00:25:03.069 [2024-07-12 17:14:02.291484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.069 [2024-07-12 17:14:02.291508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.069 qpair failed and we were unable to recover it. 00:25:03.069 [2024-07-12 17:14:02.291684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.069 [2024-07-12 17:14:02.291707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.069 qpair failed and we were unable to recover it. 00:25:03.069 [2024-07-12 17:14:02.291819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.069 [2024-07-12 17:14:02.291844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.069 qpair failed and we were unable to recover it. 00:25:03.069 [2024-07-12 17:14:02.291959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.069 [2024-07-12 17:14:02.291984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.069 qpair failed and we were unable to recover it. 00:25:03.069 [2024-07-12 17:14:02.292114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.069 [2024-07-12 17:14:02.292138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.069 qpair failed and we were unable to recover it. 
00:25:03.069 [2024-07-12 17:14:02.292305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.069 [2024-07-12 17:14:02.292329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.069 qpair failed and we were unable to recover it. 00:25:03.069 [2024-07-12 17:14:02.292432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.069 [2024-07-12 17:14:02.292456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.069 qpair failed and we were unable to recover it. 00:25:03.069 [2024-07-12 17:14:02.292599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.069 [2024-07-12 17:14:02.292623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.069 qpair failed and we were unable to recover it. 00:25:03.069 [2024-07-12 17:14:02.292791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.069 [2024-07-12 17:14:02.292815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.069 qpair failed and we were unable to recover it. 00:25:03.069 [2024-07-12 17:14:02.292919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.069 [2024-07-12 17:14:02.292943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.069 qpair failed and we were unable to recover it. 00:25:03.069 [2024-07-12 17:14:02.293091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.069 [2024-07-12 17:14:02.293115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.069 qpair failed and we were unable to recover it. 00:25:03.069 [2024-07-12 17:14:02.293226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.069 [2024-07-12 17:14:02.293264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.069 qpair failed and we were unable to recover it. 00:25:03.069 [2024-07-12 17:14:02.293415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.069 [2024-07-12 17:14:02.293453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.069 qpair failed and we were unable to recover it. 00:25:03.069 [2024-07-12 17:14:02.293547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.069 [2024-07-12 17:14:02.293571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.069 qpair failed and we were unable to recover it. 00:25:03.069 [2024-07-12 17:14:02.293656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.069 [2024-07-12 17:14:02.293680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.069 qpair failed and we were unable to recover it. 
00:25:03.069 [2024-07-12 17:14:02.293783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.069 [2024-07-12 17:14:02.293808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.069 qpair failed and we were unable to recover it. 00:25:03.069 [2024-07-12 17:14:02.293925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.069 [2024-07-12 17:14:02.293950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.069 qpair failed and we were unable to recover it. 00:25:03.069 [2024-07-12 17:14:02.294065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.069 [2024-07-12 17:14:02.294089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.069 qpair failed and we were unable to recover it. 00:25:03.069 [2024-07-12 17:14:02.294218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.069 [2024-07-12 17:14:02.294256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.069 qpair failed and we were unable to recover it. 00:25:03.069 [2024-07-12 17:14:02.294353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.069 [2024-07-12 17:14:02.294377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.069 qpair failed and we were unable to recover it. 00:25:03.069 [2024-07-12 17:14:02.294518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.069 [2024-07-12 17:14:02.294542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.069 qpair failed and we were unable to recover it. 00:25:03.069 [2024-07-12 17:14:02.294672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.069 [2024-07-12 17:14:02.294695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.069 qpair failed and we were unable to recover it. 00:25:03.069 [2024-07-12 17:14:02.294828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.069 [2024-07-12 17:14:02.294853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.069 qpair failed and we were unable to recover it. 00:25:03.069 [2024-07-12 17:14:02.294991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.069 [2024-07-12 17:14:02.295030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.069 qpair failed and we were unable to recover it. 00:25:03.069 [2024-07-12 17:14:02.295124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.069 [2024-07-12 17:14:02.295148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.069 qpair failed and we were unable to recover it. 
00:25:03.069 [2024-07-12 17:14:02.295311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.069 [2024-07-12 17:14:02.295334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.069 qpair failed and we were unable to recover it. 00:25:03.069 [2024-07-12 17:14:02.295466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.069 [2024-07-12 17:14:02.295490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.069 qpair failed and we were unable to recover it. 00:25:03.069 [2024-07-12 17:14:02.295614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.069 [2024-07-12 17:14:02.295638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.069 qpair failed and we were unable to recover it. 00:25:03.069 [2024-07-12 17:14:02.295758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.069 [2024-07-12 17:14:02.295782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.069 qpair failed and we were unable to recover it. 00:25:03.069 [2024-07-12 17:14:02.295892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.069 [2024-07-12 17:14:02.295915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.069 qpair failed and we were unable to recover it. 00:25:03.069 [2024-07-12 17:14:02.296033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.070 [2024-07-12 17:14:02.296057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.070 qpair failed and we were unable to recover it. 00:25:03.070 [2024-07-12 17:14:02.296200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.070 [2024-07-12 17:14:02.296223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.070 qpair failed and we were unable to recover it. 00:25:03.070 [2024-07-12 17:14:02.296358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.070 [2024-07-12 17:14:02.296382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.070 qpair failed and we were unable to recover it. 00:25:03.070 [2024-07-12 17:14:02.296539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.070 [2024-07-12 17:14:02.296562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.070 qpair failed and we were unable to recover it. 00:25:03.070 [2024-07-12 17:14:02.296698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.070 [2024-07-12 17:14:02.296722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.070 qpair failed and we were unable to recover it. 
00:25:03.070 [2024-07-12 17:14:02.296846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.070 [2024-07-12 17:14:02.296870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.070 qpair failed and we were unable to recover it. 00:25:03.070 [2024-07-12 17:14:02.297047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.070 [2024-07-12 17:14:02.297096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.070 qpair failed and we were unable to recover it. 00:25:03.070 [2024-07-12 17:14:02.297216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.070 [2024-07-12 17:14:02.297239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.070 qpair failed and we were unable to recover it. 00:25:03.070 [2024-07-12 17:14:02.297344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.070 [2024-07-12 17:14:02.297368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.070 qpair failed and we were unable to recover it. 00:25:03.070 [2024-07-12 17:14:02.297504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.070 [2024-07-12 17:14:02.297528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.070 qpair failed and we were unable to recover it. 00:25:03.070 [2024-07-12 17:14:02.297663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.070 [2024-07-12 17:14:02.297687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.070 qpair failed and we were unable to recover it. 00:25:03.070 [2024-07-12 17:14:02.297807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.070 [2024-07-12 17:14:02.297832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.070 qpair failed and we were unable to recover it. 00:25:03.070 [2024-07-12 17:14:02.297965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.070 [2024-07-12 17:14:02.297989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.070 qpair failed and we were unable to recover it. 00:25:03.070 [2024-07-12 17:14:02.298102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.070 [2024-07-12 17:14:02.298126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.070 qpair failed and we were unable to recover it. 00:25:03.070 [2024-07-12 17:14:02.298270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.070 [2024-07-12 17:14:02.298294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.070 qpair failed and we were unable to recover it. 
00:25:03.070 [2024-07-12 17:14:02.298403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.070 [2024-07-12 17:14:02.298427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.070 qpair failed and we were unable to recover it. 00:25:03.070 [2024-07-12 17:14:02.298538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.070 [2024-07-12 17:14:02.298562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.070 qpair failed and we were unable to recover it. 00:25:03.070 [2024-07-12 17:14:02.298721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.070 [2024-07-12 17:14:02.298775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.070 qpair failed and we were unable to recover it. 00:25:03.070 [2024-07-12 17:14:02.298858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.070 [2024-07-12 17:14:02.298883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.070 qpair failed and we were unable to recover it. 00:25:03.070 [2024-07-12 17:14:02.298999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.070 [2024-07-12 17:14:02.299024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.070 qpair failed and we were unable to recover it. 00:25:03.070 [2024-07-12 17:14:02.299192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.070 [2024-07-12 17:14:02.299215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.070 qpair failed and we were unable to recover it. 00:25:03.070 [2024-07-12 17:14:02.299314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.070 [2024-07-12 17:14:02.299337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.070 qpair failed and we were unable to recover it. 00:25:03.070 [2024-07-12 17:14:02.299502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.070 [2024-07-12 17:14:02.299525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.070 qpair failed and we were unable to recover it. 00:25:03.070 [2024-07-12 17:14:02.299686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.070 [2024-07-12 17:14:02.299710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.070 qpair failed and we were unable to recover it. 00:25:03.070 [2024-07-12 17:14:02.299887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.070 [2024-07-12 17:14:02.299948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.070 qpair failed and we were unable to recover it. 
00:25:03.070 [2024-07-12 17:14:02.300080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.070 [2024-07-12 17:14:02.300139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.070 qpair failed and we were unable to recover it. 00:25:03.070 [2024-07-12 17:14:02.300286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.070 [2024-07-12 17:14:02.300337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.070 qpair failed and we were unable to recover it. 00:25:03.070 [2024-07-12 17:14:02.300466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.070 [2024-07-12 17:14:02.300504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.070 qpair failed and we were unable to recover it. 00:25:03.070 [2024-07-12 17:14:02.300615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.070 [2024-07-12 17:14:02.300639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.070 qpair failed and we were unable to recover it. 00:25:03.070 [2024-07-12 17:14:02.300772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.070 [2024-07-12 17:14:02.300796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.070 qpair failed and we were unable to recover it. 00:25:03.070 [2024-07-12 17:14:02.300949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.070 [2024-07-12 17:14:02.300974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.070 qpair failed and we were unable to recover it. 00:25:03.070 [2024-07-12 17:14:02.301096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.070 [2024-07-12 17:14:02.301120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.070 qpair failed and we were unable to recover it. 00:25:03.070 [2024-07-12 17:14:02.301220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.070 [2024-07-12 17:14:02.301244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.070 qpair failed and we were unable to recover it. 00:25:03.070 [2024-07-12 17:14:02.301357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.070 [2024-07-12 17:14:02.301381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.070 qpair failed and we were unable to recover it. 00:25:03.070 [2024-07-12 17:14:02.301526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.070 [2024-07-12 17:14:02.301549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.070 qpair failed and we were unable to recover it. 
00:25:03.070 [2024-07-12 17:14:02.301660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.070 [2024-07-12 17:14:02.301684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.070 qpair failed and we were unable to recover it. 00:25:03.070 [2024-07-12 17:14:02.301858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.070 [2024-07-12 17:14:02.301909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.070 qpair failed and we were unable to recover it. 00:25:03.070 [2024-07-12 17:14:02.302048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.070 [2024-07-12 17:14:02.302099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.070 qpair failed and we were unable to recover it. 00:25:03.070 [2024-07-12 17:14:02.302207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.070 [2024-07-12 17:14:02.302244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.070 qpair failed and we were unable to recover it. 00:25:03.070 [2024-07-12 17:14:02.302341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.070 [2024-07-12 17:14:02.302365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.070 qpair failed and we were unable to recover it. 00:25:03.070 [2024-07-12 17:14:02.302475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.070 [2024-07-12 17:14:02.302498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.070 qpair failed and we were unable to recover it. 00:25:03.070 [2024-07-12 17:14:02.302638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.070 [2024-07-12 17:14:02.302662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.070 qpair failed and we were unable to recover it. 00:25:03.070 [2024-07-12 17:14:02.302799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.070 [2024-07-12 17:14:02.302824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.070 qpair failed and we were unable to recover it. 00:25:03.070 [2024-07-12 17:14:02.302930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.070 [2024-07-12 17:14:02.302955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.070 qpair failed and we were unable to recover it. 00:25:03.070 [2024-07-12 17:14:02.303088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.070 [2024-07-12 17:14:02.303115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.070 qpair failed and we were unable to recover it. 
00:25:03.070 [2024-07-12 17:14:02.303281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.070 [2024-07-12 17:14:02.303318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.070 qpair failed and we were unable to recover it. 00:25:03.070 [2024-07-12 17:14:02.303424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.070 [2024-07-12 17:14:02.303448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.070 qpair failed and we were unable to recover it. 00:25:03.070 [2024-07-12 17:14:02.303612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.070 [2024-07-12 17:14:02.303637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.070 qpair failed and we were unable to recover it. 00:25:03.070 [2024-07-12 17:14:02.303770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.070 [2024-07-12 17:14:02.303810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.070 qpair failed and we were unable to recover it. 00:25:03.070 [2024-07-12 17:14:02.303895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.303919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 00:25:03.071 [2024-07-12 17:14:02.304071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.304095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 00:25:03.071 [2024-07-12 17:14:02.304221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.304258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 00:25:03.071 [2024-07-12 17:14:02.304358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.304382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 00:25:03.071 [2024-07-12 17:14:02.304541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.304565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 00:25:03.071 [2024-07-12 17:14:02.304677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.304700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 
00:25:03.071 [2024-07-12 17:14:02.304844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.304868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 00:25:03.071 [2024-07-12 17:14:02.305009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.305047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 00:25:03.071 [2024-07-12 17:14:02.305188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.305225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 00:25:03.071 [2024-07-12 17:14:02.305351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.305375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 00:25:03.071 [2024-07-12 17:14:02.305474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.305497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 00:25:03.071 [2024-07-12 17:14:02.305637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.305661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 00:25:03.071 [2024-07-12 17:14:02.305822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.305847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 00:25:03.071 [2024-07-12 17:14:02.305986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.306041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 00:25:03.071 [2024-07-12 17:14:02.306188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.306238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 00:25:03.071 [2024-07-12 17:14:02.306370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.306408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 
00:25:03.071 [2024-07-12 17:14:02.306532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.306555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 00:25:03.071 [2024-07-12 17:14:02.306697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.306735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 00:25:03.071 [2024-07-12 17:14:02.306832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.306856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 00:25:03.071 [2024-07-12 17:14:02.306967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.306991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 00:25:03.071 [2024-07-12 17:14:02.307109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.307133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 00:25:03.071 [2024-07-12 17:14:02.307237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.307275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 00:25:03.071 [2024-07-12 17:14:02.307403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.307430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 00:25:03.071 [2024-07-12 17:14:02.307516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.307540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 00:25:03.071 [2024-07-12 17:14:02.307634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.307659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 00:25:03.071 [2024-07-12 17:14:02.307820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.307844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 
00:25:03.071 [2024-07-12 17:14:02.307972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.307997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 00:25:03.071 [2024-07-12 17:14:02.308127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.308151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 00:25:03.071 [2024-07-12 17:14:02.308262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.308286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 00:25:03.071 [2024-07-12 17:14:02.308394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.308418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 00:25:03.071 [2024-07-12 17:14:02.308563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.308586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 00:25:03.071 [2024-07-12 17:14:02.308668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.308692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 00:25:03.071 [2024-07-12 17:14:02.308802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.308827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 00:25:03.071 [2024-07-12 17:14:02.308934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.308958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 00:25:03.071 [2024-07-12 17:14:02.309145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.309167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 00:25:03.071 [2024-07-12 17:14:02.309327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.309351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 
00:25:03.071 [2024-07-12 17:14:02.309450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.309474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 00:25:03.071 [2024-07-12 17:14:02.309633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.309656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 00:25:03.071 [2024-07-12 17:14:02.309795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.309820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 00:25:03.071 [2024-07-12 17:14:02.309909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.309933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 00:25:03.071 [2024-07-12 17:14:02.310053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.310112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 00:25:03.071 [2024-07-12 17:14:02.310228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.310251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 00:25:03.071 [2024-07-12 17:14:02.310390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.310414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 00:25:03.071 [2024-07-12 17:14:02.310554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.310577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 00:25:03.071 [2024-07-12 17:14:02.310707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.310771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 00:25:03.071 [2024-07-12 17:14:02.310886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.310910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 
00:25:03.071 [2024-07-12 17:14:02.311013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.311052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 00:25:03.071 [2024-07-12 17:14:02.311236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.311273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 00:25:03.071 [2024-07-12 17:14:02.311415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.311438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 00:25:03.071 [2024-07-12 17:14:02.311553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.311576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 00:25:03.071 [2024-07-12 17:14:02.311776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.311801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 00:25:03.071 [2024-07-12 17:14:02.311912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.311936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 00:25:03.071 [2024-07-12 17:14:02.312067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.312091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 00:25:03.071 [2024-07-12 17:14:02.312207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.071 [2024-07-12 17:14:02.312231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.071 qpair failed and we were unable to recover it. 00:25:03.071 [2024-07-12 17:14:02.312393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.312430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 00:25:03.072 [2024-07-12 17:14:02.312549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.312586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 
00:25:03.072 [2024-07-12 17:14:02.312682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.312706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 00:25:03.072 [2024-07-12 17:14:02.312875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.312900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 00:25:03.072 [2024-07-12 17:14:02.313021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.313082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 00:25:03.072 [2024-07-12 17:14:02.313192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.313228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 00:25:03.072 [2024-07-12 17:14:02.313353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.313376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 00:25:03.072 [2024-07-12 17:14:02.313488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.313512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 00:25:03.072 [2024-07-12 17:14:02.313653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.313676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 00:25:03.072 [2024-07-12 17:14:02.313822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.313861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 00:25:03.072 [2024-07-12 17:14:02.313959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.313983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 00:25:03.072 [2024-07-12 17:14:02.314093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.314117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 
00:25:03.072 [2024-07-12 17:14:02.314227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.314251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 00:25:03.072 [2024-07-12 17:14:02.314363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.314387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 00:25:03.072 [2024-07-12 17:14:02.314498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.314532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 00:25:03.072 [2024-07-12 17:14:02.314650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.314674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 00:25:03.072 [2024-07-12 17:14:02.314811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.314836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 00:25:03.072 [2024-07-12 17:14:02.314982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.315007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 00:25:03.072 [2024-07-12 17:14:02.315162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.315199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 00:25:03.072 [2024-07-12 17:14:02.315370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.315393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 00:25:03.072 [2024-07-12 17:14:02.315492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.315515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 00:25:03.072 [2024-07-12 17:14:02.315679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.315702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 
00:25:03.072 [2024-07-12 17:14:02.315893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.315949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 00:25:03.072 [2024-07-12 17:14:02.316165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.316212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 00:25:03.072 [2024-07-12 17:14:02.316360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.316410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 00:25:03.072 [2024-07-12 17:14:02.316575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.316598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 00:25:03.072 [2024-07-12 17:14:02.316720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.316749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 00:25:03.072 [2024-07-12 17:14:02.316890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.316944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 00:25:03.072 [2024-07-12 17:14:02.317071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.317094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 00:25:03.072 [2024-07-12 17:14:02.317217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.317240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 00:25:03.072 [2024-07-12 17:14:02.317348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.317372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 00:25:03.072 [2024-07-12 17:14:02.317539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.317562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 
00:25:03.072 [2024-07-12 17:14:02.317733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.317764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 00:25:03.072 [2024-07-12 17:14:02.317895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.317920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 00:25:03.072 [2024-07-12 17:14:02.318039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.318063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 00:25:03.072 [2024-07-12 17:14:02.318203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.318241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 00:25:03.072 [2024-07-12 17:14:02.318358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.318385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 00:25:03.072 [2024-07-12 17:14:02.318512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.318536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 00:25:03.072 [2024-07-12 17:14:02.318684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.318707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 00:25:03.072 [2024-07-12 17:14:02.318818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.318842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 00:25:03.072 [2024-07-12 17:14:02.318972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.318997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 00:25:03.072 [2024-07-12 17:14:02.319109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.319134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 
00:25:03.072 [2024-07-12 17:14:02.319260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.319283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 00:25:03.072 [2024-07-12 17:14:02.319411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.319434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 00:25:03.072 [2024-07-12 17:14:02.319564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.319588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 00:25:03.072 [2024-07-12 17:14:02.319769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.319794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 00:25:03.072 [2024-07-12 17:14:02.319941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.319991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 00:25:03.072 [2024-07-12 17:14:02.320119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.320167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 00:25:03.072 [2024-07-12 17:14:02.320286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.320323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 00:25:03.072 [2024-07-12 17:14:02.320431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.320455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 00:25:03.072 [2024-07-12 17:14:02.320592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.320616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 00:25:03.072 [2024-07-12 17:14:02.320749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.320792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 
00:25:03.072 [2024-07-12 17:14:02.320891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.320914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 00:25:03.072 [2024-07-12 17:14:02.321048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.321072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 00:25:03.072 [2024-07-12 17:14:02.321239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.321275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 00:25:03.072 [2024-07-12 17:14:02.321376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.321399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 00:25:03.072 [2024-07-12 17:14:02.321557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.321581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 00:25:03.072 [2024-07-12 17:14:02.321668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.072 [2024-07-12 17:14:02.321691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.072 qpair failed and we were unable to recover it. 00:25:03.072 [2024-07-12 17:14:02.321804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.073 [2024-07-12 17:14:02.321828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.073 qpair failed and we were unable to recover it. 00:25:03.073 [2024-07-12 17:14:02.321988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.073 [2024-07-12 17:14:02.322012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.073 qpair failed and we were unable to recover it. 00:25:03.073 [2024-07-12 17:14:02.322142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.073 [2024-07-12 17:14:02.322179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.073 qpair failed and we were unable to recover it. 00:25:03.073 [2024-07-12 17:14:02.322292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.073 [2024-07-12 17:14:02.322315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.073 qpair failed and we were unable to recover it. 
00:25:03.073 [2024-07-12 17:14:02.322477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.073 [2024-07-12 17:14:02.322501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.073 qpair failed and we were unable to recover it. 00:25:03.073 [2024-07-12 17:14:02.322608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.073 [2024-07-12 17:14:02.322635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.073 qpair failed and we were unable to recover it. 00:25:03.073 [2024-07-12 17:14:02.322799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.073 [2024-07-12 17:14:02.322824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.073 qpair failed and we were unable to recover it. 00:25:03.073 [2024-07-12 17:14:02.322941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.073 [2024-07-12 17:14:02.322966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.073 qpair failed and we were unable to recover it. 00:25:03.073 [2024-07-12 17:14:02.323091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.073 [2024-07-12 17:14:02.323115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.073 qpair failed and we were unable to recover it. 00:25:03.073 [2024-07-12 17:14:02.323218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.073 [2024-07-12 17:14:02.323241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.073 qpair failed and we were unable to recover it. 00:25:03.073 [2024-07-12 17:14:02.323344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.073 [2024-07-12 17:14:02.323367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.073 qpair failed and we were unable to recover it. 00:25:03.073 [2024-07-12 17:14:02.323462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.073 [2024-07-12 17:14:02.323487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.073 qpair failed and we were unable to recover it. 00:25:03.073 [2024-07-12 17:14:02.323596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.073 [2024-07-12 17:14:02.323620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.073 qpair failed and we were unable to recover it. 00:25:03.073 [2024-07-12 17:14:02.323777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.073 [2024-07-12 17:14:02.323802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.073 qpair failed and we were unable to recover it. 
00:25:03.073 [2024-07-12 17:14:02.323942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.073 [2024-07-12 17:14:02.323966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.073 qpair failed and we were unable to recover it. 00:25:03.073 [2024-07-12 17:14:02.324092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.073 [2024-07-12 17:14:02.324115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.073 qpair failed and we were unable to recover it. 00:25:03.073 [2024-07-12 17:14:02.324293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.073 [2024-07-12 17:14:02.324316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.073 qpair failed and we were unable to recover it. 00:25:03.073 [2024-07-12 17:14:02.324409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.073 [2024-07-12 17:14:02.324432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.073 qpair failed and we were unable to recover it. 00:25:03.073 [2024-07-12 17:14:02.324544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.073 [2024-07-12 17:14:02.324567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.073 qpair failed and we were unable to recover it. 00:25:03.073 [2024-07-12 17:14:02.324698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.073 [2024-07-12 17:14:02.324722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.073 qpair failed and we were unable to recover it. 00:25:03.073 [2024-07-12 17:14:02.324885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.073 [2024-07-12 17:14:02.324924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.073 qpair failed and we were unable to recover it. 00:25:03.073 [2024-07-12 17:14:02.325016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.073 [2024-07-12 17:14:02.325039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.073 qpair failed and we were unable to recover it. 00:25:03.073 [2024-07-12 17:14:02.325145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.073 [2024-07-12 17:14:02.325168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.073 qpair failed and we were unable to recover it. 00:25:03.073 [2024-07-12 17:14:02.325280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.073 [2024-07-12 17:14:02.325303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.073 qpair failed and we were unable to recover it. 
00:25:03.073 [2024-07-12 17:14:02.325407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.073 [2024-07-12 17:14:02.325431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.073 qpair failed and we were unable to recover it. 00:25:03.073 [2024-07-12 17:14:02.325563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.073 [2024-07-12 17:14:02.325586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.073 qpair failed and we were unable to recover it. 00:25:03.073 [2024-07-12 17:14:02.325693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.073 [2024-07-12 17:14:02.325717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.073 qpair failed and we were unable to recover it. 00:25:03.073 [2024-07-12 17:14:02.325855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.073 [2024-07-12 17:14:02.325879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.073 qpair failed and we were unable to recover it. 00:25:03.073 [2024-07-12 17:14:02.326003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.073 [2024-07-12 17:14:02.326041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.073 qpair failed and we were unable to recover it. 00:25:03.073 [2024-07-12 17:14:02.326150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.073 [2024-07-12 17:14:02.326173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.073 qpair failed and we were unable to recover it. 00:25:03.073 [2024-07-12 17:14:02.326302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.073 [2024-07-12 17:14:02.326325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.073 qpair failed and we were unable to recover it. 00:25:03.073 [2024-07-12 17:14:02.326444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.073 [2024-07-12 17:14:02.326467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.073 qpair failed and we were unable to recover it. 00:25:03.073 [2024-07-12 17:14:02.326565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.073 [2024-07-12 17:14:02.326603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.073 qpair failed and we were unable to recover it. 00:25:03.073 [2024-07-12 17:14:02.326695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.073 [2024-07-12 17:14:02.326734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.073 qpair failed and we were unable to recover it. 
00:25:03.073 [2024-07-12 17:14:02.326829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.073 [2024-07-12 17:14:02.326854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.073 qpair failed and we were unable to recover it. 00:25:03.073 [2024-07-12 17:14:02.326956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.073 [2024-07-12 17:14:02.326981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.073 qpair failed and we were unable to recover it. 00:25:03.073 [2024-07-12 17:14:02.327078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.073 [2024-07-12 17:14:02.327102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.073 qpair failed and we were unable to recover it. 00:25:03.073 [2024-07-12 17:14:02.327263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.073 [2024-07-12 17:14:02.327286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.073 qpair failed and we were unable to recover it. 00:25:03.073 [2024-07-12 17:14:02.327417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.073 [2024-07-12 17:14:02.327455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.073 qpair failed and we were unable to recover it. 00:25:03.073 [2024-07-12 17:14:02.327547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.073 [2024-07-12 17:14:02.327570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.073 qpair failed and we were unable to recover it. 00:25:03.073 [2024-07-12 17:14:02.327729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.073 [2024-07-12 17:14:02.327766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.073 qpair failed and we were unable to recover it. 00:25:03.073 [2024-07-12 17:14:02.327915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.073 [2024-07-12 17:14:02.327939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.073 qpair failed and we were unable to recover it. 00:25:03.073 [2024-07-12 17:14:02.328054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.073 [2024-07-12 17:14:02.328078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.073 qpair failed and we were unable to recover it. 00:25:03.073 [2024-07-12 17:14:02.328217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.073 [2024-07-12 17:14:02.328254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.073 qpair failed and we were unable to recover it. 
00:25:03.073 [2024-07-12 17:14:02.328386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.073 [2024-07-12 17:14:02.328409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.073 qpair failed and we were unable to recover it. 00:25:03.073 [2024-07-12 17:14:02.328520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.073 [2024-07-12 17:14:02.328543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.073 qpair failed and we were unable to recover it. 00:25:03.073 [2024-07-12 17:14:02.328690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.073 [2024-07-12 17:14:02.328713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.073 qpair failed and we were unable to recover it. 00:25:03.073 [2024-07-12 17:14:02.328852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.073 [2024-07-12 17:14:02.328891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.073 qpair failed and we were unable to recover it. 00:25:03.073 [2024-07-12 17:14:02.328980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.073 [2024-07-12 17:14:02.329003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.073 qpair failed and we were unable to recover it. 00:25:03.073 [2024-07-12 17:14:02.329131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.073 [2024-07-12 17:14:02.329154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.073 qpair failed and we were unable to recover it. 00:25:03.073 [2024-07-12 17:14:02.329291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.073 [2024-07-12 17:14:02.329315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.073 qpair failed and we were unable to recover it. 00:25:03.073 [2024-07-12 17:14:02.329481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.073 [2024-07-12 17:14:02.329505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.073 qpair failed and we were unable to recover it. 00:25:03.073 [2024-07-12 17:14:02.329638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.073 [2024-07-12 17:14:02.329676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.073 qpair failed and we were unable to recover it. 00:25:03.074 [2024-07-12 17:14:02.329819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.329844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 
00:25:03.074 [2024-07-12 17:14:02.329981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.330005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 00:25:03.074 [2024-07-12 17:14:02.330162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.330185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 00:25:03.074 [2024-07-12 17:14:02.330361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.330410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 00:25:03.074 [2024-07-12 17:14:02.330549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.330572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 00:25:03.074 [2024-07-12 17:14:02.330701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.330748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 00:25:03.074 [2024-07-12 17:14:02.330862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.330918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 00:25:03.074 [2024-07-12 17:14:02.331047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.331105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 00:25:03.074 [2024-07-12 17:14:02.331237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.331259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 00:25:03.074 [2024-07-12 17:14:02.331445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.331481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 00:25:03.074 [2024-07-12 17:14:02.331644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.331667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 
00:25:03.074 [2024-07-12 17:14:02.331776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.331800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 00:25:03.074 [2024-07-12 17:14:02.331936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.331960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 00:25:03.074 [2024-07-12 17:14:02.332117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.332174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 00:25:03.074 [2024-07-12 17:14:02.332301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.332364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 00:25:03.074 [2024-07-12 17:14:02.332536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.332565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 00:25:03.074 [2024-07-12 17:14:02.332745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.332769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 00:25:03.074 [2024-07-12 17:14:02.332924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.332972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 00:25:03.074 [2024-07-12 17:14:02.333102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.333163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 00:25:03.074 [2024-07-12 17:14:02.333369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.333419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 00:25:03.074 [2024-07-12 17:14:02.333553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.333579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 
00:25:03.074 [2024-07-12 17:14:02.333706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.333730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 00:25:03.074 [2024-07-12 17:14:02.333845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.333868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 00:25:03.074 [2024-07-12 17:14:02.334007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.334061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 00:25:03.074 [2024-07-12 17:14:02.334231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.334255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 00:25:03.074 [2024-07-12 17:14:02.334385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.334409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 00:25:03.074 [2024-07-12 17:14:02.334534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.334558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 00:25:03.074 [2024-07-12 17:14:02.334671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.334695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 00:25:03.074 [2024-07-12 17:14:02.334831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.334857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 00:25:03.074 [2024-07-12 17:14:02.334965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.334990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 00:25:03.074 [2024-07-12 17:14:02.335126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.335163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 
00:25:03.074 [2024-07-12 17:14:02.335276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.335299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 00:25:03.074 [2024-07-12 17:14:02.335467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.335490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 00:25:03.074 [2024-07-12 17:14:02.335606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.335629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 00:25:03.074 [2024-07-12 17:14:02.335754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.335779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 00:25:03.074 [2024-07-12 17:14:02.335869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.335892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 00:25:03.074 [2024-07-12 17:14:02.336035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.336058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 00:25:03.074 [2024-07-12 17:14:02.336183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.336206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 00:25:03.074 [2024-07-12 17:14:02.336311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.336335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 00:25:03.074 [2024-07-12 17:14:02.336475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.336498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 00:25:03.074 [2024-07-12 17:14:02.336638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.336661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 
00:25:03.074 [2024-07-12 17:14:02.336788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.336812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 00:25:03.074 [2024-07-12 17:14:02.336966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.336990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 00:25:03.074 [2024-07-12 17:14:02.337132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.337189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 00:25:03.074 [2024-07-12 17:14:02.337353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.337376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 00:25:03.074 [2024-07-12 17:14:02.337502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.337526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 00:25:03.074 [2024-07-12 17:14:02.337660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.337683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 00:25:03.074 [2024-07-12 17:14:02.337831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.337859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 00:25:03.074 [2024-07-12 17:14:02.337986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.338011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 00:25:03.074 [2024-07-12 17:14:02.338114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.338138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 00:25:03.074 [2024-07-12 17:14:02.338266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.338289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 
00:25:03.074 [2024-07-12 17:14:02.338453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.338491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 00:25:03.074 [2024-07-12 17:14:02.338584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.338608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 00:25:03.074 [2024-07-12 17:14:02.338770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.338795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 00:25:03.074 [2024-07-12 17:14:02.338907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.338931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 00:25:03.074 [2024-07-12 17:14:02.339098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.339122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 00:25:03.074 [2024-07-12 17:14:02.339246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.339284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 00:25:03.074 [2024-07-12 17:14:02.339392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.074 [2024-07-12 17:14:02.339428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.074 qpair failed and we were unable to recover it. 00:25:03.075 [2024-07-12 17:14:02.339543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.339568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 00:25:03.075 [2024-07-12 17:14:02.339703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.339729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 00:25:03.075 [2024-07-12 17:14:02.339916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.339982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 
00:25:03.075 [2024-07-12 17:14:02.340194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.340259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 00:25:03.075 [2024-07-12 17:14:02.340461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.340526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 00:25:03.075 [2024-07-12 17:14:02.340716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.340745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 00:25:03.075 [2024-07-12 17:14:02.340877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.340902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 00:25:03.075 [2024-07-12 17:14:02.341061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.341126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 00:25:03.075 [2024-07-12 17:14:02.341360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.341425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 00:25:03.075 [2024-07-12 17:14:02.341594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.341660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 00:25:03.075 [2024-07-12 17:14:02.341849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.341874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 00:25:03.075 [2024-07-12 17:14:02.342013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.342052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 00:25:03.075 [2024-07-12 17:14:02.342209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.342275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 
00:25:03.075 [2024-07-12 17:14:02.342472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.342536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 00:25:03.075 [2024-07-12 17:14:02.342716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.342761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 00:25:03.075 [2024-07-12 17:14:02.342919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.342945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 00:25:03.075 [2024-07-12 17:14:02.343068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.343143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 00:25:03.075 [2024-07-12 17:14:02.343317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.343384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 00:25:03.075 [2024-07-12 17:14:02.343581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.343645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 00:25:03.075 [2024-07-12 17:14:02.343858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.343884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 00:25:03.075 [2024-07-12 17:14:02.343971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.343997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 00:25:03.075 [2024-07-12 17:14:02.344096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.344120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 00:25:03.075 [2024-07-12 17:14:02.344293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.344358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 
00:25:03.075 [2024-07-12 17:14:02.344588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.344654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 00:25:03.075 [2024-07-12 17:14:02.344853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.344878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 00:25:03.075 [2024-07-12 17:14:02.345051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.345116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 00:25:03.075 [2024-07-12 17:14:02.345349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.345413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 00:25:03.075 [2024-07-12 17:14:02.345624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.345689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 00:25:03.075 [2024-07-12 17:14:02.345902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.345927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 00:25:03.075 [2024-07-12 17:14:02.346016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.346041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 00:25:03.075 [2024-07-12 17:14:02.346209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.346285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 00:25:03.075 [2024-07-12 17:14:02.346482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.346547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 00:25:03.075 [2024-07-12 17:14:02.346764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.346790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 
00:25:03.075 [2024-07-12 17:14:02.346951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.346977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 00:25:03.075 [2024-07-12 17:14:02.347123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.347188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 00:25:03.075 [2024-07-12 17:14:02.347358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.347420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 00:25:03.075 [2024-07-12 17:14:02.347639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.347704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 00:25:03.075 [2024-07-12 17:14:02.347914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.347939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 00:25:03.075 [2024-07-12 17:14:02.348033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.348058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 00:25:03.075 [2024-07-12 17:14:02.348169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.348193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 00:25:03.075 [2024-07-12 17:14:02.348322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.348386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 00:25:03.075 [2024-07-12 17:14:02.348626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.348691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 00:25:03.075 [2024-07-12 17:14:02.348942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.348967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 
00:25:03.075 [2024-07-12 17:14:02.349111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.349174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 00:25:03.075 [2024-07-12 17:14:02.349418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.349483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 00:25:03.075 [2024-07-12 17:14:02.349709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.349805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 00:25:03.075 [2024-07-12 17:14:02.349950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.349976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 00:25:03.075 [2024-07-12 17:14:02.350125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.350164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 00:25:03.075 [2024-07-12 17:14:02.350322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.350388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 00:25:03.075 [2024-07-12 17:14:02.350653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.350717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 00:25:03.075 [2024-07-12 17:14:02.350942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.350967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 00:25:03.075 [2024-07-12 17:14:02.351190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.351225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 00:25:03.075 [2024-07-12 17:14:02.351389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.351455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 
00:25:03.075 [2024-07-12 17:14:02.351683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.351765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 00:25:03.075 [2024-07-12 17:14:02.351925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.351959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 00:25:03.075 [2024-07-12 17:14:02.352062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.352085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 00:25:03.075 [2024-07-12 17:14:02.352180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.075 [2024-07-12 17:14:02.352207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.075 qpair failed and we were unable to recover it. 00:25:03.075 [2024-07-12 17:14:02.352343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.352395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 00:25:03.076 [2024-07-12 17:14:02.352680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.352761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 00:25:03.076 [2024-07-12 17:14:02.352914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.352938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 00:25:03.076 [2024-07-12 17:14:02.353115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.353179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 00:25:03.076 [2024-07-12 17:14:02.353413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.353478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 00:25:03.076 [2024-07-12 17:14:02.353689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.353713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 
00:25:03.076 [2024-07-12 17:14:02.353882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.353906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 00:25:03.076 [2024-07-12 17:14:02.354110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.354181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 00:25:03.076 [2024-07-12 17:14:02.354402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.354466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 00:25:03.076 [2024-07-12 17:14:02.354677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.354709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 00:25:03.076 [2024-07-12 17:14:02.354876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.354901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 00:25:03.076 [2024-07-12 17:14:02.355011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.355096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 00:25:03.076 [2024-07-12 17:14:02.355365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.355429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 00:25:03.076 [2024-07-12 17:14:02.355654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.355719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 00:25:03.076 [2024-07-12 17:14:02.355934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.355958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 00:25:03.076 [2024-07-12 17:14:02.356201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.356268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 
00:25:03.076 [2024-07-12 17:14:02.356456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.356521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 00:25:03.076 [2024-07-12 17:14:02.356774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.356819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 00:25:03.076 [2024-07-12 17:14:02.356998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.357042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 00:25:03.076 [2024-07-12 17:14:02.357212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.357287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 00:25:03.076 [2024-07-12 17:14:02.357537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.357601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 00:25:03.076 [2024-07-12 17:14:02.357915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.357961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 00:25:03.076 [2024-07-12 17:14:02.358132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.358188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 00:25:03.076 [2024-07-12 17:14:02.358405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.358481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 00:25:03.076 [2024-07-12 17:14:02.358780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.358845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 00:25:03.076 [2024-07-12 17:14:02.359062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.359119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 
00:25:03.076 [2024-07-12 17:14:02.359258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.359314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 00:25:03.076 [2024-07-12 17:14:02.359536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.359601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 00:25:03.076 [2024-07-12 17:14:02.359843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.359910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 00:25:03.076 [2024-07-12 17:14:02.360148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.360194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 00:25:03.076 [2024-07-12 17:14:02.360363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.360418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 00:25:03.076 [2024-07-12 17:14:02.360654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.360717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 00:25:03.076 [2024-07-12 17:14:02.360961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.361007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 00:25:03.076 [2024-07-12 17:14:02.361190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.361239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 00:25:03.076 [2024-07-12 17:14:02.361437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.361485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 00:25:03.076 [2024-07-12 17:14:02.361700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.361814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 
00:25:03.076 [2024-07-12 17:14:02.361983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.362032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 00:25:03.076 [2024-07-12 17:14:02.362263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.362312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 00:25:03.076 [2024-07-12 17:14:02.362489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.362537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 00:25:03.076 [2024-07-12 17:14:02.362714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.362828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 00:25:03.076 [2024-07-12 17:14:02.363021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.363081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 00:25:03.076 [2024-07-12 17:14:02.363308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.363365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 00:25:03.076 [2024-07-12 17:14:02.363551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.363599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 00:25:03.076 [2024-07-12 17:14:02.363811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.363862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 00:25:03.076 [2024-07-12 17:14:02.364063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.364139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 00:25:03.076 [2024-07-12 17:14:02.364469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.364532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 
00:25:03.076 [2024-07-12 17:14:02.364796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.364849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 00:25:03.076 [2024-07-12 17:14:02.365048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.365112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 00:25:03.076 [2024-07-12 17:14:02.365309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.365373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 00:25:03.076 [2024-07-12 17:14:02.365571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.365633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 00:25:03.076 [2024-07-12 17:14:02.365907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.365941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 00:25:03.076 [2024-07-12 17:14:02.366099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.366133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 00:25:03.076 [2024-07-12 17:14:02.366293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.366328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 00:25:03.076 [2024-07-12 17:14:02.366532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.076 [2024-07-12 17:14:02.366567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.076 qpair failed and we were unable to recover it. 00:25:03.077 [2024-07-12 17:14:02.366786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.077 [2024-07-12 17:14:02.366862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.077 qpair failed and we were unable to recover it. 00:25:03.077 [2024-07-12 17:14:02.367015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.077 [2024-07-12 17:14:02.367075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.077 qpair failed and we were unable to recover it. 
00:25:03.077 [2024-07-12 17:14:02.367263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.077 [2024-07-12 17:14:02.367328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.077 qpair failed and we were unable to recover it. 00:25:03.077 [2024-07-12 17:14:02.367675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.077 [2024-07-12 17:14:02.367777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.077 qpair failed and we were unable to recover it. 00:25:03.077 [2024-07-12 17:14:02.367957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.077 [2024-07-12 17:14:02.368008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.077 qpair failed and we were unable to recover it. 00:25:03.077 [2024-07-12 17:14:02.368248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.077 [2024-07-12 17:14:02.368300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.077 qpair failed and we were unable to recover it. 00:25:03.077 [2024-07-12 17:14:02.368472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.077 [2024-07-12 17:14:02.368535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.077 qpair failed and we were unable to recover it. 00:25:03.077 [2024-07-12 17:14:02.368717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.077 [2024-07-12 17:14:02.368795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.077 qpair failed and we were unable to recover it. 00:25:03.077 [2024-07-12 17:14:02.368963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.077 [2024-07-12 17:14:02.368997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.077 qpair failed and we were unable to recover it. 00:25:03.077 [2024-07-12 17:14:02.369141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.077 [2024-07-12 17:14:02.369176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.077 qpair failed and we were unable to recover it. 00:25:03.077 [2024-07-12 17:14:02.369364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.077 [2024-07-12 17:14:02.369409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.077 qpair failed and we were unable to recover it. 00:25:03.077 [2024-07-12 17:14:02.369635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.077 [2024-07-12 17:14:02.369673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.077 qpair failed and we were unable to recover it. 
00:25:03.077 [2024-07-12 17:14:02.369823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.077 [2024-07-12 17:14:02.369858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.077 qpair failed and we were unable to recover it. 00:25:03.077 [2024-07-12 17:14:02.369996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.077 [2024-07-12 17:14:02.370030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.077 qpair failed and we were unable to recover it. 00:25:03.077 [2024-07-12 17:14:02.370218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.077 [2024-07-12 17:14:02.370252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.077 qpair failed and we were unable to recover it. 00:25:03.077 [2024-07-12 17:14:02.370400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.077 [2024-07-12 17:14:02.370436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.077 qpair failed and we were unable to recover it. 00:25:03.077 [2024-07-12 17:14:02.370666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.077 [2024-07-12 17:14:02.370700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.077 qpair failed and we were unable to recover it. 00:25:03.077 [2024-07-12 17:14:02.370823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.077 [2024-07-12 17:14:02.370858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.077 qpair failed and we were unable to recover it. 00:25:03.077 [2024-07-12 17:14:02.370969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.077 [2024-07-12 17:14:02.371004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.077 qpair failed and we were unable to recover it. 00:25:03.077 [2024-07-12 17:14:02.371204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.077 [2024-07-12 17:14:02.371244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.077 qpair failed and we were unable to recover it. 00:25:03.077 [2024-07-12 17:14:02.371404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.077 [2024-07-12 17:14:02.371439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.077 qpair failed and we were unable to recover it. 00:25:03.077 [2024-07-12 17:14:02.371590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.077 [2024-07-12 17:14:02.371624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.077 qpair failed and we were unable to recover it. 
00:25:03.077 [2024-07-12 17:14:02.371804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.077 [2024-07-12 17:14:02.371840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420
00:25:03.077 qpair failed and we were unable to recover it.
00:25:03.077 [ ... the same three-line failure sequence (posix.c:1038:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnect attempt between 17:14:02.371 and 17:14:02.405 (console time 00:25:03.077 - 00:25:03.080) ... ]
00:25:03.080 [2024-07-12 17:14:02.405679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.080 [2024-07-12 17:14:02.405719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420
00:25:03.080 qpair failed and we were unable to recover it.
00:25:03.080 [2024-07-12 17:14:02.405848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.080 [2024-07-12 17:14:02.405874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.080 qpair failed and we were unable to recover it. 00:25:03.080 [2024-07-12 17:14:02.405996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.080 [2024-07-12 17:14:02.406022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.080 qpair failed and we were unable to recover it. 00:25:03.080 [2024-07-12 17:14:02.406117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.080 [2024-07-12 17:14:02.406158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.080 qpair failed and we were unable to recover it. 00:25:03.080 [2024-07-12 17:14:02.406321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.080 [2024-07-12 17:14:02.406345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.080 qpair failed and we were unable to recover it. 00:25:03.080 [2024-07-12 17:14:02.406451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.080 [2024-07-12 17:14:02.406476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.080 qpair failed and we were unable to recover it. 00:25:03.080 [2024-07-12 17:14:02.406606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.080 [2024-07-12 17:14:02.406645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.080 qpair failed and we were unable to recover it. 00:25:03.080 [2024-07-12 17:14:02.406734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.080 [2024-07-12 17:14:02.406768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.080 qpair failed and we were unable to recover it. 00:25:03.080 [2024-07-12 17:14:02.406893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.080 [2024-07-12 17:14:02.406920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.080 qpair failed and we were unable to recover it. 00:25:03.080 [2024-07-12 17:14:02.407068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.080 [2024-07-12 17:14:02.407093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.080 qpair failed and we were unable to recover it. 00:25:03.080 [2024-07-12 17:14:02.407253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.407278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 
00:25:03.081 [2024-07-12 17:14:02.407381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.407421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 00:25:03.081 [2024-07-12 17:14:02.407509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.407533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 00:25:03.081 [2024-07-12 17:14:02.407642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.407667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 00:25:03.081 [2024-07-12 17:14:02.407797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.407825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 00:25:03.081 [2024-07-12 17:14:02.407964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.407989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 00:25:03.081 [2024-07-12 17:14:02.408123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.408147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 00:25:03.081 [2024-07-12 17:14:02.408312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.408336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 00:25:03.081 [2024-07-12 17:14:02.408448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.408473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 00:25:03.081 [2024-07-12 17:14:02.408618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.408644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 00:25:03.081 [2024-07-12 17:14:02.408761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.408788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 
00:25:03.081 [2024-07-12 17:14:02.408883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.408909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 00:25:03.081 [2024-07-12 17:14:02.409026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.409053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 00:25:03.081 [2024-07-12 17:14:02.409150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.409175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 00:25:03.081 [2024-07-12 17:14:02.409298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.409323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 00:25:03.081 [2024-07-12 17:14:02.409459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.409485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 00:25:03.081 [2024-07-12 17:14:02.409605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.409631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 00:25:03.081 [2024-07-12 17:14:02.409755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.409797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 00:25:03.081 [2024-07-12 17:14:02.409891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.409932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 00:25:03.081 [2024-07-12 17:14:02.410080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.410106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 00:25:03.081 [2024-07-12 17:14:02.410211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.410238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 
00:25:03.081 [2024-07-12 17:14:02.410361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.410386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 00:25:03.081 [2024-07-12 17:14:02.410489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.410513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 00:25:03.081 [2024-07-12 17:14:02.410649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.410675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 00:25:03.081 [2024-07-12 17:14:02.410789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.410819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 00:25:03.081 [2024-07-12 17:14:02.410975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.411000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 00:25:03.081 [2024-07-12 17:14:02.411132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.411157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 00:25:03.081 [2024-07-12 17:14:02.411257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.411283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 00:25:03.081 [2024-07-12 17:14:02.411414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.411441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 00:25:03.081 [2024-07-12 17:14:02.411577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.411602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 00:25:03.081 [2024-07-12 17:14:02.411754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.411780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 
00:25:03.081 [2024-07-12 17:14:02.411919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.411945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 00:25:03.081 [2024-07-12 17:14:02.412099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.412140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 00:25:03.081 [2024-07-12 17:14:02.412300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.412338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 00:25:03.081 [2024-07-12 17:14:02.412428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.412453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 00:25:03.081 [2024-07-12 17:14:02.412592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.412617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 00:25:03.081 [2024-07-12 17:14:02.412803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.412831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 00:25:03.081 [2024-07-12 17:14:02.412950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.412976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 00:25:03.081 [2024-07-12 17:14:02.413091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.413115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 00:25:03.081 [2024-07-12 17:14:02.413215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.413253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 00:25:03.081 [2024-07-12 17:14:02.413395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.413439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 
00:25:03.081 [2024-07-12 17:14:02.413592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.413637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 00:25:03.081 [2024-07-12 17:14:02.413800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.413827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 00:25:03.081 [2024-07-12 17:14:02.413921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.413963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 00:25:03.081 [2024-07-12 17:14:02.414105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.414150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 00:25:03.081 [2024-07-12 17:14:02.414308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.414353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 00:25:03.081 [2024-07-12 17:14:02.414541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.414587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 00:25:03.081 [2024-07-12 17:14:02.414756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.414805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 00:25:03.081 [2024-07-12 17:14:02.414946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.414986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 00:25:03.081 [2024-07-12 17:14:02.415108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.415154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 00:25:03.081 [2024-07-12 17:14:02.415293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.415339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 
00:25:03.081 [2024-07-12 17:14:02.415508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.415553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 00:25:03.081 [2024-07-12 17:14:02.415749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.415801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 00:25:03.081 [2024-07-12 17:14:02.415916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.415943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 00:25:03.081 [2024-07-12 17:14:02.416077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.416116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 00:25:03.081 [2024-07-12 17:14:02.416256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.416282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 00:25:03.081 [2024-07-12 17:14:02.416443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.081 [2024-07-12 17:14:02.416486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.081 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.416608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.416651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.416808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.416836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.416924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.416950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.417091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.417117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 
00:25:03.082 [2024-07-12 17:14:02.417248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.417291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.417448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.417492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.417644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.417687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.417884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.417915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.418070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.418094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.418184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.418208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.418323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.418348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.418453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.418496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.418679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.418722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.418857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.418882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 
00:25:03.082 [2024-07-12 17:14:02.419041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.419065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.419214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.419257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.419437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.419480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.419617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.419661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.419828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.419855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.419945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.419986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.420108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.420132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.420327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.420352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.420543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.420586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.420712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.420768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 
00:25:03.082 [2024-07-12 17:14:02.420935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.420961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.421052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.421078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.421213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.421238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.421403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.421447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.421635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.421678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.421858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.421884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.421998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.422025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.422178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.422221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.422350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.422394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.422549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.422593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 
00:25:03.082 [2024-07-12 17:14:02.422767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.422816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.422945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.422986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.423110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.423135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.423346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.423371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.423547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.423591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.423758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.423806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.423965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.423990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.424095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.424120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.424226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.424252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.424376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.424401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 
00:25:03.082 [2024-07-12 17:14:02.424541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.424584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.424757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.424803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.424898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.424924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.425047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.425077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.425221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.425265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.425403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.425446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.425605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.425648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.425803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.425830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.426002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.426042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.426172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.426196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 
00:25:03.082 [2024-07-12 17:14:02.426313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.426357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.426539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.426582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.426732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.426774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.426909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.426949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.427069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.427127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.427283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.427327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.427455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.427498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.427637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.427680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.427861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.082 [2024-07-12 17:14:02.427906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.082 qpair failed and we were unable to recover it. 00:25:03.082 [2024-07-12 17:14:02.428062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.428106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 
00:25:03.083 [2024-07-12 17:14:02.428240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.428283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.428459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.428502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.428652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.428696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.428862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.428907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.429087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.429130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.429259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.429303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.429433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.429476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.429626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.429669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.429870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.429914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.430070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.430113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 
00:25:03.083 [2024-07-12 17:14:02.430276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.430319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.430456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.430483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.430659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.430684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.430833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.430859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.430994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.431019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.431158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.431201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.431322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.431365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.431501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.431544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.431692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.431735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.431935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.431978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 
00:25:03.083 [2024-07-12 17:14:02.432135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.432178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.432344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.432387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.432537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.432580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.432784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.432836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.432968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.433012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.433137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.433180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.433300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.433344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.433498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.433542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.433668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.433711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.433916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.433944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 
00:25:03.083 [2024-07-12 17:14:02.434089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.434116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.434206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.434234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.434328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.434355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.434441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.434468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.434566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.434593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.434727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.434774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.434896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.434923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.435029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.435056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.435180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.435207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.435369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.435412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 
00:25:03.083 [2024-07-12 17:14:02.435592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.435635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.435797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.435840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.436004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.436047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.436206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.436249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.436375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.436418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.436549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.436591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.436763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.436807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.436939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.436982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.437145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.437188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.437396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.437437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 
00:25:03.083 [2024-07-12 17:14:02.437587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.437634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.437784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.437827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.437942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.437984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.438130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.438171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.438323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.438364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.438533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.438573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.438729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.438767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.438871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.438901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.439056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.439085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.439211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.439268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 
00:25:03.083 [2024-07-12 17:14:02.439419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.439460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.439671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.439712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.439877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.439909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.440040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.440071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.440207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-07-12 17:14:02.440260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-07-12 17:14:02.440446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.440487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.440661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.440701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.440853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.440885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.441043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.441090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.441206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.441246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 
00:25:03.084 [2024-07-12 17:14:02.441400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.441441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.441567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.441608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.441831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.441864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.441970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.442001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.442141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.442182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.442398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.442438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.442614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.442655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.442816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.442848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.442958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.442989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.443189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.443230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 
00:25:03.084 [2024-07-12 17:14:02.443381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.443421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.443571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.443612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.443762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.443813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.443976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.444008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.444154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.444195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.444311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.444351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.444506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.444547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.444698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.444750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.444931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.444962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.445112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.445153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 
00:25:03.084 [2024-07-12 17:14:02.445268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.445315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.445444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.445485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.445632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.445673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.445821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.445854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.445981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.446013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.446176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.446217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.446351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.446392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.446535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.446576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.446725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.446778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.446923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.446954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 
00:25:03.084 [2024-07-12 17:14:02.447053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.447085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.447270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.447310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.447441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.447491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.447643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.447683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.447850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.447883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.448051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.448083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.448190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.448221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.448318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.448350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.448475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.448506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.448643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.448675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 
00:25:03.084 [2024-07-12 17:14:02.448806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.448839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.448970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.449001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.449236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.449267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.449369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.449401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.449548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.449588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.449766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.449816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.449947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.449979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.450136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.450171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.450310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.450345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.450449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.450485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 
00:25:03.084 [2024-07-12 17:14:02.450658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.450693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.450830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.450863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.451055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.451089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.451246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.451281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.451434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.451475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.451602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.451643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.451800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-07-12 17:14:02.451832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-07-12 17:14:02.451969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-07-12 17:14:02.452001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-07-12 17:14:02.452112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-07-12 17:14:02.452143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-07-12 17:14:02.452308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-07-12 17:14:02.452349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 
00:25:03.085 [2024-07-12 17:14:02.452498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-07-12 17:14:02.452546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-07-12 17:14:02.452660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-07-12 17:14:02.452702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-07-12 17:14:02.452872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-07-12 17:14:02.452904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-07-12 17:14:02.453052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-07-12 17:14:02.453093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-07-12 17:14:02.453209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-07-12 17:14:02.453250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-07-12 17:14:02.453377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-07-12 17:14:02.453419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-07-12 17:14:02.453570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-07-12 17:14:02.453611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-07-12 17:14:02.453798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-07-12 17:14:02.453831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-07-12 17:14:02.453986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-07-12 17:14:02.454017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-07-12 17:14:02.454178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-07-12 17:14:02.454219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 
00:25:03.085 [2024-07-12 17:14:02.454338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-07-12 17:14:02.454379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-07-12 17:14:02.454529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-07-12 17:14:02.454571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-07-12 17:14:02.454720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-07-12 17:14:02.454770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-07-12 17:14:02.454909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-07-12 17:14:02.454940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-07-12 17:14:02.455076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-07-12 17:14:02.455129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-07-12 17:14:02.455280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-07-12 17:14:02.455322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-07-12 17:14:02.455469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-07-12 17:14:02.455511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-07-12 17:14:02.455669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-07-12 17:14:02.455710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-07-12 17:14:02.455900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-07-12 17:14:02.455932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-07-12 17:14:02.456099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-07-12 17:14:02.456139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 
00:25:03.085 [2024-07-12 17:14:02.456321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-07-12 17:14:02.456362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-07-12 17:14:02.456480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-07-12 17:14:02.456520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-07-12 17:14:02.456644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-07-12 17:14:02.456685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-07-12 17:14:02.456879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-07-12 17:14:02.456911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-07-12 17:14:02.457056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-07-12 17:14:02.457097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-07-12 17:14:02.457273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-07-12 17:14:02.457314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-07-12 17:14:02.457461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-07-12 17:14:02.457503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-07-12 17:14:02.457684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-07-12 17:14:02.457725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-07-12 17:14:02.457894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-07-12 17:14:02.457926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-07-12 17:14:02.458059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-07-12 17:14:02.458091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 
00:25:03.085 [2024-07-12 17:14:02.458213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-07-12 17:14:02.458261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-07-12 17:14:02.458410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-07-12 17:14:02.458451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-07-12 17:14:02.458571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-07-12 17:14:02.458612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-07-12 17:14:02.458828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-07-12 17:14:02.458860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-07-12 17:14:02.458970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-07-12 17:14:02.459002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-07-12 17:14:02.459135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-07-12 17:14:02.459176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-07-12 17:14:02.459319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-07-12 17:14:02.459360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-07-12 17:14:02.459536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-07-12 17:14:02.459577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-07-12 17:14:02.459695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-07-12 17:14:02.459746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-07-12 17:14:02.459895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-07-12 17:14:02.459926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 
00:25:03.085 [2024-07-12 17:14:02.460054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-07-12 17:14:02.460101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-07-12 17:14:02.460241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-07-12 17:14:02.460281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-07-12 17:14:02.460431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-07-12 17:14:02.460472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.086 qpair failed and we were unable to recover it. 00:25:03.086 [2024-07-12 17:14:02.460620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.086 [2024-07-12 17:14:02.460661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.086 qpair failed and we were unable to recover it. 00:25:03.086 [2024-07-12 17:14:02.460822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.086 [2024-07-12 17:14:02.460854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.086 qpair failed and we were unable to recover it. 00:25:03.086 [2024-07-12 17:14:02.460956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.086 [2024-07-12 17:14:02.460988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.086 qpair failed and we were unable to recover it. 00:25:03.086 [2024-07-12 17:14:02.461122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.086 [2024-07-12 17:14:02.461154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.086 qpair failed and we were unable to recover it. 00:25:03.086 [2024-07-12 17:14:02.461315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.086 [2024-07-12 17:14:02.461356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.086 qpair failed and we were unable to recover it. 00:25:03.086 [2024-07-12 17:14:02.461482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.086 [2024-07-12 17:14:02.461523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.086 qpair failed and we were unable to recover it. 00:25:03.086 [2024-07-12 17:14:02.461697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.086 [2024-07-12 17:14:02.461758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.086 qpair failed and we were unable to recover it. 
00:25:03.086 [2024-07-12 17:14:02.461920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.086 [2024-07-12 17:14:02.461952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.086 qpair failed and we were unable to recover it. 00:25:03.086 [2024-07-12 17:14:02.462096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.086 [2024-07-12 17:14:02.462137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.086 qpair failed and we were unable to recover it. 00:25:03.086 [2024-07-12 17:14:02.462308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.086 [2024-07-12 17:14:02.462350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.086 qpair failed and we were unable to recover it. 00:25:03.086 [2024-07-12 17:14:02.462495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.086 [2024-07-12 17:14:02.462536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.086 qpair failed and we were unable to recover it. 00:25:03.086 [2024-07-12 17:14:02.462669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.086 [2024-07-12 17:14:02.462711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.086 qpair failed and we were unable to recover it. 00:25:03.086 [2024-07-12 17:14:02.462940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.086 [2024-07-12 17:14:02.462972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.086 qpair failed and we were unable to recover it. 00:25:03.086 [2024-07-12 17:14:02.463100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.086 [2024-07-12 17:14:02.463141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.086 qpair failed and we were unable to recover it. 00:25:03.086 [2024-07-12 17:14:02.463318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.086 [2024-07-12 17:14:02.463359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.086 qpair failed and we were unable to recover it. 00:25:03.086 [2024-07-12 17:14:02.463485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.086 [2024-07-12 17:14:02.463526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.086 qpair failed and we were unable to recover it. 00:25:03.086 [2024-07-12 17:14:02.463662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.086 [2024-07-12 17:14:02.463704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.086 qpair failed and we were unable to recover it. 
00:25:03.086 [2024-07-12 17:14:02.463854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.086 [2024-07-12 17:14:02.463886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.086 qpair failed and we were unable to recover it. 00:25:03.086 [2024-07-12 17:14:02.464013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.086 [2024-07-12 17:14:02.464044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.086 qpair failed and we were unable to recover it. 00:25:03.086 [2024-07-12 17:14:02.464249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.086 [2024-07-12 17:14:02.464290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.086 qpair failed and we were unable to recover it. 00:25:03.086 [2024-07-12 17:14:02.464418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.086 [2024-07-12 17:14:02.464459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.086 qpair failed and we were unable to recover it. 00:25:03.086 [2024-07-12 17:14:02.464590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.086 [2024-07-12 17:14:02.464632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.086 qpair failed and we were unable to recover it. 00:25:03.086 [2024-07-12 17:14:02.464820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.086 [2024-07-12 17:14:02.464853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.086 qpair failed and we were unable to recover it. 00:25:03.086 [2024-07-12 17:14:02.464981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.086 [2024-07-12 17:14:02.465013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.086 qpair failed and we were unable to recover it. 00:25:03.086 [2024-07-12 17:14:02.465187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.086 [2024-07-12 17:14:02.465228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.086 qpair failed and we were unable to recover it. 00:25:03.086 [2024-07-12 17:14:02.465371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.086 [2024-07-12 17:14:02.465412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.086 qpair failed and we were unable to recover it. 00:25:03.086 [2024-07-12 17:14:02.465565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.086 [2024-07-12 17:14:02.465606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.086 qpair failed and we were unable to recover it. 
00:25:03.086 [2024-07-12 17:14:02.465795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.086 [2024-07-12 17:14:02.465828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.086 qpair failed and we were unable to recover it. 00:25:03.086 [2024-07-12 17:14:02.465928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.086 [2024-07-12 17:14:02.465960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.086 qpair failed and we were unable to recover it. 00:25:03.086 [2024-07-12 17:14:02.466091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.086 [2024-07-12 17:14:02.466133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.086 qpair failed and we were unable to recover it. 00:25:03.086 [2024-07-12 17:14:02.466339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.086 [2024-07-12 17:14:02.466381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.086 qpair failed and we were unable to recover it. 00:25:03.086 [2024-07-12 17:14:02.466531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.086 [2024-07-12 17:14:02.466572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.086 qpair failed and we were unable to recover it. 00:25:03.086 [2024-07-12 17:14:02.466725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.086 [2024-07-12 17:14:02.466789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.086 qpair failed and we were unable to recover it. 00:25:03.086 [2024-07-12 17:14:02.466922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.086 [2024-07-12 17:14:02.466954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.086 qpair failed and we were unable to recover it. 00:25:03.086 [2024-07-12 17:14:02.467116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.086 [2024-07-12 17:14:02.467147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.086 qpair failed and we were unable to recover it. 00:25:03.086 [2024-07-12 17:14:02.467289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.086 [2024-07-12 17:14:02.467330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.086 qpair failed and we were unable to recover it. 00:25:03.086 [2024-07-12 17:14:02.467505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.086 [2024-07-12 17:14:02.467546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.086 qpair failed and we were unable to recover it. 
00:25:03.086 [2024-07-12 17:14:02.467730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.086 [2024-07-12 17:14:02.467800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.086 qpair failed and we were unable to recover it. 00:25:03.086 [2024-07-12 17:14:02.467961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.086 [2024-07-12 17:14:02.467992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.086 qpair failed and we were unable to recover it. 00:25:03.086 [2024-07-12 17:14:02.468092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.086 [2024-07-12 17:14:02.468143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.086 qpair failed and we were unable to recover it. 00:25:03.087 [2024-07-12 17:14:02.468266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.087 [2024-07-12 17:14:02.468307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.087 qpair failed and we were unable to recover it. 00:25:03.087 [2024-07-12 17:14:02.468503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.087 [2024-07-12 17:14:02.468544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.087 qpair failed and we were unable to recover it. 00:25:03.087 [2024-07-12 17:14:02.468666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.087 [2024-07-12 17:14:02.468708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.087 qpair failed and we were unable to recover it. 00:25:03.087 [2024-07-12 17:14:02.468879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.087 [2024-07-12 17:14:02.468911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.087 qpair failed and we were unable to recover it. 00:25:03.087 [2024-07-12 17:14:02.469057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.087 [2024-07-12 17:14:02.469126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.087 qpair failed and we were unable to recover it. 00:25:03.087 [2024-07-12 17:14:02.469322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.087 [2024-07-12 17:14:02.469383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.087 qpair failed and we were unable to recover it. 00:25:03.087 [2024-07-12 17:14:02.469564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.087 [2024-07-12 17:14:02.469605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.087 qpair failed and we were unable to recover it. 
00:25:03.087 [2024-07-12 17:14:02.469792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.087 [2024-07-12 17:14:02.469825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.087 qpair failed and we were unable to recover it. 00:25:03.087 [2024-07-12 17:14:02.469952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.087 [2024-07-12 17:14:02.469984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.087 qpair failed and we were unable to recover it. 00:25:03.087 [2024-07-12 17:14:02.470147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.087 [2024-07-12 17:14:02.470188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.087 qpair failed and we were unable to recover it. 00:25:03.087 [2024-07-12 17:14:02.470336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.087 [2024-07-12 17:14:02.470378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.087 qpair failed and we were unable to recover it. 00:25:03.087 [2024-07-12 17:14:02.470509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.087 [2024-07-12 17:14:02.470550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.087 qpair failed and we were unable to recover it. 00:25:03.087 [2024-07-12 17:14:02.470698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.087 [2024-07-12 17:14:02.470749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.087 qpair failed and we were unable to recover it. 00:25:03.087 [2024-07-12 17:14:02.470934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.087 [2024-07-12 17:14:02.470966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.087 qpair failed and we were unable to recover it. 00:25:03.087 [2024-07-12 17:14:02.471073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.087 [2024-07-12 17:14:02.471105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.087 qpair failed and we were unable to recover it. 00:25:03.087 [2024-07-12 17:14:02.471262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.087 [2024-07-12 17:14:02.471316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.087 qpair failed and we were unable to recover it. 00:25:03.087 [2024-07-12 17:14:02.471465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.087 [2024-07-12 17:14:02.471506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.087 qpair failed and we were unable to recover it. 
00:25:03.087 [2024-07-12 17:14:02.471657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.087 [2024-07-12 17:14:02.471698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.087 qpair failed and we were unable to recover it. 00:25:03.087 [2024-07-12 17:14:02.471849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.087 [2024-07-12 17:14:02.471882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.087 qpair failed and we were unable to recover it. 00:25:03.087 [2024-07-12 17:14:02.472009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.087 [2024-07-12 17:14:02.472040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.087 qpair failed and we were unable to recover it. 00:25:03.087 [2024-07-12 17:14:02.472202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.087 [2024-07-12 17:14:02.472243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.087 qpair failed and we were unable to recover it. 00:25:03.087 [2024-07-12 17:14:02.472388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.087 [2024-07-12 17:14:02.472429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.087 qpair failed and we were unable to recover it. 00:25:03.087 [2024-07-12 17:14:02.472579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.087 [2024-07-12 17:14:02.472620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.087 qpair failed and we were unable to recover it. 00:25:03.087 [2024-07-12 17:14:02.472745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.087 [2024-07-12 17:14:02.472799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.087 qpair failed and we were unable to recover it. 00:25:03.087 [2024-07-12 17:14:02.472903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.087 [2024-07-12 17:14:02.472934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.087 qpair failed and we were unable to recover it. 00:25:03.087 [2024-07-12 17:14:02.473051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.087 [2024-07-12 17:14:02.473092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.087 qpair failed and we were unable to recover it. 00:25:03.087 [2024-07-12 17:14:02.473268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.087 [2024-07-12 17:14:02.473310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.087 qpair failed and we were unable to recover it. 
00:25:03.087 [2024-07-12 17:14:02.473444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.087 [2024-07-12 17:14:02.473485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.087 qpair failed and we were unable to recover it. 00:25:03.087 [2024-07-12 17:14:02.473658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.087 [2024-07-12 17:14:02.473699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.087 qpair failed and we were unable to recover it. 00:25:03.087 [2024-07-12 17:14:02.473866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.087 [2024-07-12 17:14:02.473899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.087 qpair failed and we were unable to recover it. 00:25:03.087 [2024-07-12 17:14:02.473999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.087 [2024-07-12 17:14:02.474030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.087 qpair failed and we were unable to recover it. 00:25:03.087 [2024-07-12 17:14:02.474173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.087 [2024-07-12 17:14:02.474204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.087 qpair failed and we were unable to recover it. 00:25:03.087 [2024-07-12 17:14:02.474383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.087 [2024-07-12 17:14:02.474423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.087 qpair failed and we were unable to recover it. 00:25:03.087 [2024-07-12 17:14:02.474598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.087 [2024-07-12 17:14:02.474639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.087 qpair failed and we were unable to recover it. 00:25:03.087 [2024-07-12 17:14:02.474771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.087 [2024-07-12 17:14:02.474820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.087 qpair failed and we were unable to recover it. 00:25:03.087 [2024-07-12 17:14:02.475011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.087 [2024-07-12 17:14:02.475066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.087 qpair failed and we were unable to recover it. 00:25:03.087 [2024-07-12 17:14:02.475225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.087 [2024-07-12 17:14:02.475285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.087 qpair failed and we were unable to recover it. 
00:25:03.087 [2024-07-12 17:14:02.475454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.087 [2024-07-12 17:14:02.475501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.087 qpair failed and we were unable to recover it. 00:25:03.087 [2024-07-12 17:14:02.475624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.087 [2024-07-12 17:14:02.475666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.087 qpair failed and we were unable to recover it. 00:25:03.087 [2024-07-12 17:14:02.475818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.087 [2024-07-12 17:14:02.475851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.087 qpair failed and we were unable to recover it. 00:25:03.087 [2024-07-12 17:14:02.475978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.088 [2024-07-12 17:14:02.476009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.088 qpair failed and we were unable to recover it. 00:25:03.088 [2024-07-12 17:14:02.476166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.088 [2024-07-12 17:14:02.476207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.088 qpair failed and we were unable to recover it. 00:25:03.088 [2024-07-12 17:14:02.476326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.088 [2024-07-12 17:14:02.476366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.088 qpair failed and we were unable to recover it. 00:25:03.088 [2024-07-12 17:14:02.476499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.088 [2024-07-12 17:14:02.476540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.088 qpair failed and we were unable to recover it. 00:25:03.088 [2024-07-12 17:14:02.476665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.088 [2024-07-12 17:14:02.476706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.088 qpair failed and we were unable to recover it. 00:25:03.088 [2024-07-12 17:14:02.476888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.088 [2024-07-12 17:14:02.476920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.088 qpair failed and we were unable to recover it. 00:25:03.088 [2024-07-12 17:14:02.477023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.088 [2024-07-12 17:14:02.477054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.088 qpair failed and we were unable to recover it. 
00:25:03.088 [2024-07-12 17:14:02.477161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.088 [2024-07-12 17:14:02.477193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.088 qpair failed and we were unable to recover it. 00:25:03.088 [2024-07-12 17:14:02.477332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.088 [2024-07-12 17:14:02.477384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.088 qpair failed and we were unable to recover it. 00:25:03.088 [2024-07-12 17:14:02.477509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.088 [2024-07-12 17:14:02.477551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.088 qpair failed and we were unable to recover it. 00:25:03.088 [2024-07-12 17:14:02.477688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.088 [2024-07-12 17:14:02.477749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.088 qpair failed and we were unable to recover it. 00:25:03.088 [2024-07-12 17:14:02.477914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.088 [2024-07-12 17:14:02.477946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.088 qpair failed and we were unable to recover it. 00:25:03.088 [2024-07-12 17:14:02.478061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.088 [2024-07-12 17:14:02.478115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.088 qpair failed and we were unable to recover it. 00:25:03.088 [2024-07-12 17:14:02.478260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.088 [2024-07-12 17:14:02.478301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.088 qpair failed and we were unable to recover it. 00:25:03.088 [2024-07-12 17:14:02.478476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.088 [2024-07-12 17:14:02.478517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.088 qpair failed and we were unable to recover it. 00:25:03.088 [2024-07-12 17:14:02.478665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.088 [2024-07-12 17:14:02.478707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.088 qpair failed and we were unable to recover it. 00:25:03.088 [2024-07-12 17:14:02.478922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.088 [2024-07-12 17:14:02.478986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.088 qpair failed and we were unable to recover it. 
00:25:03.088 [2024-07-12 17:14:02.479185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.088 [2024-07-12 17:14:02.479228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.088 qpair failed and we were unable to recover it. 00:25:03.088 [2024-07-12 17:14:02.479353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.088 [2024-07-12 17:14:02.479395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.088 qpair failed and we were unable to recover it. 00:25:03.088 [2024-07-12 17:14:02.479524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.088 [2024-07-12 17:14:02.479564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.088 qpair failed and we were unable to recover it. 00:25:03.088 [2024-07-12 17:14:02.479722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.088 [2024-07-12 17:14:02.479776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.088 qpair failed and we were unable to recover it. 00:25:03.088 [2024-07-12 17:14:02.479901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.088 [2024-07-12 17:14:02.479942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.088 qpair failed and we were unable to recover it. 00:25:03.088 [2024-07-12 17:14:02.480084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.088 [2024-07-12 17:14:02.480124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.088 qpair failed and we were unable to recover it. 00:25:03.088 [2024-07-12 17:14:02.480248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.088 [2024-07-12 17:14:02.480288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.088 qpair failed and we were unable to recover it. 00:25:03.088 [2024-07-12 17:14:02.480444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.088 [2024-07-12 17:14:02.480509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.088 qpair failed and we were unable to recover it. 00:25:03.088 [2024-07-12 17:14:02.480658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.088 [2024-07-12 17:14:02.480700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.088 qpair failed and we were unable to recover it. 00:25:03.088 [2024-07-12 17:14:02.480863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.088 [2024-07-12 17:14:02.480905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.088 qpair failed and we were unable to recover it. 
00:25:03.088 [2024-07-12 17:14:02.481091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.088 [2024-07-12 17:14:02.481153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.088 qpair failed and we were unable to recover it. 00:25:03.088 [2024-07-12 17:14:02.481288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.088 [2024-07-12 17:14:02.481367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.088 qpair failed and we were unable to recover it. 00:25:03.088 [2024-07-12 17:14:02.481553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.088 [2024-07-12 17:14:02.481595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.088 qpair failed and we were unable to recover it. 00:25:03.088 [2024-07-12 17:14:02.481774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.088 [2024-07-12 17:14:02.481816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.088 qpair failed and we were unable to recover it. 00:25:03.088 [2024-07-12 17:14:02.481960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.088 [2024-07-12 17:14:02.482001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.088 qpair failed and we were unable to recover it. 00:25:03.088 [2024-07-12 17:14:02.482176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.088 [2024-07-12 17:14:02.482217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.088 qpair failed and we were unable to recover it. 00:25:03.088 [2024-07-12 17:14:02.482347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.088 [2024-07-12 17:14:02.482388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.088 qpair failed and we were unable to recover it. 00:25:03.089 [2024-07-12 17:14:02.482537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.089 [2024-07-12 17:14:02.482578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.089 qpair failed and we were unable to recover it. 00:25:03.089 [2024-07-12 17:14:02.482729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.089 [2024-07-12 17:14:02.482780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.089 qpair failed and we were unable to recover it. 00:25:03.089 [2024-07-12 17:14:02.482931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.089 [2024-07-12 17:14:02.482973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.089 qpair failed and we were unable to recover it. 
00:25:03.089 [2024-07-12 17:14:02.483099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.089 [2024-07-12 17:14:02.483147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.089 qpair failed and we were unable to recover it. 00:25:03.089 [2024-07-12 17:14:02.483271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.089 [2024-07-12 17:14:02.483312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.089 qpair failed and we were unable to recover it. 00:25:03.089 [2024-07-12 17:14:02.483497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.089 [2024-07-12 17:14:02.483539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.089 qpair failed and we were unable to recover it. 00:25:03.089 [2024-07-12 17:14:02.483686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.089 [2024-07-12 17:14:02.483728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.089 qpair failed and we were unable to recover it. 00:25:03.089 [2024-07-12 17:14:02.483866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.089 [2024-07-12 17:14:02.483908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.089 qpair failed and we were unable to recover it. 00:25:03.089 [2024-07-12 17:14:02.484050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.089 [2024-07-12 17:14:02.484092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.089 qpair failed and we were unable to recover it. 00:25:03.089 [2024-07-12 17:14:02.484214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.089 [2024-07-12 17:14:02.484255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.089 qpair failed and we were unable to recover it. 00:25:03.089 [2024-07-12 17:14:02.484487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.089 [2024-07-12 17:14:02.484529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.089 qpair failed and we were unable to recover it. 00:25:03.089 [2024-07-12 17:14:02.484677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.089 [2024-07-12 17:14:02.484719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.089 qpair failed and we were unable to recover it. 00:25:03.089 [2024-07-12 17:14:02.484909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.089 [2024-07-12 17:14:02.484951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.089 qpair failed and we were unable to recover it. 
00:25:03.089 [2024-07-12 17:14:02.485126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.089 [2024-07-12 17:14:02.485168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.089 qpair failed and we were unable to recover it. 00:25:03.089 [2024-07-12 17:14:02.485316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.089 [2024-07-12 17:14:02.485357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.089 qpair failed and we were unable to recover it. 00:25:03.089 [2024-07-12 17:14:02.485531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.089 [2024-07-12 17:14:02.485573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.089 qpair failed and we were unable to recover it. 00:25:03.089 [2024-07-12 17:14:02.485720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.089 [2024-07-12 17:14:02.485788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.089 qpair failed and we were unable to recover it. 00:25:03.089 [2024-07-12 17:14:02.485922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.089 [2024-07-12 17:14:02.485964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.089 qpair failed and we were unable to recover it. 00:25:03.089 [2024-07-12 17:14:02.486163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.089 [2024-07-12 17:14:02.486203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.089 qpair failed and we were unable to recover it. 00:25:03.089 [2024-07-12 17:14:02.486350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.089 [2024-07-12 17:14:02.486391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.089 qpair failed and we were unable to recover it. 00:25:03.089 [2024-07-12 17:14:02.486538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.089 [2024-07-12 17:14:02.486579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.089 qpair failed and we were unable to recover it. 00:25:03.089 [2024-07-12 17:14:02.486729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.089 [2024-07-12 17:14:02.486796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.089 qpair failed and we were unable to recover it. 00:25:03.089 [2024-07-12 17:14:02.486948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.089 [2024-07-12 17:14:02.486990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.089 qpair failed and we were unable to recover it. 
00:25:03.089 [2024-07-12 17:14:02.487123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.089 [2024-07-12 17:14:02.487164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.089 qpair failed and we were unable to recover it. 00:25:03.089 [2024-07-12 17:14:02.487319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.089 [2024-07-12 17:14:02.487360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.089 qpair failed and we were unable to recover it. 00:25:03.089 [2024-07-12 17:14:02.487482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.089 [2024-07-12 17:14:02.487523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.089 qpair failed and we were unable to recover it. 00:25:03.089 [2024-07-12 17:14:02.487636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.089 [2024-07-12 17:14:02.487677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.089 qpair failed and we were unable to recover it. 00:25:03.089 [2024-07-12 17:14:02.487838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.089 [2024-07-12 17:14:02.487881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.089 qpair failed and we were unable to recover it. 00:25:03.089 [2024-07-12 17:14:02.488025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.089 [2024-07-12 17:14:02.488066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.089 qpair failed and we were unable to recover it. 00:25:03.089 [2024-07-12 17:14:02.488187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.089 [2024-07-12 17:14:02.488229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.089 qpair failed and we were unable to recover it. 00:25:03.089 [2024-07-12 17:14:02.488403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.089 [2024-07-12 17:14:02.488445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.089 qpair failed and we were unable to recover it. 00:25:03.089 [2024-07-12 17:14:02.488598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.089 [2024-07-12 17:14:02.488639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.089 qpair failed and we were unable to recover it. 00:25:03.089 [2024-07-12 17:14:02.488803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.089 [2024-07-12 17:14:02.488845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.089 qpair failed and we were unable to recover it. 
00:25:03.089 [2024-07-12 17:14:02.488991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.089 [2024-07-12 17:14:02.489032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.089 qpair failed and we were unable to recover it. 00:25:03.089 [2024-07-12 17:14:02.489153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.089 [2024-07-12 17:14:02.489194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.089 qpair failed and we were unable to recover it. 00:25:03.089 [2024-07-12 17:14:02.489333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.089 [2024-07-12 17:14:02.489374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.089 qpair failed and we were unable to recover it. 00:25:03.089 [2024-07-12 17:14:02.489537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.089 [2024-07-12 17:14:02.489578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.089 qpair failed and we were unable to recover it. 00:25:03.089 [2024-07-12 17:14:02.489736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.089 [2024-07-12 17:14:02.489787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.089 qpair failed and we were unable to recover it. 00:25:03.089 [2024-07-12 17:14:02.489935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.089 [2024-07-12 17:14:02.489976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.089 qpair failed and we were unable to recover it. 00:25:03.089 [2024-07-12 17:14:02.490119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.089 [2024-07-12 17:14:02.490161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.089 qpair failed and we were unable to recover it. 00:25:03.089 [2024-07-12 17:14:02.490282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.090 [2024-07-12 17:14:02.490323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.090 qpair failed and we were unable to recover it. 00:25:03.090 [2024-07-12 17:14:02.490483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.090 [2024-07-12 17:14:02.490525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.090 qpair failed and we were unable to recover it. 00:25:03.090 [2024-07-12 17:14:02.490684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.090 [2024-07-12 17:14:02.490725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.090 qpair failed and we were unable to recover it. 
00:25:03.090 [2024-07-12 17:14:02.490884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.090 [2024-07-12 17:14:02.490931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.090 qpair failed and we were unable to recover it. 00:25:03.090 [2024-07-12 17:14:02.491076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.090 [2024-07-12 17:14:02.491118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.090 qpair failed and we were unable to recover it. 00:25:03.090 [2024-07-12 17:14:02.491270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.090 [2024-07-12 17:14:02.491312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.090 qpair failed and we were unable to recover it. 00:25:03.090 [2024-07-12 17:14:02.491491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.090 [2024-07-12 17:14:02.491532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.090 qpair failed and we were unable to recover it. 00:25:03.090 [2024-07-12 17:14:02.491678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.090 [2024-07-12 17:14:02.491719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.090 qpair failed and we were unable to recover it. 00:25:03.090 [2024-07-12 17:14:02.491907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.090 [2024-07-12 17:14:02.491949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.090 qpair failed and we were unable to recover it. 00:25:03.090 [2024-07-12 17:14:02.492122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.090 [2024-07-12 17:14:02.492163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.090 qpair failed and we were unable to recover it. 00:25:03.090 [2024-07-12 17:14:02.492308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.090 [2024-07-12 17:14:02.492348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.090 qpair failed and we were unable to recover it. 00:25:03.090 [2024-07-12 17:14:02.492519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.090 [2024-07-12 17:14:02.492560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.090 qpair failed and we were unable to recover it. 00:25:03.090 [2024-07-12 17:14:02.492715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.090 [2024-07-12 17:14:02.492769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.090 qpair failed and we were unable to recover it. 
00:25:03.090 [2024-07-12 17:14:02.492900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.090 [2024-07-12 17:14:02.492941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.090 qpair failed and we were unable to recover it. 00:25:03.090 [2024-07-12 17:14:02.493202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.090 [2024-07-12 17:14:02.493243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.090 qpair failed and we were unable to recover it. 00:25:03.090 [2024-07-12 17:14:02.493395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.090 [2024-07-12 17:14:02.493436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.090 qpair failed and we were unable to recover it. 00:25:03.090 [2024-07-12 17:14:02.493591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.090 [2024-07-12 17:14:02.493632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.090 qpair failed and we were unable to recover it. 00:25:03.090 [2024-07-12 17:14:02.493776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.090 [2024-07-12 17:14:02.493819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.090 qpair failed and we were unable to recover it. 00:25:03.090 [2024-07-12 17:14:02.493969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.090 [2024-07-12 17:14:02.494025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.090 qpair failed and we were unable to recover it. 00:25:03.090 [2024-07-12 17:14:02.494246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.090 [2024-07-12 17:14:02.494298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.090 qpair failed and we were unable to recover it. 00:25:03.090 [2024-07-12 17:14:02.494505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.090 [2024-07-12 17:14:02.494557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.090 qpair failed and we were unable to recover it. 00:25:03.090 [2024-07-12 17:14:02.494719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.090 [2024-07-12 17:14:02.494769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.090 qpair failed and we were unable to recover it. 00:25:03.090 [2024-07-12 17:14:02.494946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.090 [2024-07-12 17:14:02.494987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.090 qpair failed and we were unable to recover it. 
00:25:03.090 [2024-07-12 17:14:02.495141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.090 [2024-07-12 17:14:02.495192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.090 qpair failed and we were unable to recover it. 00:25:03.090 [2024-07-12 17:14:02.495385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.090 [2024-07-12 17:14:02.495445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.090 qpair failed and we were unable to recover it. 00:25:03.090 [2024-07-12 17:14:02.495613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.090 [2024-07-12 17:14:02.495654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.090 qpair failed and we were unable to recover it. 00:25:03.090 [2024-07-12 17:14:02.495867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.090 [2024-07-12 17:14:02.495930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.090 qpair failed and we were unable to recover it. 00:25:03.090 [2024-07-12 17:14:02.496151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.090 [2024-07-12 17:14:02.496214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.090 qpair failed and we were unable to recover it. 00:25:03.090 [2024-07-12 17:14:02.496439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.090 [2024-07-12 17:14:02.496498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.090 qpair failed and we were unable to recover it. 00:25:03.090 [2024-07-12 17:14:02.496639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.090 [2024-07-12 17:14:02.496690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.090 qpair failed and we were unable to recover it. 00:25:03.090 [2024-07-12 17:14:02.496903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.090 [2024-07-12 17:14:02.496965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.090 qpair failed and we were unable to recover it. 00:25:03.090 [2024-07-12 17:14:02.497142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.090 [2024-07-12 17:14:02.497204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.090 qpair failed and we were unable to recover it. 00:25:03.090 [2024-07-12 17:14:02.497429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.090 [2024-07-12 17:14:02.497470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.090 qpair failed and we were unable to recover it. 
00:25:03.090 [2024-07-12 17:14:02.497679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.090 [2024-07-12 17:14:02.497730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.090 qpair failed and we were unable to recover it. 00:25:03.090 [2024-07-12 17:14:02.497923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.090 [2024-07-12 17:14:02.497985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.090 qpair failed and we were unable to recover it. 00:25:03.090 [2024-07-12 17:14:02.498165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.090 [2024-07-12 17:14:02.498224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.090 qpair failed and we were unable to recover it. 00:25:03.090 [2024-07-12 17:14:02.498377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.090 [2024-07-12 17:14:02.498418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.090 qpair failed and we were unable to recover it. 00:25:03.090 [2024-07-12 17:14:02.498580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.090 [2024-07-12 17:14:02.498621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.090 qpair failed and we were unable to recover it. 00:25:03.090 [2024-07-12 17:14:02.498849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.090 [2024-07-12 17:14:02.498902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.090 qpair failed and we were unable to recover it. 00:25:03.090 [2024-07-12 17:14:02.499056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.090 [2024-07-12 17:14:02.499107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.090 qpair failed and we were unable to recover it. 00:25:03.090 [2024-07-12 17:14:02.499322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.090 [2024-07-12 17:14:02.499363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.091 qpair failed and we were unable to recover it. 00:25:03.091 [2024-07-12 17:14:02.499521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.091 [2024-07-12 17:14:02.499567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.091 qpair failed and we were unable to recover it. 00:25:03.091 [2024-07-12 17:14:02.499787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.091 [2024-07-12 17:14:02.499830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.091 qpair failed and we were unable to recover it. 
00:25:03.091 [2024-07-12 17:14:02.500012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.091 [2024-07-12 17:14:02.500053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.091 qpair failed and we were unable to recover it. 00:25:03.091 [2024-07-12 17:14:02.500226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.091 [2024-07-12 17:14:02.500289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.091 qpair failed and we were unable to recover it. 00:25:03.091 [2024-07-12 17:14:02.500496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.091 [2024-07-12 17:14:02.500548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.091 qpair failed and we were unable to recover it. 00:25:03.091 [2024-07-12 17:14:02.500697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.091 [2024-07-12 17:14:02.500747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.091 qpair failed and we were unable to recover it. 00:25:03.091 [2024-07-12 17:14:02.500903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.091 [2024-07-12 17:14:02.500944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.091 qpair failed and we were unable to recover it. 00:25:03.091 [2024-07-12 17:14:02.501174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.091 [2024-07-12 17:14:02.501215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.091 qpair failed and we were unable to recover it. 00:25:03.091 [2024-07-12 17:14:02.501374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.091 [2024-07-12 17:14:02.501418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.091 qpair failed and we were unable to recover it. 00:25:03.091 [2024-07-12 17:14:02.501641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.091 [2024-07-12 17:14:02.501682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.091 qpair failed and we were unable to recover it. 00:25:03.091 [2024-07-12 17:14:02.501866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.091 [2024-07-12 17:14:02.501933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.091 qpair failed and we were unable to recover it. 00:25:03.091 [2024-07-12 17:14:02.502090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.091 [2024-07-12 17:14:02.502149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.091 qpair failed and we were unable to recover it. 
00:25:03.091 [2024-07-12 17:14:02.502354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.091 [2024-07-12 17:14:02.502415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.091 qpair failed and we were unable to recover it. 00:25:03.091 [2024-07-12 17:14:02.502622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.091 [2024-07-12 17:14:02.502663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.091 qpair failed and we were unable to recover it. 00:25:03.091 [2024-07-12 17:14:02.502835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.091 [2024-07-12 17:14:02.502903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.091 qpair failed and we were unable to recover it. 00:25:03.091 [2024-07-12 17:14:02.503074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.091 [2024-07-12 17:14:02.503134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.091 qpair failed and we were unable to recover it. 00:25:03.091 [2024-07-12 17:14:02.503274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.091 [2024-07-12 17:14:02.503326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.091 qpair failed and we were unable to recover it. 00:25:03.091 [2024-07-12 17:14:02.503439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.091 [2024-07-12 17:14:02.503480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.091 qpair failed and we were unable to recover it. 00:25:03.091 [2024-07-12 17:14:02.503656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.091 [2024-07-12 17:14:02.503707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.091 qpair failed and we were unable to recover it. 00:25:03.091 [2024-07-12 17:14:02.503933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.091 [2024-07-12 17:14:02.503975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.091 qpair failed and we were unable to recover it. 00:25:03.091 [2024-07-12 17:14:02.504159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.091 [2024-07-12 17:14:02.504227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.091 qpair failed and we were unable to recover it. 00:25:03.091 [2024-07-12 17:14:02.504376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.091 [2024-07-12 17:14:02.504427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.091 qpair failed and we were unable to recover it. 
00:25:03.091 [2024-07-12 17:14:02.504556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.091 [2024-07-12 17:14:02.504597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.091 qpair failed and we were unable to recover it. 00:25:03.091 [2024-07-12 17:14:02.504782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.091 [2024-07-12 17:14:02.504824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.091 qpair failed and we were unable to recover it. 00:25:03.091 [2024-07-12 17:14:02.505004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.091 [2024-07-12 17:14:02.505064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.091 qpair failed and we were unable to recover it. 00:25:03.091 [2024-07-12 17:14:02.505262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.091 [2024-07-12 17:14:02.505324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.091 qpair failed and we were unable to recover it. 00:25:03.091 [2024-07-12 17:14:02.505513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.091 [2024-07-12 17:14:02.505563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.091 qpair failed and we were unable to recover it. 00:25:03.091 [2024-07-12 17:14:02.505756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.091 [2024-07-12 17:14:02.505826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.091 qpair failed and we were unable to recover it. 00:25:03.091 [2024-07-12 17:14:02.506041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.091 [2024-07-12 17:14:02.506083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.091 qpair failed and we were unable to recover it. 00:25:03.091 [2024-07-12 17:14:02.506294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.091 [2024-07-12 17:14:02.506368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.091 qpair failed and we were unable to recover it. 00:25:03.091 [2024-07-12 17:14:02.506576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.091 [2024-07-12 17:14:02.506627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.091 qpair failed and we were unable to recover it. 00:25:03.091 [2024-07-12 17:14:02.506816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.091 [2024-07-12 17:14:02.506890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.091 qpair failed and we were unable to recover it. 
00:25:03.091 [2024-07-12 17:14:02.507074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.091 [2024-07-12 17:14:02.507139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.091 qpair failed and we were unable to recover it. 00:25:03.091 [2024-07-12 17:14:02.507357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.091 [2024-07-12 17:14:02.507398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.091 qpair failed and we were unable to recover it. 00:25:03.091 [2024-07-12 17:14:02.507581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.091 [2024-07-12 17:14:02.507630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.091 qpair failed and we were unable to recover it. 00:25:03.091 [2024-07-12 17:14:02.507846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.091 [2024-07-12 17:14:02.507905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.091 qpair failed and we were unable to recover it. 00:25:03.091 [2024-07-12 17:14:02.508092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.091 [2024-07-12 17:14:02.508154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.091 qpair failed and we were unable to recover it. 00:25:03.091 [2024-07-12 17:14:02.508304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.091 [2024-07-12 17:14:02.508365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.091 qpair failed and we were unable to recover it. 00:25:03.091 [2024-07-12 17:14:02.508652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.091 [2024-07-12 17:14:02.508693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.091 qpair failed and we were unable to recover it. 00:25:03.091 [2024-07-12 17:14:02.508870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.091 [2024-07-12 17:14:02.508937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-07-12 17:14:02.509120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-07-12 17:14:02.509181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-07-12 17:14:02.509302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-07-12 17:14:02.509354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 
00:25:03.092 [2024-07-12 17:14:02.509584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-07-12 17:14:02.509625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-07-12 17:14:02.509832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-07-12 17:14:02.509894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-07-12 17:14:02.510088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-07-12 17:14:02.510135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-07-12 17:14:02.510286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-07-12 17:14:02.510353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-07-12 17:14:02.510530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-07-12 17:14:02.510580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-07-12 17:14:02.510757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-07-12 17:14:02.510799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-07-12 17:14:02.511006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-07-12 17:14:02.511055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-07-12 17:14:02.511189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-07-12 17:14:02.511261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-07-12 17:14:02.511420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-07-12 17:14:02.511470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-07-12 17:14:02.511637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-07-12 17:14:02.511677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 
00:25:03.092 [2024-07-12 17:14:02.511876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-07-12 17:14:02.511918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-07-12 17:14:02.512075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-07-12 17:14:02.512116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-07-12 17:14:02.512309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-07-12 17:14:02.512360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-07-12 17:14:02.512519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-07-12 17:14:02.512560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-07-12 17:14:02.512803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-07-12 17:14:02.512857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-07-12 17:14:02.512993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-07-12 17:14:02.513045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-07-12 17:14:02.513247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-07-12 17:14:02.513288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-07-12 17:14:02.513470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-07-12 17:14:02.513510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-07-12 17:14:02.513661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-07-12 17:14:02.513702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-07-12 17:14:02.513916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-07-12 17:14:02.513988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 
00:25:03.092 [2024-07-12 17:14:02.514213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-07-12 17:14:02.514274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-07-12 17:14:02.514419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-07-12 17:14:02.514460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-07-12 17:14:02.514635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-07-12 17:14:02.514676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-07-12 17:14:02.514830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-07-12 17:14:02.514882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-07-12 17:14:02.515020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-07-12 17:14:02.515061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-07-12 17:14:02.515233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-07-12 17:14:02.515274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-07-12 17:14:02.515450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-07-12 17:14:02.515491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-07-12 17:14:02.515754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-07-12 17:14:02.515807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-07-12 17:14:02.515966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-07-12 17:14:02.516007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-07-12 17:14:02.516176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-07-12 17:14:02.516217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 
00:25:03.092 [2024-07-12 17:14:02.516422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-07-12 17:14:02.516474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-07-12 17:14:02.516623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-07-12 17:14:02.516664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-07-12 17:14:02.516862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-07-12 17:14:02.516905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-07-12 17:14:02.517073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-07-12 17:14:02.517135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-07-12 17:14:02.517347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-07-12 17:14:02.517398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-07-12 17:14:02.517578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-07-12 17:14:02.517631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-07-12 17:14:02.517792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-07-12 17:14:02.517845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-07-12 17:14:02.518013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-07-12 17:14:02.518079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-07-12 17:14:02.518211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-07-12 17:14:02.518289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-07-12 17:14:02.518441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-07-12 17:14:02.518482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 
00:25:03.093 [2024-07-12 17:14:02.518633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-07-12 17:14:02.518674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-07-12 17:14:02.518831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-07-12 17:14:02.518872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-07-12 17:14:02.519082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-07-12 17:14:02.519123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-07-12 17:14:02.519308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-07-12 17:14:02.519350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-07-12 17:14:02.519531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-07-12 17:14:02.519571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-07-12 17:14:02.519759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-07-12 17:14:02.519805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-07-12 17:14:02.520001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-07-12 17:14:02.520042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-07-12 17:14:02.520200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-07-12 17:14:02.520269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-07-12 17:14:02.520427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-07-12 17:14:02.520477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-07-12 17:14:02.520638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-07-12 17:14:02.520679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 
00:25:03.093 [2024-07-12 17:14:02.520846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-07-12 17:14:02.520925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-07-12 17:14:02.521094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-07-12 17:14:02.521157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-07-12 17:14:02.521378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-07-12 17:14:02.521418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-07-12 17:14:02.521583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-07-12 17:14:02.521624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-07-12 17:14:02.521821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-07-12 17:14:02.521875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-07-12 17:14:02.522074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-07-12 17:14:02.522116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-07-12 17:14:02.522248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-07-12 17:14:02.522298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-07-12 17:14:02.522471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-07-12 17:14:02.522512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-07-12 17:14:02.522720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-07-12 17:14:02.522770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-07-12 17:14:02.522935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-07-12 17:14:02.522994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 
00:25:03.093 [2024-07-12 17:14:02.523209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-07-12 17:14:02.523269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-07-12 17:14:02.523476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-07-12 17:14:02.523528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-07-12 17:14:02.523688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-07-12 17:14:02.523729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-07-12 17:14:02.523901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-07-12 17:14:02.523942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-07-12 17:14:02.524154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-07-12 17:14:02.524195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-07-12 17:14:02.524358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-07-12 17:14:02.524419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-07-12 17:14:02.524574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-07-12 17:14:02.524625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-07-12 17:14:02.524815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-07-12 17:14:02.524867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-07-12 17:14:02.525025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-07-12 17:14:02.525067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-07-12 17:14:02.525257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-07-12 17:14:02.525309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 
00:25:03.093 [2024-07-12 17:14:02.525497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-07-12 17:14:02.525546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-07-12 17:14:02.525733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-07-12 17:14:02.525805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-07-12 17:14:02.525967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-07-12 17:14:02.526008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-07-12 17:14:02.526167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-07-12 17:14:02.526208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-07-12 17:14:02.526394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-07-12 17:14:02.526435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-07-12 17:14:02.526617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-07-12 17:14:02.526658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-07-12 17:14:02.526814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-07-12 17:14:02.526862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-07-12 17:14:02.527057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-07-12 17:14:02.527119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-07-12 17:14:02.527269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-07-12 17:14:02.527332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-07-12 17:14:02.527541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-07-12 17:14:02.527583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 
00:25:03.094 [2024-07-12 17:14:02.527746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-07-12 17:14:02.527789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-07-12 17:14:02.527981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-07-12 17:14:02.528051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-07-12 17:14:02.528394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-07-12 17:14:02.528460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-07-12 17:14:02.528612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-07-12 17:14:02.528653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-07-12 17:14:02.528818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-07-12 17:14:02.528880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-07-12 17:14:02.529057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-07-12 17:14:02.529126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-07-12 17:14:02.529289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-07-12 17:14:02.529349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-07-12 17:14:02.529506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-07-12 17:14:02.529548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-07-12 17:14:02.529665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-07-12 17:14:02.529706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-07-12 17:14:02.529851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-07-12 17:14:02.529922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 
00:25:03.094 [2024-07-12 17:14:02.530199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-07-12 17:14:02.530261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-07-12 17:14:02.530472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-07-12 17:14:02.530513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-07-12 17:14:02.530718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-07-12 17:14:02.530781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-07-12 17:14:02.530970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-07-12 17:14:02.531041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-07-12 17:14:02.531236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-07-12 17:14:02.531278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-07-12 17:14:02.531444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-07-12 17:14:02.531508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-07-12 17:14:02.531671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-07-12 17:14:02.531712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-07-12 17:14:02.531963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-07-12 17:14:02.532005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-07-12 17:14:02.532159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-07-12 17:14:02.532200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-07-12 17:14:02.532408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-07-12 17:14:02.532452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 
00:25:03.094 [2024-07-12 17:14:02.532635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-07-12 17:14:02.532684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-07-12 17:14:02.532855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-07-12 17:14:02.532898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-07-12 17:14:02.533115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-07-12 17:14:02.533156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-07-12 17:14:02.533342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-07-12 17:14:02.533390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-07-12 17:14:02.533573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-07-12 17:14:02.533614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-07-12 17:14:02.533731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-07-12 17:14:02.533793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-07-12 17:14:02.534007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-07-12 17:14:02.534084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-07-12 17:14:02.534245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-07-12 17:14:02.534323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-07-12 17:14:02.534527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-07-12 17:14:02.534568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-07-12 17:14:02.534707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-07-12 17:14:02.534773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 
00:25:03.094 [2024-07-12 17:14:02.534947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-07-12 17:14:02.535009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-07-12 17:14:02.535166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-07-12 17:14:02.535227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-07-12 17:14:02.535386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-07-12 17:14:02.535426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-07-12 17:14:02.535566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-07-12 17:14:02.535618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 00:25:03.095 [2024-07-12 17:14:02.535825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-07-12 17:14:02.535876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 00:25:03.095 [2024-07-12 17:14:02.536054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-07-12 17:14:02.536095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 00:25:03.095 [2024-07-12 17:14:02.536282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-07-12 17:14:02.536333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 00:25:03.095 [2024-07-12 17:14:02.536565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-07-12 17:14:02.536606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 00:25:03.095 [2024-07-12 17:14:02.536760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-07-12 17:14:02.536802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 00:25:03.095 [2024-07-12 17:14:02.537005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-07-12 17:14:02.537078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 
00:25:03.095 [2024-07-12 17:14:02.537238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-07-12 17:14:02.537300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 00:25:03.095 [2024-07-12 17:14:02.537484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-07-12 17:14:02.537525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 00:25:03.095 [2024-07-12 17:14:02.537684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-07-12 17:14:02.537725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 00:25:03.095 [2024-07-12 17:14:02.537926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-07-12 17:14:02.537985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 00:25:03.095 [2024-07-12 17:14:02.538161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-07-12 17:14:02.538222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 00:25:03.095 [2024-07-12 17:14:02.538427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-07-12 17:14:02.538468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 00:25:03.095 [2024-07-12 17:14:02.538635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-07-12 17:14:02.538675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 00:25:03.095 [2024-07-12 17:14:02.538895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-07-12 17:14:02.538948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 00:25:03.095 [2024-07-12 17:14:02.539152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-07-12 17:14:02.539193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 00:25:03.095 [2024-07-12 17:14:02.539347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-07-12 17:14:02.539423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 
00:25:03.095 [2024-07-12 17:14:02.539641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-07-12 17:14:02.539682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 00:25:03.095 [2024-07-12 17:14:02.539879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-07-12 17:14:02.539939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 00:25:03.095 [2024-07-12 17:14:02.540115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-07-12 17:14:02.540175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 00:25:03.095 [2024-07-12 17:14:02.540350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-07-12 17:14:02.540410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 00:25:03.095 [2024-07-12 17:14:02.540583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-07-12 17:14:02.540623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 00:25:03.095 [2024-07-12 17:14:02.540812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-07-12 17:14:02.540855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 00:25:03.095 [2024-07-12 17:14:02.540980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-07-12 17:14:02.541021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 00:25:03.095 [2024-07-12 17:14:02.541230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-07-12 17:14:02.541273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 00:25:03.095 [2024-07-12 17:14:02.541413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-07-12 17:14:02.541463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 00:25:03.095 [2024-07-12 17:14:02.541651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-07-12 17:14:02.541691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 
00:25:03.095 [2024-07-12 17:14:02.541836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-07-12 17:14:02.541878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 00:25:03.095 [2024-07-12 17:14:02.542015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-07-12 17:14:02.542056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 00:25:03.095 [2024-07-12 17:14:02.542191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-07-12 17:14:02.542238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 00:25:03.095 [2024-07-12 17:14:02.542475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-07-12 17:14:02.542515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 00:25:03.095 [2024-07-12 17:14:02.542649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-07-12 17:14:02.542691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 00:25:03.095 [2024-07-12 17:14:02.542948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-07-12 17:14:02.542990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 00:25:03.095 [2024-07-12 17:14:02.543140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-07-12 17:14:02.543181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 00:25:03.095 [2024-07-12 17:14:02.543360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-07-12 17:14:02.543408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 00:25:03.095 [2024-07-12 17:14:02.543566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-07-12 17:14:02.543618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 00:25:03.095 [2024-07-12 17:14:02.543782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-07-12 17:14:02.543824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 
00:25:03.095 [2024-07-12 17:14:02.543987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-07-12 17:14:02.544027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 00:25:03.095 [2024-07-12 17:14:02.544219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-07-12 17:14:02.544260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 00:25:03.095 [2024-07-12 17:14:02.544465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-07-12 17:14:02.544517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 00:25:03.095 [2024-07-12 17:14:02.544731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-07-12 17:14:02.544794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.096 qpair failed and we were unable to recover it. 00:25:03.096 [2024-07-12 17:14:02.544986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.096 [2024-07-12 17:14:02.545058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.096 qpair failed and we were unable to recover it. 00:25:03.096 [2024-07-12 17:14:02.545244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.096 [2024-07-12 17:14:02.545308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.096 qpair failed and we were unable to recover it. 00:25:03.096 [2024-07-12 17:14:02.545485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.096 [2024-07-12 17:14:02.545525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.096 qpair failed and we were unable to recover it. 00:25:03.096 [2024-07-12 17:14:02.545729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.096 [2024-07-12 17:14:02.545782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.096 qpair failed and we were unable to recover it. 00:25:03.096 [2024-07-12 17:14:02.545953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.096 [2024-07-12 17:14:02.546021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.096 qpair failed and we were unable to recover it. 00:25:03.096 [2024-07-12 17:14:02.546176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.096 [2024-07-12 17:14:02.546239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.096 qpair failed and we were unable to recover it. 
00:25:03.096 [2024-07-12 17:14:02.546396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.096 [2024-07-12 17:14:02.546437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.096 qpair failed and we were unable to recover it. 00:25:03.096 [2024-07-12 17:14:02.546658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.096 [2024-07-12 17:14:02.546699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.096 qpair failed and we were unable to recover it. 00:25:03.096 [2024-07-12 17:14:02.546940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.096 [2024-07-12 17:14:02.546981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.096 qpair failed and we were unable to recover it. 00:25:03.096 [2024-07-12 17:14:02.547195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.096 [2024-07-12 17:14:02.547258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.096 qpair failed and we were unable to recover it. 00:25:03.096 [2024-07-12 17:14:02.547406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.096 [2024-07-12 17:14:02.547480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.096 qpair failed and we were unable to recover it. 00:25:03.096 [2024-07-12 17:14:02.547636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.096 [2024-07-12 17:14:02.547688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.096 qpair failed and we were unable to recover it. 00:25:03.096 [2024-07-12 17:14:02.547881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.096 [2024-07-12 17:14:02.547945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.096 qpair failed and we were unable to recover it. 00:25:03.096 [2024-07-12 17:14:02.548079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.096 [2024-07-12 17:14:02.548148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.096 qpair failed and we were unable to recover it. 00:25:03.096 [2024-07-12 17:14:02.548311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.096 [2024-07-12 17:14:02.548376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.096 qpair failed and we were unable to recover it. 00:25:03.096 [2024-07-12 17:14:02.548509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.096 [2024-07-12 17:14:02.548551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.096 qpair failed and we were unable to recover it. 
00:25:03.096 [2024-07-12 17:14:02.548706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.096 [2024-07-12 17:14:02.548777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.096 qpair failed and we were unable to recover it. 00:25:03.096 [2024-07-12 17:14:02.548932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.096 [2024-07-12 17:14:02.548983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.096 qpair failed and we were unable to recover it. 00:25:03.096 [2024-07-12 17:14:02.549183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.096 [2024-07-12 17:14:02.549223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.096 qpair failed and we were unable to recover it. 00:25:03.096 [2024-07-12 17:14:02.549406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.096 [2024-07-12 17:14:02.549446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.096 qpair failed and we were unable to recover it. 00:25:03.096 [2024-07-12 17:14:02.549639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.096 [2024-07-12 17:14:02.549692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.096 qpair failed and we were unable to recover it. 00:25:03.096 [2024-07-12 17:14:02.549943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.096 [2024-07-12 17:14:02.550004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.096 qpair failed and we were unable to recover it. 00:25:03.096 [2024-07-12 17:14:02.550222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.096 [2024-07-12 17:14:02.550263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.096 qpair failed and we were unable to recover it. 00:25:03.096 [2024-07-12 17:14:02.550420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.096 [2024-07-12 17:14:02.550481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.096 qpair failed and we were unable to recover it. 00:25:03.096 [2024-07-12 17:14:02.550613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.096 [2024-07-12 17:14:02.550666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.096 qpair failed and we were unable to recover it. 00:25:03.096 [2024-07-12 17:14:02.550835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.096 [2024-07-12 17:14:02.550908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.096 qpair failed and we were unable to recover it. 
00:25:03.096 [2024-07-12 17:14:02.551122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.096 [2024-07-12 17:14:02.551184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.096 qpair failed and we were unable to recover it. 00:25:03.096 [2024-07-12 17:14:02.551377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.096 [2024-07-12 17:14:02.551439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.096 qpair failed and we were unable to recover it. 00:25:03.096 [2024-07-12 17:14:02.551597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.096 [2024-07-12 17:14:02.551648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.096 qpair failed and we were unable to recover it. 00:25:03.096 [2024-07-12 17:14:02.551877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.096 [2024-07-12 17:14:02.551940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.096 qpair failed and we were unable to recover it. 00:25:03.096 [2024-07-12 17:14:02.552107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.096 [2024-07-12 17:14:02.552148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.096 qpair failed and we were unable to recover it. 00:25:03.096 [2024-07-12 17:14:02.552377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.096 [2024-07-12 17:14:02.552427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.096 qpair failed and we were unable to recover it. 00:25:03.096 [2024-07-12 17:14:02.552674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.096 [2024-07-12 17:14:02.552716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.096 qpair failed and we were unable to recover it. 00:25:03.096 [2024-07-12 17:14:02.552878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.096 [2024-07-12 17:14:02.552926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.096 qpair failed and we were unable to recover it. 00:25:03.096 [2024-07-12 17:14:02.553108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.096 [2024-07-12 17:14:02.553149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.096 qpair failed and we were unable to recover it. 00:25:03.096 [2024-07-12 17:14:02.553307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.096 [2024-07-12 17:14:02.553359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.096 qpair failed and we were unable to recover it. 
00:25:03.096 [2024-07-12 17:14:02.553543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.096 [2024-07-12 17:14:02.553583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.096 qpair failed and we were unable to recover it. 00:25:03.096 [2024-07-12 17:14:02.553747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.096 [2024-07-12 17:14:02.553789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.096 qpair failed and we were unable to recover it. 00:25:03.096 [2024-07-12 17:14:02.553985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.096 [2024-07-12 17:14:02.554026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.096 qpair failed and we were unable to recover it. 00:25:03.096 [2024-07-12 17:14:02.554149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.096 [2024-07-12 17:14:02.554190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.096 qpair failed and we were unable to recover it. 00:25:03.096 [2024-07-12 17:14:02.554369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.097 [2024-07-12 17:14:02.554410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.097 qpair failed and we were unable to recover it. 00:25:03.097 [2024-07-12 17:14:02.554573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.097 [2024-07-12 17:14:02.554614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.097 qpair failed and we were unable to recover it. 00:25:03.097 [2024-07-12 17:14:02.554765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.097 [2024-07-12 17:14:02.554814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.097 qpair failed and we were unable to recover it. 00:25:03.097 [2024-07-12 17:14:02.555045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.097 [2024-07-12 17:14:02.555107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.097 qpair failed and we were unable to recover it. 00:25:03.097 [2024-07-12 17:14:02.555276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.097 [2024-07-12 17:14:02.555346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.097 qpair failed and we were unable to recover it. 00:25:03.097 [2024-07-12 17:14:02.555556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.097 [2024-07-12 17:14:02.555608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.097 qpair failed and we were unable to recover it. 
00:25:03.097 [2024-07-12 17:14:02.555729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.097 [2024-07-12 17:14:02.555787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.097 qpair failed and we were unable to recover it. 00:25:03.097 [2024-07-12 17:14:02.556045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.097 [2024-07-12 17:14:02.556085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.097 qpair failed and we were unable to recover it. 00:25:03.097 [2024-07-12 17:14:02.556292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.097 [2024-07-12 17:14:02.556352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.097 qpair failed and we were unable to recover it. 00:25:03.097 [2024-07-12 17:14:02.556557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.097 [2024-07-12 17:14:02.556598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.097 qpair failed and we were unable to recover it. 00:25:03.097 [2024-07-12 17:14:02.556782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.097 [2024-07-12 17:14:02.556825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.097 qpair failed and we were unable to recover it. 00:25:03.097 [2024-07-12 17:14:02.557032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.097 [2024-07-12 17:14:02.557104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.097 qpair failed and we were unable to recover it. 00:25:03.097 [2024-07-12 17:14:02.557266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.097 [2024-07-12 17:14:02.557325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.097 qpair failed and we were unable to recover it. 00:25:03.097 [2024-07-12 17:14:02.557447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.097 [2024-07-12 17:14:02.557489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.097 qpair failed and we were unable to recover it. 00:25:03.097 [2024-07-12 17:14:02.557670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.097 [2024-07-12 17:14:02.557710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.097 qpair failed and we were unable to recover it. 00:25:03.097 [2024-07-12 17:14:02.557943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.097 [2024-07-12 17:14:02.558001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.097 qpair failed and we were unable to recover it. 
00:25:03.097 [2024-07-12 17:14:02.558181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.097 [2024-07-12 17:14:02.558248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.097 qpair failed and we were unable to recover it. 00:25:03.097 [2024-07-12 17:14:02.558405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.097 [2024-07-12 17:14:02.558445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.097 qpair failed and we were unable to recover it. 00:25:03.097 [2024-07-12 17:14:02.558562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.097 [2024-07-12 17:14:02.558613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.097 qpair failed and we were unable to recover it. 00:25:03.097 [2024-07-12 17:14:02.558913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.097 [2024-07-12 17:14:02.558955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.097 qpair failed and we were unable to recover it. 00:25:03.097 [2024-07-12 17:14:02.559211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.097 [2024-07-12 17:14:02.559252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.097 qpair failed and we were unable to recover it. 00:25:03.097 [2024-07-12 17:14:02.559405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.097 [2024-07-12 17:14:02.559458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.097 qpair failed and we were unable to recover it. 00:25:03.097 [2024-07-12 17:14:02.559588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.097 [2024-07-12 17:14:02.559640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.097 qpair failed and we were unable to recover it. 00:25:03.097 [2024-07-12 17:14:02.559815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.097 [2024-07-12 17:14:02.559868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.097 qpair failed and we were unable to recover it. 00:25:03.097 [2024-07-12 17:14:02.560076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.097 [2024-07-12 17:14:02.560117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.097 qpair failed and we were unable to recover it. 00:25:03.097 [2024-07-12 17:14:02.560311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.097 [2024-07-12 17:14:02.560371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.097 qpair failed and we were unable to recover it. 
00:25:03.097 [2024-07-12 17:14:02.560557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.097 [2024-07-12 17:14:02.560598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.097 qpair failed and we were unable to recover it. 00:25:03.097 [2024-07-12 17:14:02.560761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.097 [2024-07-12 17:14:02.560803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.097 qpair failed and we were unable to recover it. 00:25:03.097 [2024-07-12 17:14:02.561030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.097 [2024-07-12 17:14:02.561104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.097 qpair failed and we were unable to recover it. 00:25:03.097 [2024-07-12 17:14:02.561321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.097 [2024-07-12 17:14:02.561381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.097 qpair failed and we were unable to recover it. 00:25:03.097 [2024-07-12 17:14:02.561588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.097 [2024-07-12 17:14:02.561640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.097 qpair failed and we were unable to recover it. 00:25:03.097 [2024-07-12 17:14:02.561819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.097 [2024-07-12 17:14:02.561881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.097 qpair failed and we were unable to recover it. 00:25:03.097 [2024-07-12 17:14:02.562054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.097 [2024-07-12 17:14:02.562117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.097 qpair failed and we were unable to recover it. 00:25:03.097 [2024-07-12 17:14:02.562319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.097 [2024-07-12 17:14:02.562376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.097 qpair failed and we were unable to recover it. 00:25:03.097 [2024-07-12 17:14:02.562530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.097 [2024-07-12 17:14:02.562571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.097 qpair failed and we were unable to recover it. 00:25:03.097 [2024-07-12 17:14:02.562804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.097 [2024-07-12 17:14:02.562870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.098 qpair failed and we were unable to recover it. 
00:25:03.098 [2024-07-12 17:14:02.563066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.098 [2024-07-12 17:14:02.563129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.098 qpair failed and we were unable to recover it. 00:25:03.098 [2024-07-12 17:14:02.563319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.098 [2024-07-12 17:14:02.563389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.098 qpair failed and we were unable to recover it. 00:25:03.098 [2024-07-12 17:14:02.563544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.098 [2024-07-12 17:14:02.563595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.098 qpair failed and we were unable to recover it. 00:25:03.098 [2024-07-12 17:14:02.563773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.098 [2024-07-12 17:14:02.563815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.098 qpair failed and we were unable to recover it. 00:25:03.098 [2024-07-12 17:14:02.563998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.098 [2024-07-12 17:14:02.564061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.098 qpair failed and we were unable to recover it. 00:25:03.098 [2024-07-12 17:14:02.564217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.098 [2024-07-12 17:14:02.564286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.098 qpair failed and we were unable to recover it. 00:25:03.098 [2024-07-12 17:14:02.564571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.098 [2024-07-12 17:14:02.564611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.098 qpair failed and we were unable to recover it. 00:25:03.098 [2024-07-12 17:14:02.564820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.098 [2024-07-12 17:14:02.564873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.098 qpair failed and we were unable to recover it. 00:25:03.098 [2024-07-12 17:14:02.565054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.098 [2024-07-12 17:14:02.565107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.098 qpair failed and we were unable to recover it. 00:25:03.098 [2024-07-12 17:14:02.565291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.098 [2024-07-12 17:14:02.565332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.098 qpair failed and we were unable to recover it. 
00:25:03.098 [2024-07-12 17:14:02.565456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.098 [2024-07-12 17:14:02.565497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.098 qpair failed and we were unable to recover it. 00:25:03.098 [2024-07-12 17:14:02.565758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.098 [2024-07-12 17:14:02.565811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.098 qpair failed and we were unable to recover it. 00:25:03.098 [2024-07-12 17:14:02.565950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.098 [2024-07-12 17:14:02.566016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.098 qpair failed and we were unable to recover it. 00:25:03.098 [2024-07-12 17:14:02.566239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.098 [2024-07-12 17:14:02.566299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.098 qpair failed and we were unable to recover it. 00:25:03.098 [2024-07-12 17:14:02.566465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.098 [2024-07-12 17:14:02.566527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.098 qpair failed and we were unable to recover it. 00:25:03.098 [2024-07-12 17:14:02.566683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.098 [2024-07-12 17:14:02.566727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.098 qpair failed and we were unable to recover it. 00:25:03.098 [2024-07-12 17:14:02.566967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.098 [2024-07-12 17:14:02.567029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.098 qpair failed and we were unable to recover it. 00:25:03.098 [2024-07-12 17:14:02.567226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.098 [2024-07-12 17:14:02.567290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.098 qpair failed and we were unable to recover it. 00:25:03.098 [2024-07-12 17:14:02.567501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.098 [2024-07-12 17:14:02.567561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.098 qpair failed and we were unable to recover it. 00:25:03.098 [2024-07-12 17:14:02.567772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.098 [2024-07-12 17:14:02.567837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.098 qpair failed and we were unable to recover it. 
00:25:03.098 [2024-07-12 17:14:02.568053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.098 [2024-07-12 17:14:02.568123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.098 qpair failed and we were unable to recover it. 00:25:03.098 [2024-07-12 17:14:02.568336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.098 [2024-07-12 17:14:02.568400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.098 qpair failed and we were unable to recover it. 00:25:03.098 [2024-07-12 17:14:02.568555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.098 [2024-07-12 17:14:02.568596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.098 qpair failed and we were unable to recover it. 00:25:03.098 [2024-07-12 17:14:02.568769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.098 [2024-07-12 17:14:02.568811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.098 qpair failed and we were unable to recover it. 00:25:03.098 [2024-07-12 17:14:02.568977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.098 [2024-07-12 17:14:02.569044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.098 qpair failed and we were unable to recover it. 00:25:03.098 [2024-07-12 17:14:02.569246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.098 [2024-07-12 17:14:02.569309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.098 qpair failed and we were unable to recover it. 00:25:03.098 [2024-07-12 17:14:02.569470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.098 [2024-07-12 17:14:02.569522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.098 qpair failed and we were unable to recover it. 00:25:03.098 [2024-07-12 17:14:02.569731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.098 [2024-07-12 17:14:02.569793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.098 qpair failed and we were unable to recover it. 00:25:03.098 [2024-07-12 17:14:02.569950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.098 [2024-07-12 17:14:02.570000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.098 qpair failed and we were unable to recover it. 00:25:03.098 [2024-07-12 17:14:02.570188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.098 [2024-07-12 17:14:02.570238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.098 qpair failed and we were unable to recover it. 
00:25:03.098 [2024-07-12 17:14:02.570454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.098 [2024-07-12 17:14:02.570506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.098 qpair failed and we were unable to recover it. 00:25:03.098 [2024-07-12 17:14:02.570638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.098 [2024-07-12 17:14:02.570691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.098 qpair failed and we were unable to recover it. 00:25:03.098 [2024-07-12 17:14:02.570870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.098 [2024-07-12 17:14:02.570921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.098 qpair failed and we were unable to recover it. 00:25:03.098 [2024-07-12 17:14:02.571080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.098 [2024-07-12 17:14:02.571131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.098 qpair failed and we were unable to recover it. 00:25:03.098 [2024-07-12 17:14:02.571285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.098 [2024-07-12 17:14:02.571325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.098 qpair failed and we were unable to recover it. 00:25:03.098 [2024-07-12 17:14:02.571522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.098 [2024-07-12 17:14:02.571563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.098 qpair failed and we were unable to recover it. 00:25:03.098 [2024-07-12 17:14:02.571726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.098 [2024-07-12 17:14:02.571789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.098 qpair failed and we were unable to recover it. 00:25:03.098 [2024-07-12 17:14:02.571925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.098 [2024-07-12 17:14:02.571972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.098 qpair failed and we were unable to recover it. 00:25:03.098 [2024-07-12 17:14:02.572097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.098 [2024-07-12 17:14:02.572138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.098 qpair failed and we were unable to recover it. 00:25:03.098 [2024-07-12 17:14:02.572345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.098 [2024-07-12 17:14:02.572387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.098 qpair failed and we were unable to recover it. 
00:25:03.099 [2024-07-12 17:14:02.572535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.099 [2024-07-12 17:14:02.572576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420
00:25:03.099 qpair failed and we were unable to recover it.
00:25:03.099 [... the same three-line error sequence — posix_sock_create connect() failed (errno = 111), nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." — repeats continuously for every connection attempt between 17:14:02.572 and 17:14:02.622 ...]
00:25:03.104 [2024-07-12 17:14:02.622729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.104 [2024-07-12 17:14:02.622803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420
00:25:03.104 qpair failed and we were unable to recover it.
00:25:03.104 [2024-07-12 17:14:02.622966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-07-12 17:14:02.623033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.104 qpair failed and we were unable to recover it. 00:25:03.104 [2024-07-12 17:14:02.623230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-07-12 17:14:02.623291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.104 qpair failed and we were unable to recover it. 00:25:03.104 [2024-07-12 17:14:02.623445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-07-12 17:14:02.623493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.104 qpair failed and we were unable to recover it. 00:25:03.104 [2024-07-12 17:14:02.623710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-07-12 17:14:02.623763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.104 qpair failed and we were unable to recover it. 00:25:03.104 [2024-07-12 17:14:02.623946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-07-12 17:14:02.623987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.104 qpair failed and we were unable to recover it. 00:25:03.104 [2024-07-12 17:14:02.624152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-07-12 17:14:02.624193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.104 qpair failed and we were unable to recover it. 00:25:03.104 [2024-07-12 17:14:02.624375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-07-12 17:14:02.624416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.104 qpair failed and we were unable to recover it. 00:25:03.104 [2024-07-12 17:14:02.624621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-07-12 17:14:02.624665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.104 qpair failed and we were unable to recover it. 00:25:03.104 [2024-07-12 17:14:02.624831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-07-12 17:14:02.624893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.104 qpair failed and we were unable to recover it. 00:25:03.104 [2024-07-12 17:14:02.625055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-07-12 17:14:02.625096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.104 qpair failed and we were unable to recover it. 
00:25:03.104 [2024-07-12 17:14:02.625219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-07-12 17:14:02.625260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.104 qpair failed and we were unable to recover it. 00:25:03.104 [2024-07-12 17:14:02.625574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-07-12 17:14:02.625634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.104 qpair failed and we were unable to recover it. 00:25:03.104 [2024-07-12 17:14:02.625862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-07-12 17:14:02.625904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.104 qpair failed and we were unable to recover it. 00:25:03.104 [2024-07-12 17:14:02.626183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-07-12 17:14:02.626242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.104 qpair failed and we were unable to recover it. 00:25:03.104 [2024-07-12 17:14:02.626556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-07-12 17:14:02.626625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.104 qpair failed and we were unable to recover it. 00:25:03.104 [2024-07-12 17:14:02.626786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-07-12 17:14:02.626834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.104 qpair failed and we were unable to recover it. 00:25:03.104 [2024-07-12 17:14:02.627051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-07-12 17:14:02.627110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.104 qpair failed and we were unable to recover it. 00:25:03.104 [2024-07-12 17:14:02.627272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-07-12 17:14:02.627333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.104 qpair failed and we were unable to recover it. 00:25:03.104 [2024-07-12 17:14:02.627492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-07-12 17:14:02.627537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.104 qpair failed and we were unable to recover it. 00:25:03.104 [2024-07-12 17:14:02.627696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-07-12 17:14:02.627747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.104 qpair failed and we were unable to recover it. 
00:25:03.104 [2024-07-12 17:14:02.627939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-07-12 17:14:02.628000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.104 qpair failed and we were unable to recover it. 00:25:03.104 [2024-07-12 17:14:02.628190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-07-12 17:14:02.628251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.104 qpair failed and we were unable to recover it. 00:25:03.104 [2024-07-12 17:14:02.628463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-07-12 17:14:02.628525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.104 qpair failed and we were unable to recover it. 00:25:03.104 [2024-07-12 17:14:02.628677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-07-12 17:14:02.628729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.104 qpair failed and we were unable to recover it. 00:25:03.104 [2024-07-12 17:14:02.628904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-07-12 17:14:02.628956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.104 qpair failed and we were unable to recover it. 00:25:03.104 [2024-07-12 17:14:02.629189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-07-12 17:14:02.629241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.104 qpair failed and we were unable to recover it. 00:25:03.104 [2024-07-12 17:14:02.629358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-07-12 17:14:02.629399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.104 qpair failed and we were unable to recover it. 00:25:03.104 [2024-07-12 17:14:02.629613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-07-12 17:14:02.629654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.105 [2024-07-12 17:14:02.629829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-07-12 17:14:02.629891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.105 [2024-07-12 17:14:02.630122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-07-12 17:14:02.630187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 
00:25:03.105 [2024-07-12 17:14:02.630386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-07-12 17:14:02.630446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.105 [2024-07-12 17:14:02.630611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-07-12 17:14:02.630652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.105 [2024-07-12 17:14:02.630881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-07-12 17:14:02.630942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.105 [2024-07-12 17:14:02.631110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-07-12 17:14:02.631169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.105 [2024-07-12 17:14:02.631387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-07-12 17:14:02.631446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.105 [2024-07-12 17:14:02.631622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-07-12 17:14:02.631664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.105 [2024-07-12 17:14:02.631893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-07-12 17:14:02.631951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.105 [2024-07-12 17:14:02.632114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-07-12 17:14:02.632175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.105 [2024-07-12 17:14:02.632387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-07-12 17:14:02.632445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.105 [2024-07-12 17:14:02.632624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-07-12 17:14:02.632665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 
00:25:03.105 [2024-07-12 17:14:02.632826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-07-12 17:14:02.632889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.105 [2024-07-12 17:14:02.633043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-07-12 17:14:02.633084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.105 [2024-07-12 17:14:02.633271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-07-12 17:14:02.633311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.105 [2024-07-12 17:14:02.633461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-07-12 17:14:02.633501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.105 [2024-07-12 17:14:02.633619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-07-12 17:14:02.633659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.105 [2024-07-12 17:14:02.633775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-07-12 17:14:02.633816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.105 [2024-07-12 17:14:02.633988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-07-12 17:14:02.634055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.105 [2024-07-12 17:14:02.634218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-07-12 17:14:02.634281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.105 [2024-07-12 17:14:02.634415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-07-12 17:14:02.634456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.105 [2024-07-12 17:14:02.634626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-07-12 17:14:02.634666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 
00:25:03.105 [2024-07-12 17:14:02.634836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-07-12 17:14:02.634909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.105 [2024-07-12 17:14:02.635077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-07-12 17:14:02.635145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.105 [2024-07-12 17:14:02.635263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-07-12 17:14:02.635303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.105 [2024-07-12 17:14:02.635421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-07-12 17:14:02.635462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.105 [2024-07-12 17:14:02.635661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-07-12 17:14:02.635708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.105 [2024-07-12 17:14:02.635897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-07-12 17:14:02.635938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.105 [2024-07-12 17:14:02.636081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-07-12 17:14:02.636121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.105 [2024-07-12 17:14:02.636296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-07-12 17:14:02.636336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.105 [2024-07-12 17:14:02.636472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-07-12 17:14:02.636512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.105 [2024-07-12 17:14:02.636689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-07-12 17:14:02.636730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 
00:25:03.105 [2024-07-12 17:14:02.636886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-07-12 17:14:02.636957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.105 [2024-07-12 17:14:02.637125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-07-12 17:14:02.637187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.105 [2024-07-12 17:14:02.637307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-07-12 17:14:02.637348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-07-12 17:14:02.637500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-07-12 17:14:02.637541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-07-12 17:14:02.637688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-07-12 17:14:02.637729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-07-12 17:14:02.637892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-07-12 17:14:02.637933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-07-12 17:14:02.638052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-07-12 17:14:02.638093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-07-12 17:14:02.638225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-07-12 17:14:02.638266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-07-12 17:14:02.638415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-07-12 17:14:02.638455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-07-12 17:14:02.638639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-07-12 17:14:02.638680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 
00:25:03.106 [2024-07-12 17:14:02.638852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-07-12 17:14:02.638895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-07-12 17:14:02.639041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-07-12 17:14:02.639082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-07-12 17:14:02.639257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-07-12 17:14:02.639298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-07-12 17:14:02.639445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-07-12 17:14:02.639486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-07-12 17:14:02.639602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-07-12 17:14:02.639642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-07-12 17:14:02.639793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-07-12 17:14:02.639836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-07-12 17:14:02.639985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-07-12 17:14:02.640026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-07-12 17:14:02.640178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-07-12 17:14:02.640219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-07-12 17:14:02.640342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-07-12 17:14:02.640383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-07-12 17:14:02.640497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-07-12 17:14:02.640537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 
00:25:03.106 [2024-07-12 17:14:02.640689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-07-12 17:14:02.640730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-07-12 17:14:02.640872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-07-12 17:14:02.640914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-07-12 17:14:02.641026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-07-12 17:14:02.641067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-07-12 17:14:02.641214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-07-12 17:14:02.641255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-07-12 17:14:02.641405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-07-12 17:14:02.641446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-07-12 17:14:02.641594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-07-12 17:14:02.641635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-07-12 17:14:02.641812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-07-12 17:14:02.641855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-07-12 17:14:02.642012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-07-12 17:14:02.642052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-07-12 17:14:02.642227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-07-12 17:14:02.642268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-07-12 17:14:02.642389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-07-12 17:14:02.642429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 
00:25:03.106 [2024-07-12 17:14:02.642573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-07-12 17:14:02.642614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-07-12 17:14:02.642733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-07-12 17:14:02.642783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-07-12 17:14:02.642910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-07-12 17:14:02.642951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-07-12 17:14:02.643112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-07-12 17:14:02.643152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-07-12 17:14:02.643319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-07-12 17:14:02.643366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-07-12 17:14:02.643494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-07-12 17:14:02.643536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-07-12 17:14:02.643687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-07-12 17:14:02.643728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-07-12 17:14:02.643862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-07-12 17:14:02.643903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-07-12 17:14:02.644021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-07-12 17:14:02.644062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-07-12 17:14:02.644247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-07-12 17:14:02.644288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 
00:25:03.106 [2024-07-12 17:14:02.644474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-07-12 17:14:02.644515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-07-12 17:14:02.644632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-07-12 17:14:02.644673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-07-12 17:14:02.644882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-07-12 17:14:02.644925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-07-12 17:14:02.645117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-07-12 17:14:02.645177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 00:25:03.107 [2024-07-12 17:14:02.645324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-07-12 17:14:02.645365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 00:25:03.107 [2024-07-12 17:14:02.645527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-07-12 17:14:02.645568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 00:25:03.107 [2024-07-12 17:14:02.645709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-07-12 17:14:02.645811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 00:25:03.107 [2024-07-12 17:14:02.645978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-07-12 17:14:02.646040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 00:25:03.107 [2024-07-12 17:14:02.646200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-07-12 17:14:02.646241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 00:25:03.107 [2024-07-12 17:14:02.646373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-07-12 17:14:02.646413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 
00:25:03.107 [2024-07-12 17:14:02.646597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-07-12 17:14:02.646638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 00:25:03.107 [2024-07-12 17:14:02.646797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-07-12 17:14:02.646849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 00:25:03.107 [2024-07-12 17:14:02.647039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-07-12 17:14:02.647081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 00:25:03.107 [2024-07-12 17:14:02.647231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-07-12 17:14:02.647272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 00:25:03.107 [2024-07-12 17:14:02.647441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-07-12 17:14:02.647482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 00:25:03.107 [2024-07-12 17:14:02.647653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-07-12 17:14:02.647694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 00:25:03.107 [2024-07-12 17:14:02.647875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-07-12 17:14:02.647937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 00:25:03.107 [2024-07-12 17:14:02.648059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-07-12 17:14:02.648101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 00:25:03.107 [2024-07-12 17:14:02.648254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-07-12 17:14:02.648322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 00:25:03.107 [2024-07-12 17:14:02.648446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-07-12 17:14:02.648487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 
00:25:03.107 [2024-07-12 17:14:02.648616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-07-12 17:14:02.648657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 00:25:03.107 [2024-07-12 17:14:02.648849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-07-12 17:14:02.648891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 00:25:03.107 [2024-07-12 17:14:02.649045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-07-12 17:14:02.649086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 00:25:03.107 [2024-07-12 17:14:02.649233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-07-12 17:14:02.649274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 00:25:03.107 [2024-07-12 17:14:02.649399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-07-12 17:14:02.649440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 00:25:03.107 [2024-07-12 17:14:02.649610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-07-12 17:14:02.649651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 00:25:03.107 [2024-07-12 17:14:02.649828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-07-12 17:14:02.649870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 00:25:03.107 [2024-07-12 17:14:02.650047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-07-12 17:14:02.650120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 00:25:03.107 [2024-07-12 17:14:02.650283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-07-12 17:14:02.650324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 00:25:03.107 [2024-07-12 17:14:02.650473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-07-12 17:14:02.650514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 
00:25:03.107 [2024-07-12 17:14:02.650693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-07-12 17:14:02.650734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 00:25:03.107 [2024-07-12 17:14:02.650902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-07-12 17:14:02.650972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 00:25:03.107 [2024-07-12 17:14:02.651130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-07-12 17:14:02.651194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 00:25:03.107 [2024-07-12 17:14:02.651346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-07-12 17:14:02.651387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 00:25:03.107 [2024-07-12 17:14:02.651511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-07-12 17:14:02.651559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 00:25:03.107 [2024-07-12 17:14:02.651701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-07-12 17:14:02.651754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 00:25:03.107 [2024-07-12 17:14:02.651934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-07-12 17:14:02.651975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 00:25:03.107 [2024-07-12 17:14:02.652097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-07-12 17:14:02.652139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 00:25:03.107 [2024-07-12 17:14:02.652338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-07-12 17:14:02.652379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 00:25:03.107 [2024-07-12 17:14:02.652537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-07-12 17:14:02.652578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 
00:25:03.107 [2024-07-12 17:14:02.652723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-07-12 17:14:02.652778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 00:25:03.107 [2024-07-12 17:14:02.652928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-07-12 17:14:02.652969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 00:25:03.107 [2024-07-12 17:14:02.653082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-07-12 17:14:02.653123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 00:25:03.107 [2024-07-12 17:14:02.653242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-07-12 17:14:02.653283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 00:25:03.107 [2024-07-12 17:14:02.653452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.108 [2024-07-12 17:14:02.653493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.108 qpair failed and we were unable to recover it. 00:25:03.108 [2024-07-12 17:14:02.653666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.108 [2024-07-12 17:14:02.653707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.108 qpair failed and we were unable to recover it. 00:25:03.108 [2024-07-12 17:14:02.653871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.108 [2024-07-12 17:14:02.653913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.108 qpair failed and we were unable to recover it. 00:25:03.108 [2024-07-12 17:14:02.654064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.108 [2024-07-12 17:14:02.654105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.108 qpair failed and we were unable to recover it. 00:25:03.108 [2024-07-12 17:14:02.654257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.108 [2024-07-12 17:14:02.654299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.108 qpair failed and we were unable to recover it. 00:25:03.108 [2024-07-12 17:14:02.654429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.108 [2024-07-12 17:14:02.654471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.108 qpair failed and we were unable to recover it. 
00:25:03.108 [2024-07-12 17:14:02.654621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.108 [2024-07-12 17:14:02.654662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.108 qpair failed and we were unable to recover it. 00:25:03.108 [2024-07-12 17:14:02.654839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.108 [2024-07-12 17:14:02.654882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.108 qpair failed and we were unable to recover it. 00:25:03.108 [2024-07-12 17:14:02.655059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.108 [2024-07-12 17:14:02.655100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.108 qpair failed and we were unable to recover it. 00:25:03.108 [2024-07-12 17:14:02.655254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.108 [2024-07-12 17:14:02.655295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.108 qpair failed and we were unable to recover it. 00:25:03.108 [2024-07-12 17:14:02.655474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.108 [2024-07-12 17:14:02.655515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.108 qpair failed and we were unable to recover it. 00:25:03.108 [2024-07-12 17:14:02.655663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.108 [2024-07-12 17:14:02.655704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.108 qpair failed and we were unable to recover it. 00:25:03.108 [2024-07-12 17:14:02.655913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.108 [2024-07-12 17:14:02.655955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.108 qpair failed and we were unable to recover it. 00:25:03.108 [2024-07-12 17:14:02.656098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.108 [2024-07-12 17:14:02.656139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.108 qpair failed and we were unable to recover it. 00:25:03.108 [2024-07-12 17:14:02.656285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.108 [2024-07-12 17:14:02.656326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.108 qpair failed and we were unable to recover it. 00:25:03.108 [2024-07-12 17:14:02.656442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.108 [2024-07-12 17:14:02.656484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.108 qpair failed and we were unable to recover it. 
00:25:03.108 [2024-07-12 17:14:02.656632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.108 [2024-07-12 17:14:02.656672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.108 qpair failed and we were unable to recover it. 00:25:03.108 [2024-07-12 17:14:02.656849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.108 [2024-07-12 17:14:02.656892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.108 qpair failed and we were unable to recover it. 00:25:03.108 [2024-07-12 17:14:02.657012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.108 [2024-07-12 17:14:02.657053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.108 qpair failed and we were unable to recover it. 00:25:03.108 [2024-07-12 17:14:02.657209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.108 [2024-07-12 17:14:02.657250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.108 qpair failed and we were unable to recover it. 00:25:03.108 [2024-07-12 17:14:02.657398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.108 [2024-07-12 17:14:02.657440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.108 qpair failed and we were unable to recover it. 00:25:03.108 [2024-07-12 17:14:02.657587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.108 [2024-07-12 17:14:02.657628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.108 qpair failed and we were unable to recover it. 00:25:03.108 [2024-07-12 17:14:02.657763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.108 [2024-07-12 17:14:02.657806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.108 qpair failed and we were unable to recover it. 00:25:03.108 [2024-07-12 17:14:02.657944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.108 [2024-07-12 17:14:02.657985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.108 qpair failed and we were unable to recover it. 00:25:03.108 [2024-07-12 17:14:02.658140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.108 [2024-07-12 17:14:02.658180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.108 qpair failed and we were unable to recover it. 00:25:03.108 [2024-07-12 17:14:02.658306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.108 [2024-07-12 17:14:02.658347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.108 qpair failed and we were unable to recover it. 
00:25:03.108 [2024-07-12 17:14:02.658527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.108 [2024-07-12 17:14:02.658568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.108 qpair failed and we were unable to recover it. 00:25:03.108 [2024-07-12 17:14:02.658772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.108 [2024-07-12 17:14:02.658814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.108 qpair failed and we were unable to recover it. 00:25:03.108 [2024-07-12 17:14:02.658964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.108 [2024-07-12 17:14:02.659005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.108 qpair failed and we were unable to recover it. 00:25:03.108 [2024-07-12 17:14:02.659179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.108 [2024-07-12 17:14:02.659220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.108 qpair failed and we were unable to recover it. 00:25:03.108 [2024-07-12 17:14:02.659346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.108 [2024-07-12 17:14:02.659394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.108 qpair failed and we were unable to recover it. 00:25:03.108 [2024-07-12 17:14:02.659579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.108 [2024-07-12 17:14:02.659620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.108 qpair failed and we were unable to recover it. 00:25:03.108 [2024-07-12 17:14:02.659789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.108 [2024-07-12 17:14:02.659859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.108 qpair failed and we were unable to recover it. 00:25:03.108 [2024-07-12 17:14:02.660026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.108 [2024-07-12 17:14:02.660093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.108 qpair failed and we were unable to recover it. 00:25:03.108 [2024-07-12 17:14:02.660238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.108 [2024-07-12 17:14:02.660279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.108 qpair failed and we were unable to recover it. 00:25:03.108 [2024-07-12 17:14:02.660408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.108 [2024-07-12 17:14:02.660450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.108 qpair failed and we were unable to recover it. 
00:25:03.108 [2024-07-12 17:14:02.660628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.108 [2024-07-12 17:14:02.660670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.108 qpair failed and we were unable to recover it. 00:25:03.108 [2024-07-12 17:14:02.660804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.108 [2024-07-12 17:14:02.660846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.108 qpair failed and we were unable to recover it. 00:25:03.108 [2024-07-12 17:14:02.661015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.108 [2024-07-12 17:14:02.661075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.108 qpair failed and we were unable to recover it. 00:25:03.108 [2024-07-12 17:14:02.661209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.108 [2024-07-12 17:14:02.661251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.108 qpair failed and we were unable to recover it. 00:25:03.108 [2024-07-12 17:14:02.661398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.108 [2024-07-12 17:14:02.661438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.109 qpair failed and we were unable to recover it. 00:25:03.109 [2024-07-12 17:14:02.661597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.109 [2024-07-12 17:14:02.661639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.109 qpair failed and we were unable to recover it. 00:25:03.109 [2024-07-12 17:14:02.661789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.109 [2024-07-12 17:14:02.661831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.109 qpair failed and we were unable to recover it. 00:25:03.109 [2024-07-12 17:14:02.661944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.109 [2024-07-12 17:14:02.661984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.109 qpair failed and we were unable to recover it. 00:25:03.109 [2024-07-12 17:14:02.662142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.109 [2024-07-12 17:14:02.662183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.109 qpair failed and we were unable to recover it. 00:25:03.109 [2024-07-12 17:14:02.662308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.109 [2024-07-12 17:14:02.662349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.109 qpair failed and we were unable to recover it. 
00:25:03.109 [2024-07-12 17:14:02.662493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.109 [2024-07-12 17:14:02.662534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.109 qpair failed and we were unable to recover it. 00:25:03.109 [2024-07-12 17:14:02.662678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.109 [2024-07-12 17:14:02.662719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.109 qpair failed and we were unable to recover it. 00:25:03.109 [2024-07-12 17:14:02.662915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.109 [2024-07-12 17:14:02.662956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.109 qpair failed and we were unable to recover it. 00:25:03.109 [2024-07-12 17:14:02.663073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.109 [2024-07-12 17:14:02.663115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.109 qpair failed and we were unable to recover it. 00:25:03.109 [2024-07-12 17:14:02.663258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.109 [2024-07-12 17:14:02.663299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.109 qpair failed and we were unable to recover it. 00:25:03.109 [2024-07-12 17:14:02.663444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.109 [2024-07-12 17:14:02.663485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.109 qpair failed and we were unable to recover it. 00:25:03.109 [2024-07-12 17:14:02.663605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.109 [2024-07-12 17:14:02.663646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.109 qpair failed and we were unable to recover it. 00:25:03.109 [2024-07-12 17:14:02.663859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.109 [2024-07-12 17:14:02.663902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.109 qpair failed and we were unable to recover it. 00:25:03.109 [2024-07-12 17:14:02.664077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.109 [2024-07-12 17:14:02.664118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.109 qpair failed and we were unable to recover it. 00:25:03.109 [2024-07-12 17:14:02.664262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.109 [2024-07-12 17:14:02.664303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.109 qpair failed and we were unable to recover it. 
00:25:03.109 [2024-07-12 17:14:02.664479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.109 [2024-07-12 17:14:02.664520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.109 qpair failed and we were unable to recover it. 00:25:03.109 [2024-07-12 17:14:02.664640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.109 [2024-07-12 17:14:02.664681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.109 qpair failed and we were unable to recover it. 00:25:03.109 [2024-07-12 17:14:02.664880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.109 [2024-07-12 17:14:02.664942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.109 qpair failed and we were unable to recover it. 00:25:03.109 [2024-07-12 17:14:02.665102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.109 [2024-07-12 17:14:02.665170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.109 qpair failed and we were unable to recover it. 00:25:03.109 [2024-07-12 17:14:02.665297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.109 [2024-07-12 17:14:02.665338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.109 qpair failed and we were unable to recover it. 00:25:03.109 [2024-07-12 17:14:02.665504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.109 [2024-07-12 17:14:02.665545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.109 qpair failed and we were unable to recover it. 00:25:03.109 [2024-07-12 17:14:02.665664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.109 [2024-07-12 17:14:02.665704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.109 qpair failed and we were unable to recover it. 00:25:03.109 [2024-07-12 17:14:02.665837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.109 [2024-07-12 17:14:02.665879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.109 qpair failed and we were unable to recover it. 00:25:03.109 [2024-07-12 17:14:02.666034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.109 [2024-07-12 17:14:02.666075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.109 qpair failed and we were unable to recover it. 00:25:03.109 [2024-07-12 17:14:02.666227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.109 [2024-07-12 17:14:02.666268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.109 qpair failed and we were unable to recover it. 
00:25:03.109 [2024-07-12 17:14:02.666419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.109 [2024-07-12 17:14:02.666460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.109 qpair failed and we were unable to recover it. 00:25:03.109 [2024-07-12 17:14:02.666619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.109 [2024-07-12 17:14:02.666660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.109 qpair failed and we were unable to recover it. 00:25:03.109 [2024-07-12 17:14:02.666813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.109 [2024-07-12 17:14:02.666855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.109 qpair failed and we were unable to recover it. 00:25:03.109 [2024-07-12 17:14:02.667029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.109 [2024-07-12 17:14:02.667071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.109 qpair failed and we were unable to recover it. 00:25:03.109 [2024-07-12 17:14:02.667198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.109 [2024-07-12 17:14:02.667245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.109 qpair failed and we were unable to recover it. 00:25:03.109 [2024-07-12 17:14:02.667413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.109 [2024-07-12 17:14:02.667462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.109 qpair failed and we were unable to recover it. 00:25:03.109 [2024-07-12 17:14:02.667608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.109 [2024-07-12 17:14:02.667650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.109 qpair failed and we were unable to recover it. 00:25:03.109 [2024-07-12 17:14:02.667805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.109 [2024-07-12 17:14:02.667848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.109 qpair failed and we were unable to recover it. 00:25:03.109 [2024-07-12 17:14:02.668034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.109 [2024-07-12 17:14:02.668075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.109 qpair failed and we were unable to recover it. 00:25:03.109 [2024-07-12 17:14:02.668222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.109 [2024-07-12 17:14:02.668264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.109 qpair failed and we were unable to recover it. 
00:25:03.109 [2024-07-12 17:14:02.668414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.109 [2024-07-12 17:14:02.668456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.109 qpair failed and we were unable to recover it. 00:25:03.109 [2024-07-12 17:14:02.668638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.109 [2024-07-12 17:14:02.668679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.109 qpair failed and we were unable to recover it. 00:25:03.109 [2024-07-12 17:14:02.668827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.109 [2024-07-12 17:14:02.668897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.109 qpair failed and we were unable to recover it. 00:25:03.109 [2024-07-12 17:14:02.669041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.109 [2024-07-12 17:14:02.669082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.109 qpair failed and we were unable to recover it. 00:25:03.109 [2024-07-12 17:14:02.669198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.109 [2024-07-12 17:14:02.669240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.109 qpair failed and we were unable to recover it. 00:25:03.109 [2024-07-12 17:14:02.669390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.110 [2024-07-12 17:14:02.669430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.110 qpair failed and we were unable to recover it. 00:25:03.110 [2024-07-12 17:14:02.669602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.110 [2024-07-12 17:14:02.669643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.110 qpair failed and we were unable to recover it. 00:25:03.110 [2024-07-12 17:14:02.669781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.110 [2024-07-12 17:14:02.669823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.110 qpair failed and we were unable to recover it. 00:25:03.110 [2024-07-12 17:14:02.670006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.110 [2024-07-12 17:14:02.670047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.110 qpair failed and we were unable to recover it. 00:25:03.110 [2024-07-12 17:14:02.670196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.110 [2024-07-12 17:14:02.670237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.110 qpair failed and we were unable to recover it. 
00:25:03.110 [2024-07-12 17:14:02.670383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.110 [2024-07-12 17:14:02.670424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.110 qpair failed and we were unable to recover it. 00:25:03.110 [2024-07-12 17:14:02.670602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.110 [2024-07-12 17:14:02.670644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.110 qpair failed and we were unable to recover it. 00:25:03.110 [2024-07-12 17:14:02.670833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.110 [2024-07-12 17:14:02.670900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.110 qpair failed and we were unable to recover it. 00:25:03.110 [2024-07-12 17:14:02.671038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.110 [2024-07-12 17:14:02.671111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.110 qpair failed and we were unable to recover it. 00:25:03.110 [2024-07-12 17:14:02.671285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.110 [2024-07-12 17:14:02.671326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.110 qpair failed and we were unable to recover it. 00:25:03.110 [2024-07-12 17:14:02.671442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.110 [2024-07-12 17:14:02.671483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.110 qpair failed and we were unable to recover it. 00:25:03.110 [2024-07-12 17:14:02.671720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.110 [2024-07-12 17:14:02.671774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.110 qpair failed and we were unable to recover it. 00:25:03.110 [2024-07-12 17:14:02.671910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.110 [2024-07-12 17:14:02.671978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.110 qpair failed and we were unable to recover it. 00:25:03.110 [2024-07-12 17:14:02.672121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.110 [2024-07-12 17:14:02.672184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.110 qpair failed and we were unable to recover it. 00:25:03.110 [2024-07-12 17:14:02.672334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.110 [2024-07-12 17:14:02.672375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.110 qpair failed and we were unable to recover it. 
00:25:03.110 [2024-07-12 17:14:02.672523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.110 [2024-07-12 17:14:02.672564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.110 qpair failed and we were unable to recover it. 00:25:03.110 [2024-07-12 17:14:02.672684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.110 [2024-07-12 17:14:02.672725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.110 qpair failed and we were unable to recover it. 00:25:03.110 [2024-07-12 17:14:02.672884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.110 [2024-07-12 17:14:02.672925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.110 qpair failed and we were unable to recover it. 00:25:03.110 [2024-07-12 17:14:02.673126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.110 [2024-07-12 17:14:02.673167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.110 qpair failed and we were unable to recover it. 00:25:03.110 [2024-07-12 17:14:02.673320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.110 [2024-07-12 17:14:02.673361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.110 qpair failed and we were unable to recover it. 00:25:03.110 [2024-07-12 17:14:02.673518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.110 [2024-07-12 17:14:02.673560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.110 qpair failed and we were unable to recover it. 00:25:03.110 [2024-07-12 17:14:02.673718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.110 [2024-07-12 17:14:02.673772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.110 qpair failed and we were unable to recover it. 00:25:03.110 [2024-07-12 17:14:02.673930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.110 [2024-07-12 17:14:02.673971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.110 qpair failed and we were unable to recover it. 00:25:03.110 [2024-07-12 17:14:02.674110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.110 [2024-07-12 17:14:02.674151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.110 qpair failed and we were unable to recover it. 00:25:03.110 [2024-07-12 17:14:02.674301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.110 [2024-07-12 17:14:02.674343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.110 qpair failed and we were unable to recover it. 
00:25:03.110 [2024-07-12 17:14:02.674468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.110 [2024-07-12 17:14:02.674509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.110 qpair failed and we were unable to recover it. 00:25:03.110 [2024-07-12 17:14:02.674655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.110 [2024-07-12 17:14:02.674696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.110 qpair failed and we were unable to recover it. 00:25:03.110 [2024-07-12 17:14:02.674898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.110 [2024-07-12 17:14:02.674940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.110 qpair failed and we were unable to recover it. 00:25:03.110 [2024-07-12 17:14:02.675087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.110 [2024-07-12 17:14:02.675129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.110 qpair failed and we were unable to recover it. 00:25:03.110 [2024-07-12 17:14:02.675307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.110 [2024-07-12 17:14:02.675354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.110 qpair failed and we were unable to recover it. 00:25:03.110 [2024-07-12 17:14:02.675503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.110 [2024-07-12 17:14:02.675544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.110 qpair failed and we were unable to recover it. 00:25:03.110 [2024-07-12 17:14:02.675694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.110 [2024-07-12 17:14:02.675734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.110 qpair failed and we were unable to recover it. 00:25:03.110 [2024-07-12 17:14:02.675873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.110 [2024-07-12 17:14:02.675915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.110 qpair failed and we were unable to recover it. 00:25:03.110 [2024-07-12 17:14:02.676066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.110 [2024-07-12 17:14:02.676107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.110 qpair failed and we were unable to recover it. 00:25:03.110 [2024-07-12 17:14:02.676235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.110 [2024-07-12 17:14:02.676276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.110 qpair failed and we were unable to recover it. 
00:25:03.110 [2024-07-12 17:14:02.676510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.110 [2024-07-12 17:14:02.676550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.111 qpair failed and we were unable to recover it. 00:25:03.111 [2024-07-12 17:14:02.676699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.111 [2024-07-12 17:14:02.676753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.111 qpair failed and we were unable to recover it. 00:25:03.111 [2024-07-12 17:14:02.676938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.111 [2024-07-12 17:14:02.676980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.111 qpair failed and we were unable to recover it. 00:25:03.111 [2024-07-12 17:14:02.677141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.111 [2024-07-12 17:14:02.677181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.111 qpair failed and we were unable to recover it. 00:25:03.111 [2024-07-12 17:14:02.677357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.111 [2024-07-12 17:14:02.677398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.111 qpair failed and we were unable to recover it. 00:25:03.111 [2024-07-12 17:14:02.677546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.111 [2024-07-12 17:14:02.677587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.111 qpair failed and we were unable to recover it. 00:25:03.111 [2024-07-12 17:14:02.677747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.111 [2024-07-12 17:14:02.677789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.111 qpair failed and we were unable to recover it. 00:25:03.111 [2024-07-12 17:14:02.677940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.111 [2024-07-12 17:14:02.677981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.111 qpair failed and we were unable to recover it. 00:25:03.111 [2024-07-12 17:14:02.678182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.111 [2024-07-12 17:14:02.678223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.111 qpair failed and we were unable to recover it. 00:25:03.111 [2024-07-12 17:14:02.678376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.111 [2024-07-12 17:14:02.678416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.111 qpair failed and we were unable to recover it. 
00:25:03.111 [2024-07-12 17:14:02.678566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.111 [2024-07-12 17:14:02.678606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.111 qpair failed and we were unable to recover it. 00:25:03.111 [2024-07-12 17:14:02.678787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.111 [2024-07-12 17:14:02.678830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.111 qpair failed and we were unable to recover it. 00:25:03.111 [2024-07-12 17:14:02.678986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.111 [2024-07-12 17:14:02.679027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.111 qpair failed and we were unable to recover it. 00:25:03.111 [2024-07-12 17:14:02.679158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.111 [2024-07-12 17:14:02.679208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.111 qpair failed and we were unable to recover it. 00:25:03.111 [2024-07-12 17:14:02.679371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.111 [2024-07-12 17:14:02.679412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.111 qpair failed and we were unable to recover it. 00:25:03.111 [2024-07-12 17:14:02.679560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.111 [2024-07-12 17:14:02.679602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.111 qpair failed and we were unable to recover it. 00:25:03.111 [2024-07-12 17:14:02.679722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.111 [2024-07-12 17:14:02.679774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.111 qpair failed and we were unable to recover it. 00:25:03.111 [2024-07-12 17:14:02.679933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.111 [2024-07-12 17:14:02.679974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.111 qpair failed and we were unable to recover it. 00:25:03.111 [2024-07-12 17:14:02.680096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.111 [2024-07-12 17:14:02.680137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.111 qpair failed and we were unable to recover it. 00:25:03.111 [2024-07-12 17:14:02.680315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.111 [2024-07-12 17:14:02.680356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.111 qpair failed and we were unable to recover it. 
00:25:03.111 [2024-07-12 17:14:02.680508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.111 [2024-07-12 17:14:02.680549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.111 qpair failed and we were unable to recover it. 00:25:03.111 [2024-07-12 17:14:02.680706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.111 [2024-07-12 17:14:02.680755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.111 qpair failed and we were unable to recover it. 00:25:03.111 [2024-07-12 17:14:02.680878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.111 [2024-07-12 17:14:02.680920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.111 qpair failed and we were unable to recover it. 00:25:03.111 [2024-07-12 17:14:02.681100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.111 [2024-07-12 17:14:02.681141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.111 qpair failed and we were unable to recover it. 00:25:03.111 [2024-07-12 17:14:02.681314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.111 [2024-07-12 17:14:02.681355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.111 qpair failed and we were unable to recover it. 00:25:03.111 [2024-07-12 17:14:02.681503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.111 [2024-07-12 17:14:02.681544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.111 qpair failed and we were unable to recover it. 00:25:03.111 [2024-07-12 17:14:02.681715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.111 [2024-07-12 17:14:02.681766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.111 qpair failed and we were unable to recover it. 00:25:03.111 [2024-07-12 17:14:02.681911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.111 [2024-07-12 17:14:02.681952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.111 qpair failed and we were unable to recover it. 00:25:03.111 [2024-07-12 17:14:02.682102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.111 [2024-07-12 17:14:02.682143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.111 qpair failed and we were unable to recover it. 00:25:03.111 [2024-07-12 17:14:02.682318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.111 [2024-07-12 17:14:02.682360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.111 qpair failed and we were unable to recover it. 
00:25:03.111 [2024-07-12 17:14:02.682507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.111 [2024-07-12 17:14:02.682547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.111 qpair failed and we were unable to recover it. 00:25:03.111 [2024-07-12 17:14:02.682698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.111 [2024-07-12 17:14:02.682751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.111 qpair failed and we were unable to recover it. 00:25:03.111 [2024-07-12 17:14:02.682884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.111 [2024-07-12 17:14:02.682925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.111 qpair failed and we were unable to recover it. 00:25:03.111 [2024-07-12 17:14:02.683103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.111 [2024-07-12 17:14:02.683143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.111 qpair failed and we were unable to recover it. 00:25:03.111 [2024-07-12 17:14:02.683292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.111 [2024-07-12 17:14:02.683339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.111 qpair failed and we were unable to recover it. 00:25:03.111 [2024-07-12 17:14:02.683486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.111 [2024-07-12 17:14:02.683527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.111 qpair failed and we were unable to recover it. 00:25:03.111 [2024-07-12 17:14:02.683700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.111 [2024-07-12 17:14:02.683752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.111 qpair failed and we were unable to recover it. 00:25:03.111 [2024-07-12 17:14:02.683933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.111 [2024-07-12 17:14:02.683974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.111 qpair failed and we were unable to recover it. 00:25:03.111 [2024-07-12 17:14:02.684124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.111 [2024-07-12 17:14:02.684165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.111 qpair failed and we were unable to recover it. 00:25:03.111 [2024-07-12 17:14:02.684340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.111 [2024-07-12 17:14:02.684381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.111 qpair failed and we were unable to recover it. 
00:25:03.111 [2024-07-12 17:14:02.684559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.111 [2024-07-12 17:14:02.684600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.111 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix.c:1038:posix_sock_create: connect() failed, errno = 111 -> nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats back-to-back against the same tqpair, address, and port through log timestamps 17:14:02.684-17:14:02.733 (elapsed 00:25:03.111-00:25:03.406) ...]
00:25:03.406 [2024-07-12 17:14:02.733542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-07-12 17:14:02.733584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 00:25:03.406 [2024-07-12 17:14:02.733749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-07-12 17:14:02.733790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 00:25:03.406 [2024-07-12 17:14:02.733933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-07-12 17:14:02.733998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 00:25:03.406 [2024-07-12 17:14:02.734225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-07-12 17:14:02.734285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 00:25:03.406 [2024-07-12 17:14:02.734468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-07-12 17:14:02.734509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 00:25:03.406 [2024-07-12 17:14:02.734791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-07-12 17:14:02.734834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 00:25:03.406 [2024-07-12 17:14:02.735009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-07-12 17:14:02.735078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 00:25:03.406 [2024-07-12 17:14:02.735263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-07-12 17:14:02.735322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 00:25:03.406 [2024-07-12 17:14:02.735477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-07-12 17:14:02.735518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 00:25:03.406 [2024-07-12 17:14:02.735690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-07-12 17:14:02.735731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 
00:25:03.406 [2024-07-12 17:14:02.735914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-07-12 17:14:02.735975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 00:25:03.406 [2024-07-12 17:14:02.736143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-07-12 17:14:02.736211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 00:25:03.406 [2024-07-12 17:14:02.736373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-07-12 17:14:02.736439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 00:25:03.406 [2024-07-12 17:14:02.736644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-07-12 17:14:02.736685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 00:25:03.406 [2024-07-12 17:14:02.736901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-07-12 17:14:02.736955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 00:25:03.406 [2024-07-12 17:14:02.737110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-07-12 17:14:02.737151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 00:25:03.406 [2024-07-12 17:14:02.737310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-07-12 17:14:02.737351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 00:25:03.406 [2024-07-12 17:14:02.737572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-07-12 17:14:02.737613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 00:25:03.406 [2024-07-12 17:14:02.737792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-07-12 17:14:02.737862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 00:25:03.406 [2024-07-12 17:14:02.738085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-07-12 17:14:02.738148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 
00:25:03.406 [2024-07-12 17:14:02.738382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-07-12 17:14:02.738442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.406 qpair failed and we were unable to recover it. 00:25:03.406 [2024-07-12 17:14:02.738609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.406 [2024-07-12 17:14:02.738650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-07-12 17:14:02.738841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-07-12 17:14:02.738913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-07-12 17:14:02.739131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-07-12 17:14:02.739172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-07-12 17:14:02.739360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-07-12 17:14:02.739419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-07-12 17:14:02.739604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-07-12 17:14:02.739647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-07-12 17:14:02.739842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-07-12 17:14:02.739914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-07-12 17:14:02.740086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-07-12 17:14:02.740148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-07-12 17:14:02.740317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-07-12 17:14:02.740379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-07-12 17:14:02.740531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-07-12 17:14:02.740572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 
00:25:03.407 [2024-07-12 17:14:02.740856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-07-12 17:14:02.740917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-07-12 17:14:02.741227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-07-12 17:14:02.741295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-07-12 17:14:02.741510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-07-12 17:14:02.741551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-07-12 17:14:02.741760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-07-12 17:14:02.741802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-07-12 17:14:02.741953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-07-12 17:14:02.741994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-07-12 17:14:02.742180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-07-12 17:14:02.742220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-07-12 17:14:02.742414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-07-12 17:14:02.742474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-07-12 17:14:02.742608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-07-12 17:14:02.742649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-07-12 17:14:02.742946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-07-12 17:14:02.743007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-07-12 17:14:02.743318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-07-12 17:14:02.743379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 
00:25:03.407 [2024-07-12 17:14:02.743555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-07-12 17:14:02.743608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-07-12 17:14:02.743770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-07-12 17:14:02.743823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-07-12 17:14:02.744013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-07-12 17:14:02.744078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-07-12 17:14:02.744271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-07-12 17:14:02.744332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-07-12 17:14:02.744472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-07-12 17:14:02.744523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-07-12 17:14:02.744755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-07-12 17:14:02.744802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-07-12 17:14:02.744967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-07-12 17:14:02.745027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-07-12 17:14:02.745219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-07-12 17:14:02.745281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-07-12 17:14:02.745496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-07-12 17:14:02.745557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-07-12 17:14:02.745766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-07-12 17:14:02.745808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 
00:25:03.407 [2024-07-12 17:14:02.745966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-07-12 17:14:02.746029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-07-12 17:14:02.746207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-07-12 17:14:02.746259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-07-12 17:14:02.746456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-07-12 17:14:02.746497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-07-12 17:14:02.746682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-07-12 17:14:02.746724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-07-12 17:14:02.746900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-07-12 17:14:02.746968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-07-12 17:14:02.747138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-07-12 17:14:02.747197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-07-12 17:14:02.747379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-07-12 17:14:02.747424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-07-12 17:14:02.747583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-07-12 17:14:02.747624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-07-12 17:14:02.747881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-07-12 17:14:02.747943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.407 [2024-07-12 17:14:02.748176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-07-12 17:14:02.748237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 
00:25:03.407 [2024-07-12 17:14:02.748432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.407 [2024-07-12 17:14:02.748497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.407 qpair failed and we were unable to recover it. 00:25:03.408 [2024-07-12 17:14:02.748702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-07-12 17:14:02.748753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 00:25:03.408 [2024-07-12 17:14:02.748931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-07-12 17:14:02.748992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 00:25:03.408 [2024-07-12 17:14:02.749215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-07-12 17:14:02.749277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 00:25:03.408 [2024-07-12 17:14:02.749450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-07-12 17:14:02.749518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 00:25:03.408 [2024-07-12 17:14:02.749818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-07-12 17:14:02.749882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 00:25:03.408 [2024-07-12 17:14:02.750066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-07-12 17:14:02.750126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 00:25:03.408 [2024-07-12 17:14:02.750300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-07-12 17:14:02.750341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 00:25:03.408 [2024-07-12 17:14:02.750548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-07-12 17:14:02.750590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 00:25:03.408 [2024-07-12 17:14:02.750812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-07-12 17:14:02.750865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 
00:25:03.408 [2024-07-12 17:14:02.751015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-07-12 17:14:02.751057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 00:25:03.408 [2024-07-12 17:14:02.751266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-07-12 17:14:02.751317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 00:25:03.408 [2024-07-12 17:14:02.751497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-07-12 17:14:02.751549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 00:25:03.408 [2024-07-12 17:14:02.751760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-07-12 17:14:02.751811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 00:25:03.408 [2024-07-12 17:14:02.751950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-07-12 17:14:02.751992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 00:25:03.408 [2024-07-12 17:14:02.752204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-07-12 17:14:02.752244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 00:25:03.408 [2024-07-12 17:14:02.752399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-07-12 17:14:02.752440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 00:25:03.408 [2024-07-12 17:14:02.752660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-07-12 17:14:02.752713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 00:25:03.408 [2024-07-12 17:14:02.752919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-07-12 17:14:02.752961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 00:25:03.408 [2024-07-12 17:14:02.753176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-07-12 17:14:02.753217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 
00:25:03.408 [2024-07-12 17:14:02.753333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-07-12 17:14:02.753375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 00:25:03.408 [2024-07-12 17:14:02.753532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-07-12 17:14:02.753573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 00:25:03.408 [2024-07-12 17:14:02.753762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-07-12 17:14:02.753815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 00:25:03.408 [2024-07-12 17:14:02.753959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-07-12 17:14:02.754023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 00:25:03.408 [2024-07-12 17:14:02.754204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-07-12 17:14:02.754267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 00:25:03.408 [2024-07-12 17:14:02.754472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-07-12 17:14:02.754513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 00:25:03.408 [2024-07-12 17:14:02.754646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-07-12 17:14:02.754687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 00:25:03.408 [2024-07-12 17:14:02.754940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-07-12 17:14:02.754999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 00:25:03.408 [2024-07-12 17:14:02.755182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-07-12 17:14:02.755242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 00:25:03.408 [2024-07-12 17:14:02.755450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-07-12 17:14:02.755491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 
00:25:03.408 [2024-07-12 17:14:02.755671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-07-12 17:14:02.755722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 00:25:03.408 [2024-07-12 17:14:02.755876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-07-12 17:14:02.755965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 00:25:03.408 [2024-07-12 17:14:02.756130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-07-12 17:14:02.756191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 00:25:03.408 [2024-07-12 17:14:02.756409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-07-12 17:14:02.756470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 00:25:03.408 [2024-07-12 17:14:02.756625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-07-12 17:14:02.756676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 00:25:03.408 [2024-07-12 17:14:02.756875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-07-12 17:14:02.756935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 00:25:03.408 [2024-07-12 17:14:02.757090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-07-12 17:14:02.757132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 00:25:03.408 [2024-07-12 17:14:02.757349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-07-12 17:14:02.757390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 00:25:03.408 [2024-07-12 17:14:02.757553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-07-12 17:14:02.757594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 00:25:03.408 [2024-07-12 17:14:02.757769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-07-12 17:14:02.757823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.408 qpair failed and we were unable to recover it. 
00:25:03.408 [2024-07-12 17:14:02.758000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.408 [2024-07-12 17:14:02.758042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.409 qpair failed and we were unable to recover it. 00:25:03.409 [2024-07-12 17:14:02.758199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.409 [2024-07-12 17:14:02.758240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.409 qpair failed and we were unable to recover it. 00:25:03.409 [2024-07-12 17:14:02.758395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.409 [2024-07-12 17:14:02.758436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.409 qpair failed and we were unable to recover it. 00:25:03.409 [2024-07-12 17:14:02.758622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.409 [2024-07-12 17:14:02.758664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.409 qpair failed and we were unable to recover it. 00:25:03.409 [2024-07-12 17:14:02.758813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.409 [2024-07-12 17:14:02.758861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.409 qpair failed and we were unable to recover it. 00:25:03.409 [2024-07-12 17:14:02.759085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.409 [2024-07-12 17:14:02.759127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.409 qpair failed and we were unable to recover it. 00:25:03.409 [2024-07-12 17:14:02.759284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.409 [2024-07-12 17:14:02.759325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.409 qpair failed and we were unable to recover it. 00:25:03.409 [2024-07-12 17:14:02.759459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.409 [2024-07-12 17:14:02.759500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.409 qpair failed and we were unable to recover it. 00:25:03.409 [2024-07-12 17:14:02.759641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.409 [2024-07-12 17:14:02.759692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.409 qpair failed and we were unable to recover it. 00:25:03.409 [2024-07-12 17:14:02.759867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.409 [2024-07-12 17:14:02.759922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.409 qpair failed and we were unable to recover it. 
00:25:03.409 [2024-07-12 17:14:02.760061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.409 [2024-07-12 17:14:02.760102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.409 qpair failed and we were unable to recover it. 00:25:03.409 [2024-07-12 17:14:02.760306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.409 [2024-07-12 17:14:02.760357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.409 qpair failed and we were unable to recover it. 00:25:03.409 [2024-07-12 17:14:02.760540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.409 [2024-07-12 17:14:02.760581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.409 qpair failed and we were unable to recover it. 00:25:03.409 [2024-07-12 17:14:02.760749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.409 [2024-07-12 17:14:02.760801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.409 qpair failed and we were unable to recover it. 00:25:03.409 [2024-07-12 17:14:02.760971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.409 [2024-07-12 17:14:02.761024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.409 qpair failed and we were unable to recover it. 00:25:03.409 [2024-07-12 17:14:02.761228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.409 [2024-07-12 17:14:02.761269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.409 qpair failed and we were unable to recover it. 00:25:03.409 [2024-07-12 17:14:02.761408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.409 [2024-07-12 17:14:02.761460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.409 qpair failed and we were unable to recover it. 00:25:03.409 [2024-07-12 17:14:02.761626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.409 [2024-07-12 17:14:02.761667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.409 qpair failed and we were unable to recover it. 00:25:03.409 [2024-07-12 17:14:02.761893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.409 [2024-07-12 17:14:02.761936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.409 qpair failed and we were unable to recover it. 00:25:03.409 [2024-07-12 17:14:02.762091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.409 [2024-07-12 17:14:02.762155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.409 qpair failed and we were unable to recover it. 
00:25:03.409 [2024-07-12 17:14:02.762309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.409 [2024-07-12 17:14:02.762361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.409 qpair failed and we were unable to recover it. 00:25:03.409 [2024-07-12 17:14:02.762519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.409 [2024-07-12 17:14:02.762560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.409 qpair failed and we were unable to recover it. 00:25:03.409 [2024-07-12 17:14:02.762776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.409 [2024-07-12 17:14:02.762818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.409 qpair failed and we were unable to recover it. 00:25:03.409 [2024-07-12 17:14:02.763015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.409 [2024-07-12 17:14:02.763065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.409 qpair failed and we were unable to recover it. 00:25:03.409 [2024-07-12 17:14:02.763281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.409 [2024-07-12 17:14:02.763334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.409 qpair failed and we were unable to recover it. 00:25:03.409 [2024-07-12 17:14:02.763502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.409 [2024-07-12 17:14:02.763543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.409 qpair failed and we were unable to recover it. 00:25:03.409 [2024-07-12 17:14:02.763698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.409 [2024-07-12 17:14:02.763749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.409 qpair failed and we were unable to recover it. 00:25:03.409 [2024-07-12 17:14:02.763972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.409 [2024-07-12 17:14:02.764034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.409 qpair failed and we were unable to recover it. 00:25:03.409 [2024-07-12 17:14:02.764197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.409 [2024-07-12 17:14:02.764267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.409 qpair failed and we were unable to recover it. 00:25:03.409 [2024-07-12 17:14:02.764428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.409 [2024-07-12 17:14:02.764469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.409 qpair failed and we were unable to recover it. 
00:25:03.409 [2024-07-12 17:14:02.764644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.409 [2024-07-12 17:14:02.764685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.409 qpair failed and we were unable to recover it. 00:25:03.409 [2024-07-12 17:14:02.764871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.409 [2024-07-12 17:14:02.764930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.409 qpair failed and we were unable to recover it. 00:25:03.409 [2024-07-12 17:14:02.765074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.409 [2024-07-12 17:14:02.765126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.409 qpair failed and we were unable to recover it. 00:25:03.409 [2024-07-12 17:14:02.765289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.409 [2024-07-12 17:14:02.765330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.410 qpair failed and we were unable to recover it. 00:25:03.410 [2024-07-12 17:14:02.765487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.410 [2024-07-12 17:14:02.765538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.410 qpair failed and we were unable to recover it. 00:25:03.410 [2024-07-12 17:14:02.765715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.410 [2024-07-12 17:14:02.765767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.410 qpair failed and we were unable to recover it. 00:25:03.410 [2024-07-12 17:14:02.765926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.410 [2024-07-12 17:14:02.765971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.410 qpair failed and we were unable to recover it. 00:25:03.410 [2024-07-12 17:14:02.766148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.410 [2024-07-12 17:14:02.766190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.410 qpair failed and we were unable to recover it. 00:25:03.410 [2024-07-12 17:14:02.766342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.410 [2024-07-12 17:14:02.766383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.410 qpair failed and we were unable to recover it. 00:25:03.410 [2024-07-12 17:14:02.766627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.410 [2024-07-12 17:14:02.766668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.410 qpair failed and we were unable to recover it. 
00:25:03.410 [2024-07-12 17:14:02.766855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.410 [2024-07-12 17:14:02.766903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.410 qpair failed and we were unable to recover it. 00:25:03.410 [2024-07-12 17:14:02.767123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.410 [2024-07-12 17:14:02.767175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.410 qpair failed and we were unable to recover it. 00:25:03.410 [2024-07-12 17:14:02.767299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.410 [2024-07-12 17:14:02.767340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.410 qpair failed and we were unable to recover it. 00:25:03.410 [2024-07-12 17:14:02.767459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.410 [2024-07-12 17:14:02.767500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.410 qpair failed and we were unable to recover it. 00:25:03.410 [2024-07-12 17:14:02.767724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.410 [2024-07-12 17:14:02.767787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.410 qpair failed and we were unable to recover it. 00:25:03.410 [2024-07-12 17:14:02.767963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.410 [2024-07-12 17:14:02.768023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.410 qpair failed and we were unable to recover it. 00:25:03.410 [2024-07-12 17:14:02.768197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.410 [2024-07-12 17:14:02.768262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.410 qpair failed and we were unable to recover it. 00:25:03.410 [2024-07-12 17:14:02.768398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.410 [2024-07-12 17:14:02.768465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.410 qpair failed and we were unable to recover it. 00:25:03.410 [2024-07-12 17:14:02.768617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.410 [2024-07-12 17:14:02.768658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.410 qpair failed and we were unable to recover it. 00:25:03.410 [2024-07-12 17:14:02.768871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.410 [2024-07-12 17:14:02.768942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.410 qpair failed and we were unable to recover it. 
00:25:03.410 [2024-07-12 17:14:02.769172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.410 [2024-07-12 17:14:02.769234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.410 qpair failed and we were unable to recover it. 00:25:03.410 [2024-07-12 17:14:02.769499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.410 [2024-07-12 17:14:02.769559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.410 qpair failed and we were unable to recover it. 00:25:03.410 [2024-07-12 17:14:02.769712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.410 [2024-07-12 17:14:02.769775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.410 qpair failed and we were unable to recover it. 00:25:03.410 [2024-07-12 17:14:02.769927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.410 [2024-07-12 17:14:02.770001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.410 qpair failed and we were unable to recover it. 00:25:03.410 [2024-07-12 17:14:02.770171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.410 [2024-07-12 17:14:02.770231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.410 qpair failed and we were unable to recover it. 00:25:03.410 [2024-07-12 17:14:02.770371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.410 [2024-07-12 17:14:02.770438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.410 qpair failed and we were unable to recover it. 00:25:03.410 [2024-07-12 17:14:02.770633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.410 [2024-07-12 17:14:02.770674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.410 qpair failed and we were unable to recover it. 00:25:03.410 [2024-07-12 17:14:02.770820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.410 [2024-07-12 17:14:02.770887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.410 qpair failed and we were unable to recover it. 00:25:03.410 [2024-07-12 17:14:02.771036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.410 [2024-07-12 17:14:02.771113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.410 qpair failed and we were unable to recover it. 00:25:03.410 [2024-07-12 17:14:02.771298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.410 [2024-07-12 17:14:02.771340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.410 qpair failed and we were unable to recover it. 
00:25:03.410 [2024-07-12 17:14:02.771495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.410 [2024-07-12 17:14:02.771536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.410 qpair failed and we were unable to recover it. 00:25:03.410 [2024-07-12 17:14:02.771687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.410 [2024-07-12 17:14:02.771728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.410 qpair failed and we were unable to recover it. 00:25:03.410 [2024-07-12 17:14:02.771894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.410 [2024-07-12 17:14:02.771944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.410 qpair failed and we were unable to recover it. 00:25:03.410 [2024-07-12 17:14:02.772129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.410 [2024-07-12 17:14:02.772179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.410 qpair failed and we were unable to recover it. 00:25:03.410 [2024-07-12 17:14:02.772359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.410 [2024-07-12 17:14:02.772422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.410 qpair failed and we were unable to recover it. 00:25:03.410 [2024-07-12 17:14:02.772573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.410 [2024-07-12 17:14:02.772614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.410 qpair failed and we were unable to recover it. 00:25:03.410 [2024-07-12 17:14:02.772767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.410 [2024-07-12 17:14:02.772809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.410 qpair failed and we were unable to recover it. 00:25:03.410 [2024-07-12 17:14:02.773037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.410 [2024-07-12 17:14:02.773100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.410 qpair failed and we were unable to recover it. 00:25:03.410 [2024-07-12 17:14:02.773291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.410 [2024-07-12 17:14:02.773356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.410 qpair failed and we were unable to recover it. 00:25:03.410 [2024-07-12 17:14:02.773486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.410 [2024-07-12 17:14:02.773527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.410 qpair failed and we were unable to recover it. 
00:25:03.410 [2024-07-12 17:14:02.773734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.410 [2024-07-12 17:14:02.773784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.410 qpair failed and we were unable to recover it. 00:25:03.410 [2024-07-12 17:14:02.773997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.410 [2024-07-12 17:14:02.774067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.410 qpair failed and we were unable to recover it. 00:25:03.410 [2024-07-12 17:14:02.774289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.410 [2024-07-12 17:14:02.774330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.410 qpair failed and we were unable to recover it. 00:25:03.410 [2024-07-12 17:14:02.774490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.410 [2024-07-12 17:14:02.774531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.410 qpair failed and we were unable to recover it. 00:25:03.411 [2024-07-12 17:14:02.774687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.411 [2024-07-12 17:14:02.774728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.411 qpair failed and we were unable to recover it. 00:25:03.411 [2024-07-12 17:14:02.774892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.411 [2024-07-12 17:14:02.774959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.411 qpair failed and we were unable to recover it. 00:25:03.411 [2024-07-12 17:14:02.775119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.411 [2024-07-12 17:14:02.775184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.411 qpair failed and we were unable to recover it. 00:25:03.411 [2024-07-12 17:14:02.775389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.411 [2024-07-12 17:14:02.775430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.411 qpair failed and we were unable to recover it. 00:25:03.411 [2024-07-12 17:14:02.775635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.411 [2024-07-12 17:14:02.775682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.411 qpair failed and we were unable to recover it. 00:25:03.411 [2024-07-12 17:14:02.775866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.411 [2024-07-12 17:14:02.775927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.411 qpair failed and we were unable to recover it. 
00:25:03.411 [2024-07-12 17:14:02.776145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.411 [2024-07-12 17:14:02.776196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.411 qpair failed and we were unable to recover it. 00:25:03.411 [2024-07-12 17:14:02.776389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.411 [2024-07-12 17:14:02.776452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.411 qpair failed and we were unable to recover it. 00:25:03.411 [2024-07-12 17:14:02.776613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.411 [2024-07-12 17:14:02.776666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.411 qpair failed and we were unable to recover it. 00:25:03.411 [2024-07-12 17:14:02.776790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.411 [2024-07-12 17:14:02.776833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.411 qpair failed and we were unable to recover it. 00:25:03.411 [2024-07-12 17:14:02.777063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.411 [2024-07-12 17:14:02.777126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.411 qpair failed and we were unable to recover it. 00:25:03.411 [2024-07-12 17:14:02.777266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.411 [2024-07-12 17:14:02.777335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.411 qpair failed and we were unable to recover it. 00:25:03.411 [2024-07-12 17:14:02.777463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.411 [2024-07-12 17:14:02.777504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.411 qpair failed and we were unable to recover it. 00:25:03.411 [2024-07-12 17:14:02.777700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.411 [2024-07-12 17:14:02.777766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.411 qpair failed and we were unable to recover it. 00:25:03.411 [2024-07-12 17:14:02.777956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.411 [2024-07-12 17:14:02.778016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.411 qpair failed and we were unable to recover it. 00:25:03.411 [2024-07-12 17:14:02.778179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.411 [2024-07-12 17:14:02.778242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.411 qpair failed and we were unable to recover it. 
00:25:03.411 [2024-07-12 17:14:02.778617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.411 [2024-07-12 17:14:02.778679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.411 qpair failed and we were unable to recover it. 00:25:03.411 [2024-07-12 17:14:02.778907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.411 [2024-07-12 17:14:02.778950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.411 qpair failed and we were unable to recover it. 00:25:03.411 [2024-07-12 17:14:02.779154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.411 [2024-07-12 17:14:02.779212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.411 qpair failed and we were unable to recover it. 00:25:03.411 [2024-07-12 17:14:02.779376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.411 [2024-07-12 17:14:02.779440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.411 qpair failed and we were unable to recover it. 00:25:03.411 [2024-07-12 17:14:02.779595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.411 [2024-07-12 17:14:02.779636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.411 qpair failed and we were unable to recover it. 00:25:03.411 [2024-07-12 17:14:02.779761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.411 [2024-07-12 17:14:02.779804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.411 qpair failed and we were unable to recover it. 00:25:03.411 [2024-07-12 17:14:02.780061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.411 [2024-07-12 17:14:02.780102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.411 qpair failed and we were unable to recover it. 00:25:03.411 [2024-07-12 17:14:02.780245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.411 [2024-07-12 17:14:02.780311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.411 qpair failed and we were unable to recover it. 00:25:03.411 [2024-07-12 17:14:02.780460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.411 [2024-07-12 17:14:02.780513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.411 qpair failed and we were unable to recover it. 00:25:03.411 [2024-07-12 17:14:02.780682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.411 [2024-07-12 17:14:02.780723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.411 qpair failed and we were unable to recover it. 
00:25:03.411 [2024-07-12 17:14:02.780951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.411 [2024-07-12 17:14:02.780993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.411 qpair failed and we were unable to recover it. 00:25:03.411 [2024-07-12 17:14:02.781122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.411 [2024-07-12 17:14:02.781164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.411 qpair failed and we were unable to recover it. 00:25:03.411 [2024-07-12 17:14:02.781342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.411 [2024-07-12 17:14:02.781384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.411 qpair failed and we were unable to recover it. 00:25:03.411 [2024-07-12 17:14:02.781544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.411 [2024-07-12 17:14:02.781585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.411 qpair failed and we were unable to recover it. 00:25:03.411 [2024-07-12 17:14:02.781758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.411 [2024-07-12 17:14:02.781808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.411 qpair failed and we were unable to recover it. 00:25:03.411 [2024-07-12 17:14:02.781943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.411 [2024-07-12 17:14:02.781984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.411 qpair failed and we were unable to recover it. 00:25:03.411 [2024-07-12 17:14:02.782201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.411 [2024-07-12 17:14:02.782242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.411 qpair failed and we were unable to recover it. 00:25:03.411 [2024-07-12 17:14:02.782398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.411 [2024-07-12 17:14:02.782439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.411 qpair failed and we were unable to recover it. 00:25:03.411 [2024-07-12 17:14:02.782611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.411 [2024-07-12 17:14:02.782652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.411 qpair failed and we were unable to recover it. 00:25:03.411 [2024-07-12 17:14:02.782850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.411 [2024-07-12 17:14:02.782910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.411 qpair failed and we were unable to recover it. 
00:25:03.411 [2024-07-12 17:14:02.783099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.411 [2024-07-12 17:14:02.783160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.411 qpair failed and we were unable to recover it. 00:25:03.411 [2024-07-12 17:14:02.783320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.411 [2024-07-12 17:14:02.783389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.411 qpair failed and we were unable to recover it. 00:25:03.411 [2024-07-12 17:14:02.783541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.411 [2024-07-12 17:14:02.783582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.411 qpair failed and we were unable to recover it. 00:25:03.411 [2024-07-12 17:14:02.783732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.411 [2024-07-12 17:14:02.783786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.411 qpair failed and we were unable to recover it. 00:25:03.412 [2024-07-12 17:14:02.783925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.412 [2024-07-12 17:14:02.783993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.412 qpair failed and we were unable to recover it. 00:25:03.412 [2024-07-12 17:14:02.784191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.412 [2024-07-12 17:14:02.784250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.412 qpair failed and we were unable to recover it. 00:25:03.412 [2024-07-12 17:14:02.784427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.412 [2024-07-12 17:14:02.784476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.412 qpair failed and we were unable to recover it. 00:25:03.412 [2024-07-12 17:14:02.784616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.412 [2024-07-12 17:14:02.784668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.412 qpair failed and we were unable to recover it. 00:25:03.412 [2024-07-12 17:14:02.784856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.412 [2024-07-12 17:14:02.784906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.412 qpair failed and we were unable to recover it. 00:25:03.412 [2024-07-12 17:14:02.785063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.412 [2024-07-12 17:14:02.785105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.412 qpair failed and we were unable to recover it. 
00:25:03.412 [2024-07-12 17:14:02.785273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.412 [2024-07-12 17:14:02.785314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.412 qpair failed and we were unable to recover it. 00:25:03.412 [2024-07-12 17:14:02.785503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.412 [2024-07-12 17:14:02.785544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.412 qpair failed and we were unable to recover it. 00:25:03.412 [2024-07-12 17:14:02.785669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.412 [2024-07-12 17:14:02.785710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.412 qpair failed and we were unable to recover it. 00:25:03.412 [2024-07-12 17:14:02.785998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.412 [2024-07-12 17:14:02.786040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.412 qpair failed and we were unable to recover it. 00:25:03.412 [2024-07-12 17:14:02.786165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.412 [2024-07-12 17:14:02.786206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.412 qpair failed and we were unable to recover it. 00:25:03.412 [2024-07-12 17:14:02.786426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.412 [2024-07-12 17:14:02.786467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.412 qpair failed and we were unable to recover it. 00:25:03.412 [2024-07-12 17:14:02.786627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.412 [2024-07-12 17:14:02.786668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.412 qpair failed and we were unable to recover it. 00:25:03.412 [2024-07-12 17:14:02.786880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.412 [2024-07-12 17:14:02.786943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.412 qpair failed and we were unable to recover it. 00:25:03.412 [2024-07-12 17:14:02.787139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.412 [2024-07-12 17:14:02.787200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.412 qpair failed and we were unable to recover it. 00:25:03.412 [2024-07-12 17:14:02.787339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.412 [2024-07-12 17:14:02.787405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.412 qpair failed and we were unable to recover it. 
00:25:03.412 [2024-07-12 17:14:02.787591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.412 [2024-07-12 17:14:02.787638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.412 qpair failed and we were unable to recover it. 00:25:03.412 [2024-07-12 17:14:02.787811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.412 [2024-07-12 17:14:02.787881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.412 qpair failed and we were unable to recover it. 00:25:03.412 [2024-07-12 17:14:02.788054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.412 [2024-07-12 17:14:02.788121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.412 qpair failed and we were unable to recover it. 00:25:03.412 [2024-07-12 17:14:02.788286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.412 [2024-07-12 17:14:02.788348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.412 qpair failed and we were unable to recover it. 00:25:03.412 [2024-07-12 17:14:02.788479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.412 [2024-07-12 17:14:02.788532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.412 qpair failed and we were unable to recover it. 00:25:03.412 [2024-07-12 17:14:02.788719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.412 [2024-07-12 17:14:02.788777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.412 qpair failed and we were unable to recover it. 00:25:03.412 [2024-07-12 17:14:02.788955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.412 [2024-07-12 17:14:02.788996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.412 qpair failed and we were unable to recover it. 00:25:03.412 [2024-07-12 17:14:02.789167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.412 [2024-07-12 17:14:02.789226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.412 qpair failed and we were unable to recover it. 00:25:03.412 [2024-07-12 17:14:02.789403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.412 [2024-07-12 17:14:02.789444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.412 qpair failed and we were unable to recover it. 00:25:03.412 [2024-07-12 17:14:02.789594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.412 [2024-07-12 17:14:02.789636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.412 qpair failed and we were unable to recover it. 
00:25:03.412 [2024-07-12 17:14:02.789806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.412 [2024-07-12 17:14:02.789849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.412 qpair failed and we were unable to recover it. 00:25:03.412 [2024-07-12 17:14:02.790002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.412 [2024-07-12 17:14:02.790042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.412 qpair failed and we were unable to recover it. 00:25:03.412 [2024-07-12 17:14:02.790203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.412 [2024-07-12 17:14:02.790253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.412 qpair failed and we were unable to recover it. 00:25:03.412 [2024-07-12 17:14:02.790412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.412 [2024-07-12 17:14:02.790454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.412 qpair failed and we were unable to recover it. 00:25:03.412 [2024-07-12 17:14:02.790635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.412 [2024-07-12 17:14:02.790676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.412 qpair failed and we were unable to recover it. 00:25:03.412 [2024-07-12 17:14:02.790847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.412 [2024-07-12 17:14:02.790889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.412 qpair failed and we were unable to recover it. 00:25:03.412 [2024-07-12 17:14:02.791045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.412 [2024-07-12 17:14:02.791087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.412 qpair failed and we were unable to recover it. 00:25:03.412 [2024-07-12 17:14:02.791342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.412 [2024-07-12 17:14:02.791384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.412 qpair failed and we were unable to recover it. 00:25:03.412 [2024-07-12 17:14:02.791540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.412 [2024-07-12 17:14:02.791582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.412 qpair failed and we were unable to recover it. 00:25:03.412 [2024-07-12 17:14:02.791845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.412 [2024-07-12 17:14:02.791896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.412 qpair failed and we were unable to recover it. 
00:25:03.412 [2024-07-12 17:14:02.792022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.412 [2024-07-12 17:14:02.792063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.412 qpair failed and we were unable to recover it. 00:25:03.412 [2024-07-12 17:14:02.792226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.412 [2024-07-12 17:14:02.792286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.412 qpair failed and we were unable to recover it. 00:25:03.412 [2024-07-12 17:14:02.792471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.412 [2024-07-12 17:14:02.792511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.412 qpair failed and we were unable to recover it. 00:25:03.412 [2024-07-12 17:14:02.792677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.412 [2024-07-12 17:14:02.792727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.412 qpair failed and we were unable to recover it. 00:25:03.413 [2024-07-12 17:14:02.792965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-07-12 17:14:02.793016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 00:25:03.413 [2024-07-12 17:14:02.793153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-07-12 17:14:02.793198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 00:25:03.413 [2024-07-12 17:14:02.793437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-07-12 17:14:02.793478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 00:25:03.413 [2024-07-12 17:14:02.793653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-07-12 17:14:02.793694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 00:25:03.413 [2024-07-12 17:14:02.793979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-07-12 17:14:02.794021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 00:25:03.413 [2024-07-12 17:14:02.794212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-07-12 17:14:02.794263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 
00:25:03.413 [2024-07-12 17:14:02.794477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-07-12 17:14:02.794541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 00:25:03.413 [2024-07-12 17:14:02.794706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-07-12 17:14:02.794756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 00:25:03.413 [2024-07-12 17:14:02.794906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-07-12 17:14:02.794969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 00:25:03.413 [2024-07-12 17:14:02.795241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-07-12 17:14:02.795302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 00:25:03.413 [2024-07-12 17:14:02.795511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-07-12 17:14:02.795575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 00:25:03.413 [2024-07-12 17:14:02.795766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-07-12 17:14:02.795814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 00:25:03.413 [2024-07-12 17:14:02.795983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-07-12 17:14:02.796046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 00:25:03.413 [2024-07-12 17:14:02.796241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-07-12 17:14:02.796300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 00:25:03.413 [2024-07-12 17:14:02.796489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-07-12 17:14:02.796531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 00:25:03.413 [2024-07-12 17:14:02.796690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-07-12 17:14:02.796731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 
00:25:03.413 [2024-07-12 17:14:02.796925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-07-12 17:14:02.796990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 00:25:03.413 [2024-07-12 17:14:02.797200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-07-12 17:14:02.797256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 00:25:03.413 [2024-07-12 17:14:02.797474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-07-12 17:14:02.797515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 00:25:03.413 [2024-07-12 17:14:02.797675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-07-12 17:14:02.797720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 00:25:03.413 [2024-07-12 17:14:02.797885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-07-12 17:14:02.797927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 00:25:03.413 [2024-07-12 17:14:02.798080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-07-12 17:14:02.798121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 00:25:03.413 [2024-07-12 17:14:02.798282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-07-12 17:14:02.798323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 00:25:03.413 [2024-07-12 17:14:02.798482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-07-12 17:14:02.798523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 00:25:03.413 [2024-07-12 17:14:02.798694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-07-12 17:14:02.798736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 00:25:03.413 [2024-07-12 17:14:02.798917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-07-12 17:14:02.798970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 
00:25:03.413 [2024-07-12 17:14:02.799102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-07-12 17:14:02.799143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 00:25:03.413 [2024-07-12 17:14:02.799285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-07-12 17:14:02.799325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 00:25:03.413 [2024-07-12 17:14:02.799542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-07-12 17:14:02.799583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 00:25:03.413 [2024-07-12 17:14:02.799750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-07-12 17:14:02.799792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 00:25:03.413 [2024-07-12 17:14:02.799961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-07-12 17:14:02.800007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 00:25:03.413 [2024-07-12 17:14:02.800230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-07-12 17:14:02.800282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 00:25:03.413 [2024-07-12 17:14:02.800443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-07-12 17:14:02.800484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 00:25:03.413 [2024-07-12 17:14:02.800640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-07-12 17:14:02.800682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 00:25:03.413 [2024-07-12 17:14:02.800826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-07-12 17:14:02.800868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 00:25:03.413 [2024-07-12 17:14:02.801063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-07-12 17:14:02.801116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 
00:25:03.414 [2024-07-12 17:14:02.801272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-07-12 17:14:02.801318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-07-12 17:14:02.801452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-07-12 17:14:02.801499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-07-12 17:14:02.801656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-07-12 17:14:02.801698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-07-12 17:14:02.801827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-07-12 17:14:02.801869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-07-12 17:14:02.802064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-07-12 17:14:02.802105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-07-12 17:14:02.802397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-07-12 17:14:02.802438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-07-12 17:14:02.802655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-07-12 17:14:02.802696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-07-12 17:14:02.802845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-07-12 17:14:02.802914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-07-12 17:14:02.803102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-07-12 17:14:02.803172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-07-12 17:14:02.803363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-07-12 17:14:02.803425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 
00:25:03.414 [2024-07-12 17:14:02.803558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-07-12 17:14:02.803600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-07-12 17:14:02.803769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-07-12 17:14:02.803811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-07-12 17:14:02.804056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-07-12 17:14:02.804097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-07-12 17:14:02.804270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-07-12 17:14:02.804335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-07-12 17:14:02.804493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-07-12 17:14:02.804534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-07-12 17:14:02.804698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-07-12 17:14:02.804768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-07-12 17:14:02.804933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-07-12 17:14:02.805000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-07-12 17:14:02.805212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-07-12 17:14:02.805261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-07-12 17:14:02.805437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-07-12 17:14:02.805478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-07-12 17:14:02.805601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-07-12 17:14:02.805642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 
00:25:03.414 [2024-07-12 17:14:02.805849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-07-12 17:14:02.805891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-07-12 17:14:02.806039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-07-12 17:14:02.806079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-07-12 17:14:02.806245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-07-12 17:14:02.806295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-07-12 17:14:02.806483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-07-12 17:14:02.806534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-07-12 17:14:02.806699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-07-12 17:14:02.806760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-07-12 17:14:02.806970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-07-12 17:14:02.807012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-07-12 17:14:02.807145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-07-12 17:14:02.807186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-07-12 17:14:02.807367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-07-12 17:14:02.807408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-07-12 17:14:02.807575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-07-12 17:14:02.807616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-07-12 17:14:02.807777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-07-12 17:14:02.807831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 
00:25:03.414 [2024-07-12 17:14:02.808012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-07-12 17:14:02.808087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-07-12 17:14:02.808242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-07-12 17:14:02.808282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-07-12 17:14:02.808441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-07-12 17:14:02.808481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-07-12 17:14:02.808696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-07-12 17:14:02.808746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-07-12 17:14:02.808918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-07-12 17:14:02.808959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-07-12 17:14:02.809088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-07-12 17:14:02.809129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-07-12 17:14:02.809292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-07-12 17:14:02.809333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-07-12 17:14:02.809550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-07-12 17:14:02.809591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.415 [2024-07-12 17:14:02.809788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-07-12 17:14:02.809840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-07-12 17:14:02.809964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-07-12 17:14:02.810005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 
00:25:03.415 [2024-07-12 17:14:02.810172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-07-12 17:14:02.810213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-07-12 17:14:02.810424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-07-12 17:14:02.810471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-07-12 17:14:02.810657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-07-12 17:14:02.810709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-07-12 17:14:02.810888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-07-12 17:14:02.810949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-07-12 17:14:02.811141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-07-12 17:14:02.811205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-07-12 17:14:02.811422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-07-12 17:14:02.811487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-07-12 17:14:02.811621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-07-12 17:14:02.811662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-07-12 17:14:02.811896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-07-12 17:14:02.811962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-07-12 17:14:02.812152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-07-12 17:14:02.812214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-07-12 17:14:02.812374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-07-12 17:14:02.812416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 
00:25:03.415 [2024-07-12 17:14:02.812639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-07-12 17:14:02.812690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-07-12 17:14:02.812888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-07-12 17:14:02.812930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-07-12 17:14:02.813103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-07-12 17:14:02.813173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-07-12 17:14:02.813446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-07-12 17:14:02.813505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-07-12 17:14:02.813703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-07-12 17:14:02.813757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-07-12 17:14:02.813983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-07-12 17:14:02.814045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-07-12 17:14:02.814206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-07-12 17:14:02.814267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-07-12 17:14:02.814476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-07-12 17:14:02.814528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-07-12 17:14:02.814656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-07-12 17:14:02.814697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-07-12 17:14:02.814962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-07-12 17:14:02.815005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 
00:25:03.415 [2024-07-12 17:14:02.815172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-07-12 17:14:02.815233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-07-12 17:14:02.815426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-07-12 17:14:02.815492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-07-12 17:14:02.815681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-07-12 17:14:02.815729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-07-12 17:14:02.815945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-07-12 17:14:02.815999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-07-12 17:14:02.816214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-07-12 17:14:02.816273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-07-12 17:14:02.816452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-07-12 17:14:02.816516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-07-12 17:14:02.816723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-07-12 17:14:02.816819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-07-12 17:14:02.817105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-07-12 17:14:02.817165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-07-12 17:14:02.817312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-07-12 17:14:02.817378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-07-12 17:14:02.817584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-07-12 17:14:02.817636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 
00:25:03.415 [2024-07-12 17:14:02.817826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-07-12 17:14:02.817890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-07-12 17:14:02.818083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-07-12 17:14:02.818144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-07-12 17:14:02.818303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-07-12 17:14:02.818371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-07-12 17:14:02.818551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-07-12 17:14:02.818603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-07-12 17:14:02.818796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-07-12 17:14:02.818845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-07-12 17:14:02.819032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-07-12 17:14:02.819084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-07-12 17:14:02.819241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-07-12 17:14:02.819294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-07-12 17:14:02.819453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-07-12 17:14:02.819495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-07-12 17:14:02.819663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-07-12 17:14:02.819704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-07-12 17:14:02.819905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-07-12 17:14:02.819953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 
00:25:03.416 [2024-07-12 17:14:02.820079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-07-12 17:14:02.820120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-07-12 17:14:02.820266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-07-12 17:14:02.820313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-07-12 17:14:02.820499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-07-12 17:14:02.820549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-07-12 17:14:02.820766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-07-12 17:14:02.820809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-07-12 17:14:02.820972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-07-12 17:14:02.821013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-07-12 17:14:02.821148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-07-12 17:14:02.821189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-07-12 17:14:02.821371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-07-12 17:14:02.821412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-07-12 17:14:02.821620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-07-12 17:14:02.821673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-07-12 17:14:02.821866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-07-12 17:14:02.821928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-07-12 17:14:02.822114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-07-12 17:14:02.822155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 
00:25:03.416 [2024-07-12 17:14:02.822308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-07-12 17:14:02.822349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-07-12 17:14:02.822542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-07-12 17:14:02.822594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-07-12 17:14:02.822759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-07-12 17:14:02.822801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-07-12 17:14:02.822918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-07-12 17:14:02.822959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-07-12 17:14:02.823118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-07-12 17:14:02.823177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-07-12 17:14:02.823350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-07-12 17:14:02.823409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-07-12 17:14:02.823683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-07-12 17:14:02.823724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-07-12 17:14:02.823984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-07-12 17:14:02.824044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-07-12 17:14:02.824199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-07-12 17:14:02.824259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-07-12 17:14:02.824435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-07-12 17:14:02.824497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 
00:25:03.416 [2024-07-12 17:14:02.824664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-07-12 17:14:02.824705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-07-12 17:14:02.824889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-07-12 17:14:02.824931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-07-12 17:14:02.825088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-07-12 17:14:02.825128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-07-12 17:14:02.825281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-07-12 17:14:02.825322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-07-12 17:14:02.825521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-07-12 17:14:02.825574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-07-12 17:14:02.825717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-07-12 17:14:02.825771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-07-12 17:14:02.825954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-07-12 17:14:02.826006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-07-12 17:14:02.826210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-07-12 17:14:02.826250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-07-12 17:14:02.826409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-07-12 17:14:02.826451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-07-12 17:14:02.826636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-07-12 17:14:02.826684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 
00:25:03.416 [2024-07-12 17:14:02.826907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-07-12 17:14:02.826960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-07-12 17:14:02.827148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-07-12 17:14:02.827207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-07-12 17:14:02.827348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-07-12 17:14:02.827418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-07-12 17:14:02.827646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-07-12 17:14:02.827687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-07-12 17:14:02.827980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-07-12 17:14:02.828042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-07-12 17:14:02.828207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-07-12 17:14:02.828270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-07-12 17:14:02.828433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-07-12 17:14:02.828493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-07-12 17:14:02.828612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-07-12 17:14:02.828653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-07-12 17:14:02.828884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-07-12 17:14:02.828947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-07-12 17:14:02.829106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-07-12 17:14:02.829170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 
00:25:03.417 [2024-07-12 17:14:02.829328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-07-12 17:14:02.829396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-07-12 17:14:02.829546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-07-12 17:14:02.829599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-07-12 17:14:02.829762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-07-12 17:14:02.829815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-07-12 17:14:02.830021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-07-12 17:14:02.830091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-07-12 17:14:02.830309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-07-12 17:14:02.830371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-07-12 17:14:02.830514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-07-12 17:14:02.830563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-07-12 17:14:02.830687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-07-12 17:14:02.830729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-07-12 17:14:02.830901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-07-12 17:14:02.830942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-07-12 17:14:02.831106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-07-12 17:14:02.831174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-07-12 17:14:02.831467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-07-12 17:14:02.831508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 
00:25:03.417 [2024-07-12 17:14:02.831859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-07-12 17:14:02.831923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-07-12 17:14:02.832155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-07-12 17:14:02.832217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-07-12 17:14:02.832402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-07-12 17:14:02.832463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-07-12 17:14:02.832666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-07-12 17:14:02.832707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-07-12 17:14:02.832956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-07-12 17:14:02.832998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-07-12 17:14:02.833227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-07-12 17:14:02.833269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-07-12 17:14:02.833420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-07-12 17:14:02.833480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-07-12 17:14:02.833653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-07-12 17:14:02.833694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-07-12 17:14:02.833926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-07-12 17:14:02.833985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-07-12 17:14:02.834199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-07-12 17:14:02.834262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 
00:25:03.417 [2024-07-12 17:14:02.834415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-07-12 17:14:02.834476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-07-12 17:14:02.834667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-07-12 17:14:02.834717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-07-12 17:14:02.834970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-07-12 17:14:02.835013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-07-12 17:14:02.835214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-07-12 17:14:02.835274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-07-12 17:14:02.835443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-07-12 17:14:02.835504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-07-12 17:14:02.835788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-07-12 17:14:02.835831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-07-12 17:14:02.836064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-07-12 17:14:02.836122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-07-12 17:14:02.836298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-07-12 17:14:02.836360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-07-12 17:14:02.836525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-07-12 17:14:02.836574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-07-12 17:14:02.836770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-07-12 17:14:02.836811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 
00:25:03.417 [2024-07-12 17:14:02.836994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-07-12 17:14:02.837056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-07-12 17:14:02.837220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-07-12 17:14:02.837288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-07-12 17:14:02.837487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-07-12 17:14:02.837529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-07-12 17:14:02.837660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-07-12 17:14:02.837689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-07-12 17:14:02.837882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-07-12 17:14:02.837954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-07-12 17:14:02.838090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.417 [2024-07-12 17:14:02.838131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.417 qpair failed and we were unable to recover it. 00:25:03.417 [2024-07-12 17:14:02.838317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.418 [2024-07-12 17:14:02.838358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.418 qpair failed and we were unable to recover it. 00:25:03.418 [2024-07-12 17:14:02.838498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.418 [2024-07-12 17:14:02.838549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.418 qpair failed and we were unable to recover it. 00:25:03.418 [2024-07-12 17:14:02.838688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.418 [2024-07-12 17:14:02.838729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.418 qpair failed and we were unable to recover it. 00:25:03.418 [2024-07-12 17:14:02.838872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.418 [2024-07-12 17:14:02.838913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.418 qpair failed and we were unable to recover it. 
00:25:03.418 [2024-07-12 17:14:02.839118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.418 [2024-07-12 17:14:02.839163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.418 qpair failed and we were unable to recover it. 00:25:03.418 [2024-07-12 17:14:02.839312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.418 [2024-07-12 17:14:02.839362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.418 qpair failed and we were unable to recover it. 00:25:03.418 [2024-07-12 17:14:02.839578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.418 [2024-07-12 17:14:02.839618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.418 qpair failed and we were unable to recover it. 00:25:03.418 [2024-07-12 17:14:02.839817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.418 [2024-07-12 17:14:02.839892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.418 qpair failed and we were unable to recover it. 00:25:03.418 [2024-07-12 17:14:02.840084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.418 [2024-07-12 17:14:02.840124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.418 qpair failed and we were unable to recover it. 00:25:03.418 [2024-07-12 17:14:02.840298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.418 [2024-07-12 17:14:02.840370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.418 qpair failed and we were unable to recover it. 00:25:03.418 [2024-07-12 17:14:02.840647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.418 [2024-07-12 17:14:02.840687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.418 qpair failed and we were unable to recover it. 00:25:03.418 [2024-07-12 17:14:02.840951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.418 [2024-07-12 17:14:02.841011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.418 qpair failed and we were unable to recover it. 00:25:03.418 [2024-07-12 17:14:02.841202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.418 [2024-07-12 17:14:02.841263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.418 qpair failed and we were unable to recover it. 00:25:03.418 [2024-07-12 17:14:02.841457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.418 [2024-07-12 17:14:02.841518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.418 qpair failed and we were unable to recover it. 
00:25:03.418 [2024-07-12 17:14:02.841640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.418 [2024-07-12 17:14:02.841681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.418 qpair failed and we were unable to recover it. 00:25:03.418 [2024-07-12 17:14:02.841951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.418 [2024-07-12 17:14:02.842010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.418 qpair failed and we were unable to recover it. 00:25:03.418 [2024-07-12 17:14:02.842201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.418 [2024-07-12 17:14:02.842262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.418 qpair failed and we were unable to recover it. 00:25:03.418 [2024-07-12 17:14:02.842534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.418 [2024-07-12 17:14:02.842575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.418 qpair failed and we were unable to recover it. 00:25:03.418 [2024-07-12 17:14:02.842748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.418 [2024-07-12 17:14:02.842790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.418 qpair failed and we were unable to recover it. 00:25:03.418 [2024-07-12 17:14:02.842937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.418 [2024-07-12 17:14:02.843001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.418 qpair failed and we were unable to recover it. 00:25:03.418 [2024-07-12 17:14:02.843195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.418 [2024-07-12 17:14:02.843255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.418 qpair failed and we were unable to recover it. 00:25:03.418 [2024-07-12 17:14:02.843526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.418 [2024-07-12 17:14:02.843567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.418 qpair failed and we were unable to recover it. 00:25:03.418 [2024-07-12 17:14:02.843907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.418 [2024-07-12 17:14:02.843974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.418 qpair failed and we were unable to recover it. 00:25:03.418 [2024-07-12 17:14:02.844281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.418 [2024-07-12 17:14:02.844353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.418 qpair failed and we were unable to recover it. 
00:25:03.418 [2024-07-12 17:14:02.844642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.418 [2024-07-12 17:14:02.844682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.418 qpair failed and we were unable to recover it. 00:25:03.418 [2024-07-12 17:14:02.844916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.418 [2024-07-12 17:14:02.844974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.418 qpair failed and we were unable to recover it. 00:25:03.418 [2024-07-12 17:14:02.845113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.418 [2024-07-12 17:14:02.845190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.418 qpair failed and we were unable to recover it. 00:25:03.418 [2024-07-12 17:14:02.845404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.418 [2024-07-12 17:14:02.845465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.418 qpair failed and we were unable to recover it. 00:25:03.418 [2024-07-12 17:14:02.845618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.418 [2024-07-12 17:14:02.845659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.418 qpair failed and we were unable to recover it. 00:25:03.418 [2024-07-12 17:14:02.845825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.418 [2024-07-12 17:14:02.845878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.418 qpair failed and we were unable to recover it. 00:25:03.418 [2024-07-12 17:14:02.846101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.418 [2024-07-12 17:14:02.846164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.418 qpair failed and we were unable to recover it. 00:25:03.418 [2024-07-12 17:14:02.846342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.418 [2024-07-12 17:14:02.846406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.418 qpair failed and we were unable to recover it. 00:25:03.418 [2024-07-12 17:14:02.846594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.418 [2024-07-12 17:14:02.846636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.418 qpair failed and we were unable to recover it. 00:25:03.418 [2024-07-12 17:14:02.846809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.418 [2024-07-12 17:14:02.846887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.418 qpair failed and we were unable to recover it. 
00:25:03.419 [2024-07-12 17:14:02.847140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.419 [2024-07-12 17:14:02.847199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.419 qpair failed and we were unable to recover it. 00:25:03.419 [2024-07-12 17:14:02.847411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.419 [2024-07-12 17:14:02.847475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.419 qpair failed and we were unable to recover it. 00:25:03.419 [2024-07-12 17:14:02.847656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.419 [2024-07-12 17:14:02.847698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.419 qpair failed and we were unable to recover it. 00:25:03.419 [2024-07-12 17:14:02.847930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.419 [2024-07-12 17:14:02.847995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.419 qpair failed and we were unable to recover it. 00:25:03.419 [2024-07-12 17:14:02.848161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.419 [2024-07-12 17:14:02.848223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.419 qpair failed and we were unable to recover it. 00:25:03.419 [2024-07-12 17:14:02.848437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.419 [2024-07-12 17:14:02.848498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.419 qpair failed and we were unable to recover it. 00:25:03.419 [2024-07-12 17:14:02.848675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.419 [2024-07-12 17:14:02.848718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.419 qpair failed and we were unable to recover it. 00:25:03.419 [2024-07-12 17:14:02.848936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.419 [2024-07-12 17:14:02.848978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.419 qpair failed and we were unable to recover it. 00:25:03.419 [2024-07-12 17:14:02.849143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.419 [2024-07-12 17:14:02.849206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.419 qpair failed and we were unable to recover it. 00:25:03.419 [2024-07-12 17:14:02.849325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.419 [2024-07-12 17:14:02.849367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.419 qpair failed and we were unable to recover it. 
00:25:03.419 [2024-07-12 17:14:02.849522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.419 [2024-07-12 17:14:02.849563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.419 qpair failed and we were unable to recover it. 00:25:03.419 [2024-07-12 17:14:02.849733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.419 [2024-07-12 17:14:02.849787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.419 qpair failed and we were unable to recover it. 00:25:03.419 [2024-07-12 17:14:02.850010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.419 [2024-07-12 17:14:02.850052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.419 qpair failed and we were unable to recover it. 00:25:03.419 [2024-07-12 17:14:02.850176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.419 [2024-07-12 17:14:02.850218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.419 qpair failed and we were unable to recover it. 00:25:03.419 [2024-07-12 17:14:02.850530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.419 [2024-07-12 17:14:02.850570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.419 qpair failed and we were unable to recover it. 00:25:03.419 [2024-07-12 17:14:02.850760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.419 [2024-07-12 17:14:02.850804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.419 qpair failed and we were unable to recover it. 00:25:03.419 [2024-07-12 17:14:02.851003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.419 [2024-07-12 17:14:02.851065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.419 qpair failed and we were unable to recover it. 00:25:03.419 [2024-07-12 17:14:02.851231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.419 [2024-07-12 17:14:02.851295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.419 qpair failed and we were unable to recover it. 00:25:03.419 [2024-07-12 17:14:02.851420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.419 [2024-07-12 17:14:02.851461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.419 qpair failed and we were unable to recover it. 00:25:03.419 [2024-07-12 17:14:02.851624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.419 [2024-07-12 17:14:02.851676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.419 qpair failed and we were unable to recover it. 
00:25:03.419 [2024-07-12 17:14:02.851849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.419 [2024-07-12 17:14:02.851917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.419 qpair failed and we were unable to recover it. 00:25:03.419 [2024-07-12 17:14:02.852084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.419 [2024-07-12 17:14:02.852126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.419 qpair failed and we were unable to recover it. 00:25:03.419 [2024-07-12 17:14:02.852351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.419 [2024-07-12 17:14:02.852392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.419 qpair failed and we were unable to recover it. 00:25:03.419 [2024-07-12 17:14:02.852623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.419 [2024-07-12 17:14:02.852674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.419 qpair failed and we were unable to recover it. 00:25:03.419 [2024-07-12 17:14:02.852852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.419 [2024-07-12 17:14:02.852895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.419 qpair failed and we were unable to recover it. 00:25:03.419 [2024-07-12 17:14:02.853071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.419 [2024-07-12 17:14:02.853112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.419 qpair failed and we were unable to recover it. 00:25:03.419 [2024-07-12 17:14:02.853329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.419 [2024-07-12 17:14:02.853380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.419 qpair failed and we were unable to recover it. 00:25:03.419 [2024-07-12 17:14:02.853558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.419 [2024-07-12 17:14:02.853605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.419 qpair failed and we were unable to recover it. 00:25:03.419 [2024-07-12 17:14:02.853835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.419 [2024-07-12 17:14:02.853895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.419 qpair failed and we were unable to recover it. 00:25:03.419 [2024-07-12 17:14:02.854068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.419 [2024-07-12 17:14:02.854129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.419 qpair failed and we were unable to recover it. 
00:25:03.419 [2024-07-12 17:14:02.854315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.419 [2024-07-12 17:14:02.854356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.419 qpair failed and we were unable to recover it. 00:25:03.419 [2024-07-12 17:14:02.854540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.419 [2024-07-12 17:14:02.854590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.419 qpair failed and we were unable to recover it. 00:25:03.419 [2024-07-12 17:14:02.854723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.419 [2024-07-12 17:14:02.854774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.419 qpair failed and we were unable to recover it. 00:25:03.419 [2024-07-12 17:14:02.854994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.419 [2024-07-12 17:14:02.855035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.419 qpair failed and we were unable to recover it. 00:25:03.419 [2024-07-12 17:14:02.855185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.419 [2024-07-12 17:14:02.855226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.419 qpair failed and we were unable to recover it. 00:25:03.419 [2024-07-12 17:14:02.855412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.419 [2024-07-12 17:14:02.855453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.419 qpair failed and we were unable to recover it. 00:25:03.419 [2024-07-12 17:14:02.855607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.419 [2024-07-12 17:14:02.855649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.419 qpair failed and we were unable to recover it. 00:25:03.419 [2024-07-12 17:14:02.855820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.419 [2024-07-12 17:14:02.855883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.419 qpair failed and we were unable to recover it. 00:25:03.419 [2024-07-12 17:14:02.856099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.419 [2024-07-12 17:14:02.856162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.419 qpair failed and we were unable to recover it. 00:25:03.419 [2024-07-12 17:14:02.856337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.419 [2024-07-12 17:14:02.856396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.419 qpair failed and we were unable to recover it. 
00:25:03.420 [2024-07-12 17:14:02.856557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.420 [2024-07-12 17:14:02.856598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.420 qpair failed and we were unable to recover it. 00:25:03.420 [2024-07-12 17:14:02.856756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.420 [2024-07-12 17:14:02.856798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.420 qpair failed and we were unable to recover it. 00:25:03.420 [2024-07-12 17:14:02.857023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.420 [2024-07-12 17:14:02.857089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.420 qpair failed and we were unable to recover it. 00:25:03.420 [2024-07-12 17:14:02.857257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.420 [2024-07-12 17:14:02.857319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.420 qpair failed and we were unable to recover it. 00:25:03.420 [2024-07-12 17:14:02.857526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.420 [2024-07-12 17:14:02.857567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.420 qpair failed and we were unable to recover it. 00:25:03.420 [2024-07-12 17:14:02.857729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.420 [2024-07-12 17:14:02.857796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.420 qpair failed and we were unable to recover it. 00:25:03.420 [2024-07-12 17:14:02.857954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.420 [2024-07-12 17:14:02.857996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.420 qpair failed and we were unable to recover it. 00:25:03.420 [2024-07-12 17:14:02.858216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.420 [2024-07-12 17:14:02.858267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.420 qpair failed and we were unable to recover it. 00:25:03.420 [2024-07-12 17:14:02.858401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.420 [2024-07-12 17:14:02.858444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.420 qpair failed and we were unable to recover it. 00:25:03.420 [2024-07-12 17:14:02.858627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.420 [2024-07-12 17:14:02.858668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.420 qpair failed and we were unable to recover it. 
00:25:03.420 [2024-07-12 17:14:02.858880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.420 [2024-07-12 17:14:02.858935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.420 qpair failed and we were unable to recover it. 00:25:03.420 [2024-07-12 17:14:02.859057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.420 [2024-07-12 17:14:02.859103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.420 qpair failed and we were unable to recover it. 00:25:03.420 [2024-07-12 17:14:02.859317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.420 [2024-07-12 17:14:02.859365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.420 qpair failed and we were unable to recover it. 00:25:03.420 [2024-07-12 17:14:02.859537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.420 [2024-07-12 17:14:02.859578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.420 qpair failed and we were unable to recover it. 00:25:03.420 [2024-07-12 17:14:02.859840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.420 [2024-07-12 17:14:02.859883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.420 qpair failed and we were unable to recover it. 00:25:03.420 [2024-07-12 17:14:02.860001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.420 [2024-07-12 17:14:02.860053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.420 qpair failed and we were unable to recover it. 00:25:03.420 [2024-07-12 17:14:02.860253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.420 [2024-07-12 17:14:02.860295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.420 qpair failed and we were unable to recover it. 00:25:03.420 [2024-07-12 17:14:02.860504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.420 [2024-07-12 17:14:02.860557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.420 qpair failed and we were unable to recover it. 00:25:03.420 [2024-07-12 17:14:02.860780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.420 [2024-07-12 17:14:02.860833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.420 qpair failed and we were unable to recover it. 00:25:03.420 [2024-07-12 17:14:02.861002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.420 [2024-07-12 17:14:02.861070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.420 qpair failed and we were unable to recover it. 
00:25:03.420 [2024-07-12 17:14:02.861251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.420 [2024-07-12 17:14:02.861320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.420 qpair failed and we were unable to recover it. 00:25:03.420 [2024-07-12 17:14:02.861478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.420 [2024-07-12 17:14:02.861519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.420 qpair failed and we were unable to recover it. 00:25:03.420 [2024-07-12 17:14:02.861675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.420 [2024-07-12 17:14:02.861716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.420 qpair failed and we were unable to recover it. 00:25:03.420 [2024-07-12 17:14:02.861982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.420 [2024-07-12 17:14:02.862042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.420 qpair failed and we were unable to recover it. 00:25:03.420 [2024-07-12 17:14:02.862253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.420 [2024-07-12 17:14:02.862312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.420 qpair failed and we were unable to recover it. 00:25:03.420 [2024-07-12 17:14:02.862527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.420 [2024-07-12 17:14:02.862579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.420 qpair failed and we were unable to recover it. 00:25:03.420 [2024-07-12 17:14:02.862710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.420 [2024-07-12 17:14:02.862760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.420 qpair failed and we were unable to recover it. 00:25:03.420 [2024-07-12 17:14:02.862897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.420 [2024-07-12 17:14:02.862939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.420 qpair failed and we were unable to recover it. 00:25:03.420 [2024-07-12 17:14:02.863126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.420 [2024-07-12 17:14:02.863185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.420 qpair failed and we were unable to recover it. 00:25:03.420 [2024-07-12 17:14:02.863365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.420 [2024-07-12 17:14:02.863426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.420 qpair failed and we were unable to recover it. 
00:25:03.420 [2024-07-12 17:14:02.863632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.420 [2024-07-12 17:14:02.863683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.420 qpair failed and we were unable to recover it. 00:25:03.420 [2024-07-12 17:14:02.863895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.420 [2024-07-12 17:14:02.863966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.420 qpair failed and we were unable to recover it. 00:25:03.420 [2024-07-12 17:14:02.864183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.420 [2024-07-12 17:14:02.864246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.420 qpair failed and we were unable to recover it. 00:25:03.420 [2024-07-12 17:14:02.864398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.420 [2024-07-12 17:14:02.864468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.420 qpair failed and we were unable to recover it. 00:25:03.420 [2024-07-12 17:14:02.864687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.420 [2024-07-12 17:14:02.864750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.420 qpair failed and we were unable to recover it. 00:25:03.420 [2024-07-12 17:14:02.864889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.420 [2024-07-12 17:14:02.864968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.420 qpair failed and we were unable to recover it. 00:25:03.420 [2024-07-12 17:14:02.865180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.420 [2024-07-12 17:14:02.865239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.420 qpair failed and we were unable to recover it. 00:25:03.420 [2024-07-12 17:14:02.865424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.420 [2024-07-12 17:14:02.865485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.420 qpair failed and we were unable to recover it. 00:25:03.420 [2024-07-12 17:14:02.865650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.420 [2024-07-12 17:14:02.865706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.420 qpair failed and we were unable to recover it. 00:25:03.420 [2024-07-12 17:14:02.865881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.420 [2024-07-12 17:14:02.865942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.420 qpair failed and we were unable to recover it. 
00:25:03.421 [2024-07-12 17:14:02.866122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.421 [2024-07-12 17:14:02.866184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.421 qpair failed and we were unable to recover it. 00:25:03.421 [2024-07-12 17:14:02.866349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.421 [2024-07-12 17:14:02.866389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.421 qpair failed and we were unable to recover it. 00:25:03.421 [2024-07-12 17:14:02.866600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.421 [2024-07-12 17:14:02.866641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.421 qpair failed and we were unable to recover it. 00:25:03.421 [2024-07-12 17:14:02.866830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.421 [2024-07-12 17:14:02.866897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.421 qpair failed and we were unable to recover it. 00:25:03.421 [2024-07-12 17:14:02.867243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.421 [2024-07-12 17:14:02.867302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.421 qpair failed and we were unable to recover it. 00:25:03.421 [2024-07-12 17:14:02.867506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.421 [2024-07-12 17:14:02.867547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.421 qpair failed and we were unable to recover it. 00:25:03.421 [2024-07-12 17:14:02.867702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.421 [2024-07-12 17:14:02.867779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.421 qpair failed and we were unable to recover it. 00:25:03.421 [2024-07-12 17:14:02.867942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.421 [2024-07-12 17:14:02.868004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.421 qpair failed and we were unable to recover it. 00:25:03.421 [2024-07-12 17:14:02.868197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.421 [2024-07-12 17:14:02.868260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.421 qpair failed and we were unable to recover it. 00:25:03.421 [2024-07-12 17:14:02.868474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.421 [2024-07-12 17:14:02.868537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.421 qpair failed and we were unable to recover it. 
00:25:03.421 [2024-07-12 17:14:02.868729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.421 [2024-07-12 17:14:02.868793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.421 qpair failed and we were unable to recover it. 00:25:03.421 [2024-07-12 17:14:02.868986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.421 [2024-07-12 17:14:02.869052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.421 qpair failed and we were unable to recover it. 00:25:03.421 [2024-07-12 17:14:02.869272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.421 [2024-07-12 17:14:02.869333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.421 qpair failed and we were unable to recover it. 00:25:03.421 [2024-07-12 17:14:02.869521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.421 [2024-07-12 17:14:02.869579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.421 qpair failed and we were unable to recover it. 00:25:03.421 [2024-07-12 17:14:02.869814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.421 [2024-07-12 17:14:02.869876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.421 qpair failed and we were unable to recover it. 00:25:03.421 [2024-07-12 17:14:02.870054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.421 [2024-07-12 17:14:02.870117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.421 qpair failed and we were unable to recover it. 00:25:03.421 [2024-07-12 17:14:02.870314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.421 [2024-07-12 17:14:02.870366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.421 qpair failed and we were unable to recover it. 00:25:03.421 [2024-07-12 17:14:02.870520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.421 [2024-07-12 17:14:02.870561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.421 qpair failed and we were unable to recover it. 00:25:03.421 [2024-07-12 17:14:02.870779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.421 [2024-07-12 17:14:02.870821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.421 qpair failed and we were unable to recover it. 00:25:03.421 [2024-07-12 17:14:02.870991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.421 [2024-07-12 17:14:02.871069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.421 qpair failed and we were unable to recover it. 
00:25:03.421 [2024-07-12 17:14:02.871283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.421 [2024-07-12 17:14:02.871344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.421 qpair failed and we were unable to recover it. 00:25:03.421 [2024-07-12 17:14:02.871553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.421 [2024-07-12 17:14:02.871594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.421 qpair failed and we were unable to recover it. 00:25:03.421 [2024-07-12 17:14:02.871756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.421 [2024-07-12 17:14:02.871806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.421 qpair failed and we were unable to recover it. 00:25:03.421 [2024-07-12 17:14:02.871986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.421 [2024-07-12 17:14:02.872050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.421 qpair failed and we were unable to recover it. 00:25:03.421 [2024-07-12 17:14:02.872327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.421 [2024-07-12 17:14:02.872388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.421 qpair failed and we were unable to recover it. 00:25:03.421 [2024-07-12 17:14:02.872591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.421 [2024-07-12 17:14:02.872631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.421 qpair failed and we were unable to recover it. 00:25:03.421 [2024-07-12 17:14:02.872821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.421 [2024-07-12 17:14:02.872888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.421 qpair failed and we were unable to recover it. 00:25:03.421 [2024-07-12 17:14:02.873071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.421 [2024-07-12 17:14:02.873131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.421 qpair failed and we were unable to recover it. 00:25:03.421 [2024-07-12 17:14:02.873331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.421 [2024-07-12 17:14:02.873400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.421 qpair failed and we were unable to recover it. 00:25:03.421 [2024-07-12 17:14:02.873576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.421 [2024-07-12 17:14:02.873618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.421 qpair failed and we were unable to recover it. 
00:25:03.421 [2024-07-12 17:14:02.873833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.421 [2024-07-12 17:14:02.873900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.421 qpair failed and we were unable to recover it. 00:25:03.421 [2024-07-12 17:14:02.874072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.421 [2024-07-12 17:14:02.874143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.421 qpair failed and we were unable to recover it. 00:25:03.421 [2024-07-12 17:14:02.874303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.421 [2024-07-12 17:14:02.874344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.421 qpair failed and we were unable to recover it. 00:25:03.421 [2024-07-12 17:14:02.874512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.421 [2024-07-12 17:14:02.874553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.421 qpair failed and we were unable to recover it. 00:25:03.421 [2024-07-12 17:14:02.874716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.421 [2024-07-12 17:14:02.874769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.421 qpair failed and we were unable to recover it. 00:25:03.421 [2024-07-12 17:14:02.874941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.421 [2024-07-12 17:14:02.874994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.421 qpair failed and we were unable to recover it. 00:25:03.421 [2024-07-12 17:14:02.875161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.421 [2024-07-12 17:14:02.875213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.421 qpair failed and we were unable to recover it. 00:25:03.421 [2024-07-12 17:14:02.875379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.421 [2024-07-12 17:14:02.875426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.421 qpair failed and we were unable to recover it. 00:25:03.421 [2024-07-12 17:14:02.875609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.421 [2024-07-12 17:14:02.875656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.421 qpair failed and we were unable to recover it. 00:25:03.421 [2024-07-12 17:14:02.875875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.421 [2024-07-12 17:14:02.875937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.422 qpair failed and we were unable to recover it. 
00:25:03.422 [2024-07-12 17:14:02.876083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.422 [2024-07-12 17:14:02.876132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.422 qpair failed and we were unable to recover it. 00:25:03.422 [2024-07-12 17:14:02.876297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.422 [2024-07-12 17:14:02.876358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.422 qpair failed and we were unable to recover it. 00:25:03.422 [2024-07-12 17:14:02.876655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.422 [2024-07-12 17:14:02.876696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.422 qpair failed and we were unable to recover it. 00:25:03.422 [2024-07-12 17:14:02.876956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.422 [2024-07-12 17:14:02.877018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.422 qpair failed and we were unable to recover it. 00:25:03.422 [2024-07-12 17:14:02.877234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.422 [2024-07-12 17:14:02.877296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.422 qpair failed and we were unable to recover it. 00:25:03.422 [2024-07-12 17:14:02.877444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.422 [2024-07-12 17:14:02.877497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.422 qpair failed and we were unable to recover it. 00:25:03.422 [2024-07-12 17:14:02.877663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.422 [2024-07-12 17:14:02.877715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.422 qpair failed and we were unable to recover it. 00:25:03.422 [2024-07-12 17:14:02.877956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.422 [2024-07-12 17:14:02.878015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.422 qpair failed and we were unable to recover it. 00:25:03.422 [2024-07-12 17:14:02.878226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.422 [2024-07-12 17:14:02.878289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.422 qpair failed and we were unable to recover it. 00:25:03.422 [2024-07-12 17:14:02.878474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.422 [2024-07-12 17:14:02.878514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.422 qpair failed and we were unable to recover it. 
00:25:03.422 [2024-07-12 17:14:02.878697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.422 [2024-07-12 17:14:02.878747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.422 qpair failed and we were unable to recover it. 00:25:03.422 [2024-07-12 17:14:02.878975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.422 [2024-07-12 17:14:02.879038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.422 qpair failed and we were unable to recover it. 00:25:03.422 [2024-07-12 17:14:02.879251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.422 [2024-07-12 17:14:02.879312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.422 qpair failed and we were unable to recover it. 00:25:03.422 [2024-07-12 17:14:02.879449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.422 [2024-07-12 17:14:02.879510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.422 qpair failed and we were unable to recover it. 00:25:03.422 [2024-07-12 17:14:02.879668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.422 [2024-07-12 17:14:02.879720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.422 qpair failed and we were unable to recover it. 00:25:03.422 [2024-07-12 17:14:02.879982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.422 [2024-07-12 17:14:02.880040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.422 qpair failed and we were unable to recover it. 00:25:03.422 [2024-07-12 17:14:02.880206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.422 [2024-07-12 17:14:02.880272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.422 qpair failed and we were unable to recover it. 00:25:03.422 [2024-07-12 17:14:02.880458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.422 [2024-07-12 17:14:02.880517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.422 qpair failed and we were unable to recover it. 00:25:03.422 [2024-07-12 17:14:02.880701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.422 [2024-07-12 17:14:02.880755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.422 qpair failed and we were unable to recover it. 00:25:03.422 [2024-07-12 17:14:02.880980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.422 [2024-07-12 17:14:02.881045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.422 qpair failed and we were unable to recover it. 
00:25:03.422 [2024-07-12 17:14:02.881230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.422 [2024-07-12 17:14:02.881290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.422 qpair failed and we were unable to recover it. 00:25:03.422 [2024-07-12 17:14:02.881459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.422 [2024-07-12 17:14:02.881525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.422 qpair failed and we were unable to recover it. 00:25:03.422 [2024-07-12 17:14:02.881761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.422 [2024-07-12 17:14:02.881803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.422 qpair failed and we were unable to recover it. 00:25:03.422 [2024-07-12 17:14:02.881993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.422 [2024-07-12 17:14:02.882062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.422 qpair failed and we were unable to recover it. 00:25:03.422 [2024-07-12 17:14:02.882278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.422 [2024-07-12 17:14:02.882319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.422 qpair failed and we were unable to recover it. 00:25:03.422 [2024-07-12 17:14:02.882461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.422 [2024-07-12 17:14:02.882527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.422 qpair failed and we were unable to recover it. 00:25:03.422 [2024-07-12 17:14:02.882748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.422 [2024-07-12 17:14:02.882791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.422 qpair failed and we were unable to recover it. 00:25:03.422 [2024-07-12 17:14:02.883027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.422 [2024-07-12 17:14:02.883103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.422 qpair failed and we were unable to recover it. 00:25:03.422 [2024-07-12 17:14:02.883298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.422 [2024-07-12 17:14:02.883359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.422 qpair failed and we were unable to recover it. 00:25:03.422 [2024-07-12 17:14:02.883568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.422 [2024-07-12 17:14:02.883634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.422 qpair failed and we were unable to recover it. 
00:25:03.422 [2024-07-12 17:14:02.883769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.422 [2024-07-12 17:14:02.883811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.422 qpair failed and we were unable to recover it. 00:25:03.422 [2024-07-12 17:14:02.883977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.422 [2024-07-12 17:14:02.884047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.422 qpair failed and we were unable to recover it. 00:25:03.422 [2024-07-12 17:14:02.884187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.422 [2024-07-12 17:14:02.884261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.422 qpair failed and we were unable to recover it. 00:25:03.422 [2024-07-12 17:14:02.884414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.422 [2024-07-12 17:14:02.884481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.422 qpair failed and we were unable to recover it. 00:25:03.422 [2024-07-12 17:14:02.884637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.423 [2024-07-12 17:14:02.884678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.423 qpair failed and we were unable to recover it. 00:25:03.423 [2024-07-12 17:14:02.884894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.423 [2024-07-12 17:14:02.884956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.423 qpair failed and we were unable to recover it. 00:25:03.423 [2024-07-12 17:14:02.885121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.423 [2024-07-12 17:14:02.885186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.423 qpair failed and we were unable to recover it. 00:25:03.423 [2024-07-12 17:14:02.885310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.423 [2024-07-12 17:14:02.885359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.423 qpair failed and we were unable to recover it. 00:25:03.423 [2024-07-12 17:14:02.885528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.423 [2024-07-12 17:14:02.885576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.423 qpair failed and we were unable to recover it. 00:25:03.423 [2024-07-12 17:14:02.885749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.423 [2024-07-12 17:14:02.885791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.423 qpair failed and we were unable to recover it. 
00:25:03.423 [2024-07-12 17:14:02.885969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.423 [2024-07-12 17:14:02.886011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.423 qpair failed and we were unable to recover it. 00:25:03.423 [2024-07-12 17:14:02.886197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.423 [2024-07-12 17:14:02.886238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.423 qpair failed and we were unable to recover it. 00:25:03.423 [2024-07-12 17:14:02.886385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.423 [2024-07-12 17:14:02.886427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.423 qpair failed and we were unable to recover it. 00:25:03.423 [2024-07-12 17:14:02.886572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.423 [2024-07-12 17:14:02.886614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.423 qpair failed and we were unable to recover it. 00:25:03.423 [2024-07-12 17:14:02.886751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.423 [2024-07-12 17:14:02.886792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.423 qpair failed and we were unable to recover it. 00:25:03.423 [2024-07-12 17:14:02.886970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.423 [2024-07-12 17:14:02.887011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.423 qpair failed and we were unable to recover it. 00:25:03.423 [2024-07-12 17:14:02.887165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.423 [2024-07-12 17:14:02.887207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.423 qpair failed and we were unable to recover it. 00:25:03.423 [2024-07-12 17:14:02.887382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.423 [2024-07-12 17:14:02.887424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.423 qpair failed and we were unable to recover it. 00:25:03.423 [2024-07-12 17:14:02.887550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.423 [2024-07-12 17:14:02.887591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.423 qpair failed and we were unable to recover it. 00:25:03.423 [2024-07-12 17:14:02.887781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.423 [2024-07-12 17:14:02.887824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.423 qpair failed and we were unable to recover it. 
00:25:03.423 [2024-07-12 17:14:02.887974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.423 [2024-07-12 17:14:02.888016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.423 qpair failed and we were unable to recover it. 00:25:03.423 [2024-07-12 17:14:02.888151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.423 [2024-07-12 17:14:02.888192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.423 qpair failed and we were unable to recover it. 00:25:03.423 [2024-07-12 17:14:02.888406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.423 [2024-07-12 17:14:02.888448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.423 qpair failed and we were unable to recover it. 00:25:03.423 [2024-07-12 17:14:02.888605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.423 [2024-07-12 17:14:02.888646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.423 qpair failed and we were unable to recover it. 00:25:03.423 [2024-07-12 17:14:02.888772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.423 [2024-07-12 17:14:02.888814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.423 qpair failed and we were unable to recover it. 00:25:03.423 [2024-07-12 17:14:02.888959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.423 [2024-07-12 17:14:02.889001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.423 qpair failed and we were unable to recover it. 00:25:03.423 [2024-07-12 17:14:02.889147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.423 [2024-07-12 17:14:02.889188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.423 qpair failed and we were unable to recover it. 00:25:03.423 [2024-07-12 17:14:02.889334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.423 [2024-07-12 17:14:02.889375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.423 qpair failed and we were unable to recover it. 00:25:03.423 [2024-07-12 17:14:02.889510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.423 [2024-07-12 17:14:02.889551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.423 qpair failed and we were unable to recover it. 00:25:03.423 [2024-07-12 17:14:02.889689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.423 [2024-07-12 17:14:02.889731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.423 qpair failed and we were unable to recover it. 
00:25:03.423 [2024-07-12 17:14:02.889896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.423 [2024-07-12 17:14:02.889938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.423 qpair failed and we were unable to recover it. 00:25:03.423 [2024-07-12 17:14:02.890091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.423 [2024-07-12 17:14:02.890132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.423 qpair failed and we were unable to recover it. 00:25:03.423 [2024-07-12 17:14:02.890284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.423 [2024-07-12 17:14:02.890325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.423 qpair failed and we were unable to recover it. 00:25:03.423 [2024-07-12 17:14:02.890473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.423 [2024-07-12 17:14:02.890514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.423 qpair failed and we were unable to recover it. 00:25:03.423 [2024-07-12 17:14:02.890633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.423 [2024-07-12 17:14:02.890674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.423 qpair failed and we were unable to recover it. 00:25:03.423 [2024-07-12 17:14:02.890916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.423 [2024-07-12 17:14:02.890959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.423 qpair failed and we were unable to recover it. 00:25:03.423 [2024-07-12 17:14:02.891107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.423 [2024-07-12 17:14:02.891147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.423 qpair failed and we were unable to recover it. 00:25:03.423 [2024-07-12 17:14:02.891315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.423 [2024-07-12 17:14:02.891356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.423 qpair failed and we were unable to recover it. 00:25:03.423 [2024-07-12 17:14:02.891511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.423 [2024-07-12 17:14:02.891552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.423 qpair failed and we were unable to recover it. 00:25:03.423 [2024-07-12 17:14:02.891729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.423 [2024-07-12 17:14:02.891783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.423 qpair failed and we were unable to recover it. 
00:25:03.423 [2024-07-12 17:14:02.891938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.423 [2024-07-12 17:14:02.891980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.423 qpair failed and we were unable to recover it. 00:25:03.423 [2024-07-12 17:14:02.892121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.423 [2024-07-12 17:14:02.892162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.423 qpair failed and we were unable to recover it. 00:25:03.423 [2024-07-12 17:14:02.892329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.423 [2024-07-12 17:14:02.892370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.423 qpair failed and we were unable to recover it. 00:25:03.423 [2024-07-12 17:14:02.892516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.423 [2024-07-12 17:14:02.892557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.423 qpair failed and we were unable to recover it. 00:25:03.423 [2024-07-12 17:14:02.892709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-07-12 17:14:02.892770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-07-12 17:14:02.892921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-07-12 17:14:02.892962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-07-12 17:14:02.893080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-07-12 17:14:02.893120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-07-12 17:14:02.893272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-07-12 17:14:02.893313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-07-12 17:14:02.893533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-07-12 17:14:02.893580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-07-12 17:14:02.893731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-07-12 17:14:02.893788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 
00:25:03.424 [2024-07-12 17:14:02.893938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-07-12 17:14:02.893979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-07-12 17:14:02.894143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-07-12 17:14:02.894184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-07-12 17:14:02.894351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-07-12 17:14:02.894393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-07-12 17:14:02.894545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-07-12 17:14:02.894586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-07-12 17:14:02.894748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-07-12 17:14:02.894790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-07-12 17:14:02.894978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-07-12 17:14:02.895019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-07-12 17:14:02.895172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-07-12 17:14:02.895213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-07-12 17:14:02.895373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-07-12 17:14:02.895414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-07-12 17:14:02.895539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-07-12 17:14:02.895580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-07-12 17:14:02.895701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-07-12 17:14:02.895764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 
00:25:03.424 [2024-07-12 17:14:02.895926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-07-12 17:14:02.895967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-07-12 17:14:02.896152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-07-12 17:14:02.896193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-07-12 17:14:02.896337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-07-12 17:14:02.896378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-07-12 17:14:02.896542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-07-12 17:14:02.896584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-07-12 17:14:02.896759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-07-12 17:14:02.896801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-07-12 17:14:02.896951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-07-12 17:14:02.896992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-07-12 17:14:02.897196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-07-12 17:14:02.897238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-07-12 17:14:02.897383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-07-12 17:14:02.897424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-07-12 17:14:02.897578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-07-12 17:14:02.897619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-07-12 17:14:02.897774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-07-12 17:14:02.897816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 
00:25:03.424 [2024-07-12 17:14:02.897992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-07-12 17:14:02.898034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-07-12 17:14:02.898205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-07-12 17:14:02.898247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-07-12 17:14:02.898370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-07-12 17:14:02.898411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-07-12 17:14:02.898632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-07-12 17:14:02.898674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-07-12 17:14:02.898890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-07-12 17:14:02.898953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-07-12 17:14:02.899103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-07-12 17:14:02.899167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-07-12 17:14:02.899339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-07-12 17:14:02.899380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-07-12 17:14:02.899505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-07-12 17:14:02.899546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-07-12 17:14:02.899735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-07-12 17:14:02.899788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-07-12 17:14:02.899935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-07-12 17:14:02.899999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 
00:25:03.424 [2024-07-12 17:14:02.900173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-07-12 17:14:02.900236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-07-12 17:14:02.900417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-07-12 17:14:02.900476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-07-12 17:14:02.900627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-07-12 17:14:02.900668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-07-12 17:14:02.900855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.424 [2024-07-12 17:14:02.900897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.424 qpair failed and we were unable to recover it. 00:25:03.424 [2024-07-12 17:14:02.901024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-07-12 17:14:02.901065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-07-12 17:14:02.901244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-07-12 17:14:02.901285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-07-12 17:14:02.901438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-07-12 17:14:02.901479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-07-12 17:14:02.901604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-07-12 17:14:02.901645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-07-12 17:14:02.901833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-07-12 17:14:02.901902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-07-12 17:14:02.902032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-07-12 17:14:02.902074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 
00:25:03.425 [2024-07-12 17:14:02.902203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-07-12 17:14:02.902245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-07-12 17:14:02.902418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-07-12 17:14:02.902459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-07-12 17:14:02.902606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-07-12 17:14:02.902648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-07-12 17:14:02.902830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-07-12 17:14:02.902872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-07-12 17:14:02.903051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-07-12 17:14:02.903098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-07-12 17:14:02.903243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-07-12 17:14:02.903285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-07-12 17:14:02.903400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-07-12 17:14:02.903441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-07-12 17:14:02.903602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-07-12 17:14:02.903643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-07-12 17:14:02.903773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-07-12 17:14:02.903816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-07-12 17:14:02.903969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-07-12 17:14:02.904010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 
00:25:03.425 [2024-07-12 17:14:02.904200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-07-12 17:14:02.904241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-07-12 17:14:02.904365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-07-12 17:14:02.904406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-07-12 17:14:02.904631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-07-12 17:14:02.904672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-07-12 17:14:02.904832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-07-12 17:14:02.904874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-07-12 17:14:02.905006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-07-12 17:14:02.905047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-07-12 17:14:02.905236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-07-12 17:14:02.905277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-07-12 17:14:02.905426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-07-12 17:14:02.905467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-07-12 17:14:02.905587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-07-12 17:14:02.905628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-07-12 17:14:02.905787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-07-12 17:14:02.905830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-07-12 17:14:02.905980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-07-12 17:14:02.906022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 
00:25:03.425 [2024-07-12 17:14:02.906201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-07-12 17:14:02.906242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-07-12 17:14:02.906390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-07-12 17:14:02.906431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-07-12 17:14:02.906584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-07-12 17:14:02.906626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-07-12 17:14:02.906776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-07-12 17:14:02.906819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-07-12 17:14:02.907014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-07-12 17:14:02.907066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-07-12 17:14:02.907233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-07-12 17:14:02.907274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-07-12 17:14:02.907450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-07-12 17:14:02.907491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-07-12 17:14:02.907622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-07-12 17:14:02.907663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-07-12 17:14:02.907789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-07-12 17:14:02.907831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-07-12 17:14:02.907987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-07-12 17:14:02.908029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 
00:25:03.425 [2024-07-12 17:14:02.908213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-07-12 17:14:02.908255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-07-12 17:14:02.908405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-07-12 17:14:02.908446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-07-12 17:14:02.908599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-07-12 17:14:02.908640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.425 qpair failed and we were unable to recover it. 00:25:03.425 [2024-07-12 17:14:02.908799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.425 [2024-07-12 17:14:02.908841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-07-12 17:14:02.908959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-07-12 17:14:02.909001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-07-12 17:14:02.909184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-07-12 17:14:02.909236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-07-12 17:14:02.909393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-07-12 17:14:02.909434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-07-12 17:14:02.909580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-07-12 17:14:02.909621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-07-12 17:14:02.909798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-07-12 17:14:02.909846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-07-12 17:14:02.909997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-07-12 17:14:02.910039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 
00:25:03.426 [2024-07-12 17:14:02.910190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-07-12 17:14:02.910232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-07-12 17:14:02.910402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-07-12 17:14:02.910443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-07-12 17:14:02.910594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-07-12 17:14:02.910635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-07-12 17:14:02.910787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-07-12 17:14:02.910829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-07-12 17:14:02.911057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-07-12 17:14:02.911098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-07-12 17:14:02.911244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-07-12 17:14:02.911286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-07-12 17:14:02.911470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-07-12 17:14:02.911511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-07-12 17:14:02.911668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-07-12 17:14:02.911709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-07-12 17:14:02.911892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-07-12 17:14:02.911964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-07-12 17:14:02.912120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-07-12 17:14:02.912161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 
00:25:03.426 [2024-07-12 17:14:02.912377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-07-12 17:14:02.912418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-07-12 17:14:02.912591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-07-12 17:14:02.912632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-07-12 17:14:02.912787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-07-12 17:14:02.912829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-07-12 17:14:02.912984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-07-12 17:14:02.913060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-07-12 17:14:02.913234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-07-12 17:14:02.913275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-07-12 17:14:02.913399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-07-12 17:14:02.913440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-07-12 17:14:02.913603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-07-12 17:14:02.913644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-07-12 17:14:02.913803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-07-12 17:14:02.913846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-07-12 17:14:02.913971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-07-12 17:14:02.914012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-07-12 17:14:02.914186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-07-12 17:14:02.914227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 
00:25:03.426 [2024-07-12 17:14:02.914380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-07-12 17:14:02.914421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-07-12 17:14:02.914566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-07-12 17:14:02.914606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-07-12 17:14:02.914772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-07-12 17:14:02.914814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-07-12 17:14:02.914991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-07-12 17:14:02.915033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-07-12 17:14:02.915201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-07-12 17:14:02.915241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-07-12 17:14:02.915438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-07-12 17:14:02.915480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-07-12 17:14:02.915606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-07-12 17:14:02.915648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-07-12 17:14:02.915803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-07-12 17:14:02.915844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-07-12 17:14:02.916022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-07-12 17:14:02.916063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-07-12 17:14:02.916238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-07-12 17:14:02.916300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 
00:25:03.426 [2024-07-12 17:14:02.916430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-07-12 17:14:02.916471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-07-12 17:14:02.916597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-07-12 17:14:02.916638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-07-12 17:14:02.916855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.426 [2024-07-12 17:14:02.916897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.426 qpair failed and we were unable to recover it. 00:25:03.426 [2024-07-12 17:14:02.917051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-07-12 17:14:02.917092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-07-12 17:14:02.917222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-07-12 17:14:02.917263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-07-12 17:14:02.917428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-07-12 17:14:02.917469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-07-12 17:14:02.917622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-07-12 17:14:02.917663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-07-12 17:14:02.917811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-07-12 17:14:02.917853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-07-12 17:14:02.918006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-07-12 17:14:02.918053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-07-12 17:14:02.918171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-07-12 17:14:02.918212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 
00:25:03.427 [2024-07-12 17:14:02.918365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-07-12 17:14:02.918406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-07-12 17:14:02.918552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-07-12 17:14:02.918593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-07-12 17:14:02.918707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-07-12 17:14:02.918758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-07-12 17:14:02.918915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-07-12 17:14:02.918955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-07-12 17:14:02.919134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-07-12 17:14:02.919174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-07-12 17:14:02.919318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-07-12 17:14:02.919359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-07-12 17:14:02.919512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-07-12 17:14:02.919553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-07-12 17:14:02.919710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-07-12 17:14:02.919761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-07-12 17:14:02.919910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-07-12 17:14:02.919951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-07-12 17:14:02.920071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-07-12 17:14:02.920112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 
00:25:03.427 [2024-07-12 17:14:02.920262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-07-12 17:14:02.920303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-07-12 17:14:02.920448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-07-12 17:14:02.920490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-07-12 17:14:02.920639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-07-12 17:14:02.920680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-07-12 17:14:02.920821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-07-12 17:14:02.920863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-07-12 17:14:02.921079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-07-12 17:14:02.921119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-07-12 17:14:02.921268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-07-12 17:14:02.921309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-07-12 17:14:02.921435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-07-12 17:14:02.921476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-07-12 17:14:02.921620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-07-12 17:14:02.921661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-07-12 17:14:02.921821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-07-12 17:14:02.921864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-07-12 17:14:02.922074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-07-12 17:14:02.922115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 
00:25:03.427 [2024-07-12 17:14:02.922239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-07-12 17:14:02.922280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-07-12 17:14:02.922432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-07-12 17:14:02.922473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-07-12 17:14:02.922625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-07-12 17:14:02.922666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-07-12 17:14:02.922806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-07-12 17:14:02.922848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-07-12 17:14:02.922968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-07-12 17:14:02.923009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-07-12 17:14:02.923223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-07-12 17:14:02.923264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-07-12 17:14:02.923391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-07-12 17:14:02.923432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.427 qpair failed and we were unable to recover it. 00:25:03.427 [2024-07-12 17:14:02.923558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.427 [2024-07-12 17:14:02.923599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.428 qpair failed and we were unable to recover it. 00:25:03.428 [2024-07-12 17:14:02.923833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.428 [2024-07-12 17:14:02.923875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.428 qpair failed and we were unable to recover it. 00:25:03.428 [2024-07-12 17:14:02.924110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.428 [2024-07-12 17:14:02.924151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.428 qpair failed and we were unable to recover it. 
00:25:03.428 [2024-07-12 17:14:02.924341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.428 [2024-07-12 17:14:02.924382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.428 qpair failed and we were unable to recover it. 00:25:03.428 [2024-07-12 17:14:02.924543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.428 [2024-07-12 17:14:02.924585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.428 qpair failed and we were unable to recover it. 00:25:03.428 [2024-07-12 17:14:02.924735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.428 [2024-07-12 17:14:02.924784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.428 qpair failed and we were unable to recover it. 00:25:03.428 [2024-07-12 17:14:02.924946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.428 [2024-07-12 17:14:02.924987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.428 qpair failed and we were unable to recover it. 00:25:03.428 [2024-07-12 17:14:02.925207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.428 [2024-07-12 17:14:02.925248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.428 qpair failed and we were unable to recover it. 00:25:03.428 [2024-07-12 17:14:02.925393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.428 [2024-07-12 17:14:02.925434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.428 qpair failed and we were unable to recover it. 00:25:03.428 [2024-07-12 17:14:02.925594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.428 [2024-07-12 17:14:02.925634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.428 qpair failed and we were unable to recover it. 00:25:03.428 [2024-07-12 17:14:02.925776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.428 [2024-07-12 17:14:02.925818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.428 qpair failed and we were unable to recover it. 00:25:03.428 [2024-07-12 17:14:02.926032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.428 [2024-07-12 17:14:02.926107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.428 qpair failed and we were unable to recover it. 00:25:03.428 [2024-07-12 17:14:02.926262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.428 [2024-07-12 17:14:02.926331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.428 qpair failed and we were unable to recover it. 
00:25:03.428 [2024-07-12 17:14:02.926446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.428 [2024-07-12 17:14:02.926487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.428 qpair failed and we were unable to recover it. 00:25:03.428 [2024-07-12 17:14:02.926660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.428 [2024-07-12 17:14:02.926701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.428 qpair failed and we were unable to recover it. 00:25:03.428 [2024-07-12 17:14:02.926835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.428 [2024-07-12 17:14:02.926877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.428 qpair failed and we were unable to recover it. 00:25:03.428 [2024-07-12 17:14:02.927113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.428 [2024-07-12 17:14:02.927154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.428 qpair failed and we were unable to recover it. 00:25:03.428 [2024-07-12 17:14:02.927398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.428 [2024-07-12 17:14:02.927462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.428 qpair failed and we were unable to recover it. 00:25:03.428 [2024-07-12 17:14:02.927609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.428 [2024-07-12 17:14:02.927651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.428 qpair failed and we were unable to recover it. 00:25:03.428 [2024-07-12 17:14:02.927835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.428 [2024-07-12 17:14:02.927897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.428 qpair failed and we were unable to recover it. 00:25:03.428 [2024-07-12 17:14:02.928062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.428 [2024-07-12 17:14:02.928131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.428 qpair failed and we were unable to recover it. 00:25:03.428 [2024-07-12 17:14:02.928285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.428 [2024-07-12 17:14:02.928345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.428 qpair failed and we were unable to recover it. 00:25:03.428 [2024-07-12 17:14:02.928472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.428 [2024-07-12 17:14:02.928513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.428 qpair failed and we were unable to recover it. 
00:25:03.428 [2024-07-12 17:14:02.928657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.428 [2024-07-12 17:14:02.928698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.428 qpair failed and we were unable to recover it. 00:25:03.428 [2024-07-12 17:14:02.928856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.428 [2024-07-12 17:14:02.928898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.428 qpair failed and we were unable to recover it. 00:25:03.428 [2024-07-12 17:14:02.929034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.428 [2024-07-12 17:14:02.929076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.428 qpair failed and we were unable to recover it. 00:25:03.428 [2024-07-12 17:14:02.929260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.428 [2024-07-12 17:14:02.929302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.428 qpair failed and we were unable to recover it. 00:25:03.428 [2024-07-12 17:14:02.929459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.428 [2024-07-12 17:14:02.929500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.428 qpair failed and we were unable to recover it. 00:25:03.428 [2024-07-12 17:14:02.929648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.428 [2024-07-12 17:14:02.929690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.428 qpair failed and we were unable to recover it. 00:25:03.428 [2024-07-12 17:14:02.929882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.428 [2024-07-12 17:14:02.929924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.428 qpair failed and we were unable to recover it. 00:25:03.428 [2024-07-12 17:14:02.930044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.428 [2024-07-12 17:14:02.930086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.428 qpair failed and we were unable to recover it. 00:25:03.428 [2024-07-12 17:14:02.930239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.428 [2024-07-12 17:14:02.930280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.428 qpair failed and we were unable to recover it. 00:25:03.428 [2024-07-12 17:14:02.930461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.428 [2024-07-12 17:14:02.930502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.428 qpair failed and we were unable to recover it. 
00:25:03.428 [2024-07-12 17:14:02.930645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.428 [2024-07-12 17:14:02.930687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.428 qpair failed and we were unable to recover it. 00:25:03.428 [2024-07-12 17:14:02.930900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.428 [2024-07-12 17:14:02.930943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.428 qpair failed and we were unable to recover it. 00:25:03.428 [2024-07-12 17:14:02.931177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.428 [2024-07-12 17:14:02.931218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.428 qpair failed and we were unable to recover it. 00:25:03.428 [2024-07-12 17:14:02.931464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.428 [2024-07-12 17:14:02.931506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.428 qpair failed and we were unable to recover it. 00:25:03.428 [2024-07-12 17:14:02.931635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.428 [2024-07-12 17:14:02.931677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.428 qpair failed and we were unable to recover it. 00:25:03.428 [2024-07-12 17:14:02.931859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.428 [2024-07-12 17:14:02.931937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.428 qpair failed and we were unable to recover it. 00:25:03.428 [2024-07-12 17:14:02.932053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.428 [2024-07-12 17:14:02.932095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.428 qpair failed and we were unable to recover it. 00:25:03.428 [2024-07-12 17:14:02.932303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.428 [2024-07-12 17:14:02.932362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.428 qpair failed and we were unable to recover it. 00:25:03.429 [2024-07-12 17:14:02.932482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.429 [2024-07-12 17:14:02.932523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.429 qpair failed and we were unable to recover it. 00:25:03.429 [2024-07-12 17:14:02.932670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.429 [2024-07-12 17:14:02.932711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.429 qpair failed and we were unable to recover it. 
00:25:03.429 [2024-07-12 17:14:02.932920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.429 [2024-07-12 17:14:02.932982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.429 qpair failed and we were unable to recover it. 00:25:03.429 [2024-07-12 17:14:02.933115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.429 [2024-07-12 17:14:02.933183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.429 qpair failed and we were unable to recover it. 00:25:03.429 [2024-07-12 17:14:02.933337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.429 [2024-07-12 17:14:02.933379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.429 qpair failed and we were unable to recover it. 00:25:03.429 [2024-07-12 17:14:02.933529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.429 [2024-07-12 17:14:02.933571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.429 qpair failed and we were unable to recover it. 00:25:03.429 [2024-07-12 17:14:02.933718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.429 [2024-07-12 17:14:02.933772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.429 qpair failed and we were unable to recover it. 00:25:03.429 [2024-07-12 17:14:02.933951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.429 [2024-07-12 17:14:02.933993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.429 qpair failed and we were unable to recover it. 00:25:03.429 [2024-07-12 17:14:02.934167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.429 [2024-07-12 17:14:02.934208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.429 qpair failed and we were unable to recover it. 00:25:03.429 [2024-07-12 17:14:02.934359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.429 [2024-07-12 17:14:02.934400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.429 qpair failed and we were unable to recover it. 00:25:03.429 [2024-07-12 17:14:02.934548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.429 [2024-07-12 17:14:02.934595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.429 qpair failed and we were unable to recover it. 00:25:03.429 [2024-07-12 17:14:02.934789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.429 [2024-07-12 17:14:02.934832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.429 qpair failed and we were unable to recover it. 
00:25:03.429 [2024-07-12 17:14:02.934983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.429 [2024-07-12 17:14:02.935025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.429 qpair failed and we were unable to recover it. 00:25:03.429 [2024-07-12 17:14:02.935174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.429 [2024-07-12 17:14:02.935215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.429 qpair failed and we were unable to recover it. 00:25:03.429 [2024-07-12 17:14:02.935337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.429 [2024-07-12 17:14:02.935379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.429 qpair failed and we were unable to recover it. 00:25:03.429 [2024-07-12 17:14:02.935614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.429 [2024-07-12 17:14:02.935655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.429 qpair failed and we were unable to recover it. 00:25:03.429 [2024-07-12 17:14:02.935826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.429 [2024-07-12 17:14:02.935889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.429 qpair failed and we were unable to recover it. 00:25:03.429 [2024-07-12 17:14:02.936059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.429 [2024-07-12 17:14:02.936127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.429 qpair failed and we were unable to recover it. 00:25:03.429 [2024-07-12 17:14:02.936291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.429 [2024-07-12 17:14:02.936353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.429 qpair failed and we were unable to recover it. 00:25:03.429 [2024-07-12 17:14:02.936475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.429 [2024-07-12 17:14:02.936523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.429 qpair failed and we were unable to recover it. 00:25:03.429 [2024-07-12 17:14:02.936689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.429 [2024-07-12 17:14:02.936730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.429 qpair failed and we were unable to recover it. 00:25:03.429 [2024-07-12 17:14:02.936936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.429 [2024-07-12 17:14:02.936996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.429 qpair failed and we were unable to recover it. 
00:25:03.429 [2024-07-12 17:14:02.937147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.429 [2024-07-12 17:14:02.937189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.429 qpair failed and we were unable to recover it. 00:25:03.429 [2024-07-12 17:14:02.937341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.429 [2024-07-12 17:14:02.937382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.429 qpair failed and we were unable to recover it. 00:25:03.429 [2024-07-12 17:14:02.937521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.429 [2024-07-12 17:14:02.937562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.429 qpair failed and we were unable to recover it. 00:25:03.429 [2024-07-12 17:14:02.937735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.429 [2024-07-12 17:14:02.937785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.429 qpair failed and we were unable to recover it. 00:25:03.429 [2024-07-12 17:14:02.938019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.429 [2024-07-12 17:14:02.938061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.429 qpair failed and we were unable to recover it. 00:25:03.429 [2024-07-12 17:14:02.938274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.429 [2024-07-12 17:14:02.938316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.429 qpair failed and we were unable to recover it. 00:25:03.429 [2024-07-12 17:14:02.938439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.429 [2024-07-12 17:14:02.938480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.429 qpair failed and we were unable to recover it. 00:25:03.429 [2024-07-12 17:14:02.938632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.429 [2024-07-12 17:14:02.938673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.429 qpair failed and we were unable to recover it. 00:25:03.429 [2024-07-12 17:14:02.938833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.429 [2024-07-12 17:14:02.938876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.429 qpair failed and we were unable to recover it. 00:25:03.429 [2024-07-12 17:14:02.939062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.429 [2024-07-12 17:14:02.939103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.429 qpair failed and we were unable to recover it. 
00:25:03.429 [2024-07-12 17:14:02.939257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.429 [2024-07-12 17:14:02.939325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.429 qpair failed and we were unable to recover it. 00:25:03.429 [2024-07-12 17:14:02.939549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.429 [2024-07-12 17:14:02.939590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.429 qpair failed and we were unable to recover it. 00:25:03.429 [2024-07-12 17:14:02.939834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.429 [2024-07-12 17:14:02.939896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.429 qpair failed and we were unable to recover it. 00:25:03.429 [2024-07-12 17:14:02.940084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.429 [2024-07-12 17:14:02.940125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.429 qpair failed and we were unable to recover it. 00:25:03.429 [2024-07-12 17:14:02.940270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.429 [2024-07-12 17:14:02.940311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.429 qpair failed and we were unable to recover it. 00:25:03.429 [2024-07-12 17:14:02.940496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.429 [2024-07-12 17:14:02.940537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.429 qpair failed and we were unable to recover it. 00:25:03.429 [2024-07-12 17:14:02.940772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.429 [2024-07-12 17:14:02.940813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.429 qpair failed and we were unable to recover it. 00:25:03.429 [2024-07-12 17:14:02.940973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.429 [2024-07-12 17:14:02.941034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.429 qpair failed and we were unable to recover it. 00:25:03.430 [2024-07-12 17:14:02.941179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.430 [2024-07-12 17:14:02.941220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.430 qpair failed and we were unable to recover it. 00:25:03.430 [2024-07-12 17:14:02.941394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.430 [2024-07-12 17:14:02.941434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.430 qpair failed and we were unable to recover it. 
00:25:03.430 [2024-07-12 17:14:02.941581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.430 [2024-07-12 17:14:02.941622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.430 qpair failed and we were unable to recover it. 00:25:03.430 [2024-07-12 17:14:02.941754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.430 [2024-07-12 17:14:02.941796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.430 qpair failed and we were unable to recover it. 00:25:03.430 [2024-07-12 17:14:02.941976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.430 [2024-07-12 17:14:02.942017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.430 qpair failed and we were unable to recover it. 00:25:03.430 [2024-07-12 17:14:02.942143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.430 [2024-07-12 17:14:02.942184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.430 qpair failed and we were unable to recover it. 00:25:03.430 [2024-07-12 17:14:02.942324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.430 [2024-07-12 17:14:02.942365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.430 qpair failed and we were unable to recover it. 00:25:03.430 [2024-07-12 17:14:02.942493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.430 [2024-07-12 17:14:02.942534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.430 qpair failed and we were unable to recover it. 00:25:03.430 [2024-07-12 17:14:02.942690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.430 [2024-07-12 17:14:02.942731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.430 qpair failed and we were unable to recover it. 00:25:03.430 [2024-07-12 17:14:02.942878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.430 [2024-07-12 17:14:02.942920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.430 qpair failed and we were unable to recover it. 00:25:03.430 [2024-07-12 17:14:02.943066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.430 [2024-07-12 17:14:02.943113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.430 qpair failed and we were unable to recover it. 00:25:03.430 [2024-07-12 17:14:02.943261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.430 [2024-07-12 17:14:02.943303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.430 qpair failed and we were unable to recover it. 
00:25:03.430 [2024-07-12 17:14:02.943462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.430 [2024-07-12 17:14:02.943503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.430 qpair failed and we were unable to recover it. 00:25:03.430 [2024-07-12 17:14:02.943672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.430 [2024-07-12 17:14:02.943713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.430 qpair failed and we were unable to recover it. 00:25:03.430 [2024-07-12 17:14:02.943887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.430 [2024-07-12 17:14:02.943928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.430 qpair failed and we were unable to recover it. 00:25:03.430 [2024-07-12 17:14:02.944076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.430 [2024-07-12 17:14:02.944118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.430 qpair failed and we were unable to recover it. 00:25:03.430 [2024-07-12 17:14:02.944297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.430 [2024-07-12 17:14:02.944339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.430 qpair failed and we were unable to recover it. 00:25:03.430 [2024-07-12 17:14:02.944469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.430 [2024-07-12 17:14:02.944510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.430 qpair failed and we were unable to recover it. 00:25:03.430 [2024-07-12 17:14:02.944659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.430 [2024-07-12 17:14:02.944700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.430 qpair failed and we were unable to recover it. 00:25:03.430 [2024-07-12 17:14:02.944865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.430 [2024-07-12 17:14:02.944908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.430 qpair failed and we were unable to recover it. 00:25:03.430 [2024-07-12 17:14:02.945086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.430 [2024-07-12 17:14:02.945127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.430 qpair failed and we were unable to recover it. 00:25:03.430 [2024-07-12 17:14:02.945273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.430 [2024-07-12 17:14:02.945314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.430 qpair failed and we were unable to recover it. 
00:25:03.430 [2024-07-12 17:14:02.945457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.430 [2024-07-12 17:14:02.945498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.430 qpair failed and we were unable to recover it. 00:25:03.430 [2024-07-12 17:14:02.945616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.430 [2024-07-12 17:14:02.945657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.430 qpair failed and we were unable to recover it. 00:25:03.430 [2024-07-12 17:14:02.945785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.430 [2024-07-12 17:14:02.945827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.430 qpair failed and we were unable to recover it. 00:25:03.430 [2024-07-12 17:14:02.945999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.430 [2024-07-12 17:14:02.946054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.430 qpair failed and we were unable to recover it. 00:25:03.430 [2024-07-12 17:14:02.946207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.430 [2024-07-12 17:14:02.946249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.430 qpair failed and we were unable to recover it. 00:25:03.430 [2024-07-12 17:14:02.946403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.430 [2024-07-12 17:14:02.946445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.430 qpair failed and we were unable to recover it. 00:25:03.430 [2024-07-12 17:14:02.946569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.430 [2024-07-12 17:14:02.946610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.430 qpair failed and we were unable to recover it. 00:25:03.430 [2024-07-12 17:14:02.946783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.430 [2024-07-12 17:14:02.946826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.430 qpair failed and we were unable to recover it. 00:25:03.430 [2024-07-12 17:14:02.946971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.430 [2024-07-12 17:14:02.947012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.430 qpair failed and we were unable to recover it. 00:25:03.430 [2024-07-12 17:14:02.947191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.430 [2024-07-12 17:14:02.947232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.430 qpair failed and we were unable to recover it. 
00:25:03.430 [2024-07-12 17:14:02.947345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.430 [2024-07-12 17:14:02.947386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.430 qpair failed and we were unable to recover it. 00:25:03.430 [2024-07-12 17:14:02.947567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.430 [2024-07-12 17:14:02.947609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.430 qpair failed and we were unable to recover it. 00:25:03.430 [2024-07-12 17:14:02.947749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.430 [2024-07-12 17:14:02.947790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.430 qpair failed and we were unable to recover it. 00:25:03.430 [2024-07-12 17:14:02.947911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.430 [2024-07-12 17:14:02.947953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.430 qpair failed and we were unable to recover it. 00:25:03.430 [2024-07-12 17:14:02.948075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.430 [2024-07-12 17:14:02.948117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.430 qpair failed and we were unable to recover it. 00:25:03.430 [2024-07-12 17:14:02.948270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.430 [2024-07-12 17:14:02.948312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.430 qpair failed and we were unable to recover it. 00:25:03.430 [2024-07-12 17:14:02.948438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.430 [2024-07-12 17:14:02.948480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.430 qpair failed and we were unable to recover it. 00:25:03.430 [2024-07-12 17:14:02.948604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.431 [2024-07-12 17:14:02.948645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.431 qpair failed and we were unable to recover it. 00:25:03.431 [2024-07-12 17:14:02.948796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.431 [2024-07-12 17:14:02.948838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.431 qpair failed and we were unable to recover it. 00:25:03.431 [2024-07-12 17:14:02.948972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.431 [2024-07-12 17:14:02.949013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.431 qpair failed and we were unable to recover it. 
00:25:03.431 [2024-07-12 17:14:02.949151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.431 [2024-07-12 17:14:02.949192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.431 qpair failed and we were unable to recover it. 00:25:03.431 [2024-07-12 17:14:02.949348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.431 [2024-07-12 17:14:02.949390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.431 qpair failed and we were unable to recover it. 00:25:03.431 [2024-07-12 17:14:02.949505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.431 [2024-07-12 17:14:02.949546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.431 qpair failed and we were unable to recover it. 00:25:03.431 [2024-07-12 17:14:02.949672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.431 [2024-07-12 17:14:02.949713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.431 qpair failed and we were unable to recover it. 00:25:03.431 [2024-07-12 17:14:02.949901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.431 [2024-07-12 17:14:02.949943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.431 qpair failed and we were unable to recover it. 00:25:03.431 [2024-07-12 17:14:02.950119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.431 [2024-07-12 17:14:02.950161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.431 qpair failed and we were unable to recover it. 00:25:03.431 [2024-07-12 17:14:02.950283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.431 [2024-07-12 17:14:02.950324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.431 qpair failed and we were unable to recover it. 00:25:03.431 [2024-07-12 17:14:02.950559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.431 [2024-07-12 17:14:02.950601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.431 qpair failed and we were unable to recover it. 00:25:03.431 [2024-07-12 17:14:02.950774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.431 [2024-07-12 17:14:02.950823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.431 qpair failed and we were unable to recover it. 00:25:03.431 [2024-07-12 17:14:02.950989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.431 [2024-07-12 17:14:02.951054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.431 qpair failed and we were unable to recover it. 
00:25:03.431 [2024-07-12 17:14:02.951222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.431 [2024-07-12 17:14:02.951304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.431 qpair failed and we were unable to recover it. 00:25:03.431 [2024-07-12 17:14:02.951464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.431 [2024-07-12 17:14:02.951505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.431 qpair failed and we were unable to recover it. 00:25:03.431 [2024-07-12 17:14:02.951632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.431 [2024-07-12 17:14:02.951673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.431 qpair failed and we were unable to recover it. 00:25:03.431 [2024-07-12 17:14:02.951826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.431 [2024-07-12 17:14:02.951868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.431 qpair failed and we were unable to recover it. 00:25:03.431 [2024-07-12 17:14:02.952025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.431 [2024-07-12 17:14:02.952066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.431 qpair failed and we were unable to recover it. 00:25:03.431 [2024-07-12 17:14:02.952192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.431 [2024-07-12 17:14:02.952233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.431 qpair failed and we were unable to recover it. 00:25:03.431 [2024-07-12 17:14:02.952383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.431 [2024-07-12 17:14:02.952424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.431 qpair failed and we were unable to recover it. 00:25:03.431 [2024-07-12 17:14:02.952590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.431 [2024-07-12 17:14:02.952631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.431 qpair failed and we were unable to recover it. 00:25:03.431 [2024-07-12 17:14:02.952786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.431 [2024-07-12 17:14:02.952829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.431 qpair failed and we were unable to recover it. 00:25:03.431 [2024-07-12 17:14:02.952981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.431 [2024-07-12 17:14:02.953022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.431 qpair failed and we were unable to recover it. 
00:25:03.431 [2024-07-12 17:14:02.953143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.431 [2024-07-12 17:14:02.953184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.431 qpair failed and we were unable to recover it. 00:25:03.431 [2024-07-12 17:14:02.953390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.431 [2024-07-12 17:14:02.953431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.431 qpair failed and we were unable to recover it. 00:25:03.431 [2024-07-12 17:14:02.953585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.431 [2024-07-12 17:14:02.953627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.431 qpair failed and we were unable to recover it. 00:25:03.431 [2024-07-12 17:14:02.953777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.431 [2024-07-12 17:14:02.953818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.431 qpair failed and we were unable to recover it. 00:25:03.431 [2024-07-12 17:14:02.953962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.431 [2024-07-12 17:14:02.954003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.431 qpair failed and we were unable to recover it. 00:25:03.431 [2024-07-12 17:14:02.954173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.431 [2024-07-12 17:14:02.954214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.431 qpair failed and we were unable to recover it. 00:25:03.431 [2024-07-12 17:14:02.954379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.431 [2024-07-12 17:14:02.954420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.431 qpair failed and we were unable to recover it. 00:25:03.431 [2024-07-12 17:14:02.954665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.431 [2024-07-12 17:14:02.954706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.431 qpair failed and we were unable to recover it. 00:25:03.431 [2024-07-12 17:14:02.954937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.431 [2024-07-12 17:14:02.954997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.431 qpair failed and we were unable to recover it. 00:25:03.431 [2024-07-12 17:14:02.955223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.431 [2024-07-12 17:14:02.955284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.431 qpair failed and we were unable to recover it. 
00:25:03.431 [2024-07-12 17:14:02.955435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.431 [2024-07-12 17:14:02.955477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420
00:25:03.431 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() failed, errno = 111; sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 17:14:02.955 through 17:14:02.976 ...]
00:25:03.434 [2024-07-12 17:14:02.976952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.434 [2024-07-12 17:14:02.977000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420
00:25:03.434 qpair failed and we were unable to recover it.
[... the same failure pattern then repeats for tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 from 17:14:02.977 through 17:14:02.998 ...]
00:25:03.437 [2024-07-12 17:14:02.998582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.437 [2024-07-12 17:14:02.998614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420
00:25:03.437 qpair failed and we were unable to recover it.
00:25:03.437 [2024-07-12 17:14:02.998726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.437 [2024-07-12 17:14:02.998775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.437 qpair failed and we were unable to recover it. 00:25:03.437 [2024-07-12 17:14:02.998921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.437 [2024-07-12 17:14:02.998946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.437 qpair failed and we were unable to recover it. 00:25:03.437 [2024-07-12 17:14:02.999036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.437 [2024-07-12 17:14:02.999076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.437 qpair failed and we were unable to recover it. 00:25:03.437 [2024-07-12 17:14:02.999226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.437 [2024-07-12 17:14:02.999275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.437 qpair failed and we were unable to recover it. 00:25:03.437 [2024-07-12 17:14:02.999369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.437 [2024-07-12 17:14:02.999402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.437 qpair failed and we were unable to recover it. 00:25:03.437 [2024-07-12 17:14:02.999538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.437 [2024-07-12 17:14:02.999570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.437 qpair failed and we were unable to recover it. 00:25:03.437 [2024-07-12 17:14:02.999704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.437 [2024-07-12 17:14:02.999744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.437 qpair failed and we were unable to recover it. 00:25:03.437 [2024-07-12 17:14:02.999876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.437 [2024-07-12 17:14:02.999908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.437 qpair failed and we were unable to recover it. 00:25:03.437 [2024-07-12 17:14:03.000005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.437 [2024-07-12 17:14:03.000038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.437 qpair failed and we were unable to recover it. 00:25:03.437 [2024-07-12 17:14:03.000156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.437 [2024-07-12 17:14:03.000188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.437 qpair failed and we were unable to recover it. 
00:25:03.437 [2024-07-12 17:14:03.000300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.437 [2024-07-12 17:14:03.000332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.437 qpair failed and we were unable to recover it. 00:25:03.437 [2024-07-12 17:14:03.000506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.437 [2024-07-12 17:14:03.000538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.437 qpair failed and we were unable to recover it. 00:25:03.437 [2024-07-12 17:14:03.000705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.437 [2024-07-12 17:14:03.000745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.437 qpair failed and we were unable to recover it. 00:25:03.437 [2024-07-12 17:14:03.000879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.437 [2024-07-12 17:14:03.000911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.437 qpair failed and we were unable to recover it. 00:25:03.437 [2024-07-12 17:14:03.001048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.437 [2024-07-12 17:14:03.001080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.437 qpair failed and we were unable to recover it. 00:25:03.437 [2024-07-12 17:14:03.001281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.437 [2024-07-12 17:14:03.001313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.437 qpair failed and we were unable to recover it. 00:25:03.437 [2024-07-12 17:14:03.001473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.437 [2024-07-12 17:14:03.001505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.437 qpair failed and we were unable to recover it. 00:25:03.437 [2024-07-12 17:14:03.001716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.437 [2024-07-12 17:14:03.001756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.437 qpair failed and we were unable to recover it. 00:25:03.437 [2024-07-12 17:14:03.001872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.437 [2024-07-12 17:14:03.001905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.437 qpair failed and we were unable to recover it. 00:25:03.437 [2024-07-12 17:14:03.002063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.437 [2024-07-12 17:14:03.002110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.437 qpair failed and we were unable to recover it. 
00:25:03.437 [2024-07-12 17:14:03.002251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.437 [2024-07-12 17:14:03.002282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.437 qpair failed and we were unable to recover it. 00:25:03.437 [2024-07-12 17:14:03.002460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.437 [2024-07-12 17:14:03.002492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.437 qpair failed and we were unable to recover it. 00:25:03.437 [2024-07-12 17:14:03.002682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.437 [2024-07-12 17:14:03.002713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.437 qpair failed and we were unable to recover it. 00:25:03.437 [2024-07-12 17:14:03.002867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.437 [2024-07-12 17:14:03.002916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.437 qpair failed and we were unable to recover it. 00:25:03.437 [2024-07-12 17:14:03.003009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.437 [2024-07-12 17:14:03.003046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.437 qpair failed and we were unable to recover it. 00:25:03.437 [2024-07-12 17:14:03.003236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.437 [2024-07-12 17:14:03.003291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.437 qpair failed and we were unable to recover it. 00:25:03.437 [2024-07-12 17:14:03.003478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.438 [2024-07-12 17:14:03.003514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.438 qpair failed and we were unable to recover it. 00:25:03.438 [2024-07-12 17:14:03.003647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.438 [2024-07-12 17:14:03.003686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.438 qpair failed and we were unable to recover it. 00:25:03.438 [2024-07-12 17:14:03.003834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.438 [2024-07-12 17:14:03.003883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.438 qpair failed and we were unable to recover it. 00:25:03.438 [2024-07-12 17:14:03.003992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.438 [2024-07-12 17:14:03.004024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.438 qpair failed and we were unable to recover it. 
00:25:03.438 [2024-07-12 17:14:03.004157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.438 [2024-07-12 17:14:03.004189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.438 qpair failed and we were unable to recover it. 00:25:03.438 [2024-07-12 17:14:03.004386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.438 [2024-07-12 17:14:03.004418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.438 qpair failed and we were unable to recover it. 00:25:03.438 [2024-07-12 17:14:03.004554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.438 [2024-07-12 17:14:03.004586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.438 qpair failed and we were unable to recover it. 00:25:03.438 [2024-07-12 17:14:03.004723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.438 [2024-07-12 17:14:03.004779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.438 qpair failed and we were unable to recover it. 00:25:03.438 [2024-07-12 17:14:03.004938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.438 [2024-07-12 17:14:03.004970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.438 qpair failed and we were unable to recover it. 00:25:03.438 [2024-07-12 17:14:03.005113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.438 [2024-07-12 17:14:03.005173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.438 qpair failed and we were unable to recover it. 00:25:03.438 [2024-07-12 17:14:03.005336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.438 [2024-07-12 17:14:03.005368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.438 qpair failed and we were unable to recover it. 00:25:03.438 [2024-07-12 17:14:03.005490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.438 [2024-07-12 17:14:03.005521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.438 qpair failed and we were unable to recover it. 00:25:03.438 [2024-07-12 17:14:03.005626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.438 [2024-07-12 17:14:03.005651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.438 qpair failed and we were unable to recover it. 00:25:03.438 [2024-07-12 17:14:03.005860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.438 [2024-07-12 17:14:03.005893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.438 qpair failed and we were unable to recover it. 
00:25:03.438 [2024-07-12 17:14:03.006024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.438 [2024-07-12 17:14:03.006057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.438 qpair failed and we were unable to recover it. 00:25:03.438 [2024-07-12 17:14:03.006186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.438 [2024-07-12 17:14:03.006218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.438 qpair failed and we were unable to recover it. 00:25:03.438 [2024-07-12 17:14:03.006367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.438 [2024-07-12 17:14:03.006398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.438 qpair failed and we were unable to recover it. 00:25:03.438 [2024-07-12 17:14:03.006591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.438 [2024-07-12 17:14:03.006623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.438 qpair failed and we were unable to recover it. 00:25:03.438 [2024-07-12 17:14:03.006733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.438 [2024-07-12 17:14:03.006772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.438 qpair failed and we were unable to recover it. 00:25:03.438 [2024-07-12 17:14:03.006909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.438 [2024-07-12 17:14:03.006941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.438 qpair failed and we were unable to recover it. 00:25:03.438 [2024-07-12 17:14:03.007061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.438 [2024-07-12 17:14:03.007093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.438 qpair failed and we were unable to recover it. 00:25:03.438 [2024-07-12 17:14:03.007227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.438 [2024-07-12 17:14:03.007260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.438 qpair failed and we were unable to recover it. 00:25:03.438 [2024-07-12 17:14:03.007445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.438 [2024-07-12 17:14:03.007479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.438 qpair failed and we were unable to recover it. 00:25:03.438 [2024-07-12 17:14:03.007576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.438 [2024-07-12 17:14:03.007609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.438 qpair failed and we were unable to recover it. 
00:25:03.438 [2024-07-12 17:14:03.007750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.438 [2024-07-12 17:14:03.007782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.438 qpair failed and we were unable to recover it. 00:25:03.438 [2024-07-12 17:14:03.007913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.438 [2024-07-12 17:14:03.007950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.438 qpair failed and we were unable to recover it. 00:25:03.438 [2024-07-12 17:14:03.008145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.438 [2024-07-12 17:14:03.008177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.438 qpair failed and we were unable to recover it. 00:25:03.438 [2024-07-12 17:14:03.008320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.438 [2024-07-12 17:14:03.008351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.438 qpair failed and we were unable to recover it. 00:25:03.438 [2024-07-12 17:14:03.008527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.438 [2024-07-12 17:14:03.008559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.438 qpair failed and we were unable to recover it. 00:25:03.438 [2024-07-12 17:14:03.008688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.438 [2024-07-12 17:14:03.008720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.438 qpair failed and we were unable to recover it. 00:25:03.438 [2024-07-12 17:14:03.008851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.438 [2024-07-12 17:14:03.008884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.438 qpair failed and we were unable to recover it. 00:25:03.438 [2024-07-12 17:14:03.008992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.438 [2024-07-12 17:14:03.009025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.438 qpair failed and we were unable to recover it. 00:25:03.438 [2024-07-12 17:14:03.009159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.438 [2024-07-12 17:14:03.009191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.438 qpair failed and we were unable to recover it. 00:25:03.438 [2024-07-12 17:14:03.009293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.438 [2024-07-12 17:14:03.009325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.438 qpair failed and we were unable to recover it. 
00:25:03.438 [2024-07-12 17:14:03.009481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.438 [2024-07-12 17:14:03.009512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.438 qpair failed and we were unable to recover it. 00:25:03.438 [2024-07-12 17:14:03.009638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.438 [2024-07-12 17:14:03.009670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.438 qpair failed and we were unable to recover it. 00:25:03.438 [2024-07-12 17:14:03.009825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.438 [2024-07-12 17:14:03.009859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.438 qpair failed and we were unable to recover it. 00:25:03.438 [2024-07-12 17:14:03.009991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.438 [2024-07-12 17:14:03.010023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.438 qpair failed and we were unable to recover it. 00:25:03.438 [2024-07-12 17:14:03.010234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.438 [2024-07-12 17:14:03.010281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.438 qpair failed and we were unable to recover it. 00:25:03.438 [2024-07-12 17:14:03.010406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.438 [2024-07-12 17:14:03.010449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.438 qpair failed and we were unable to recover it. 00:25:03.438 [2024-07-12 17:14:03.010628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.438 [2024-07-12 17:14:03.010660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.439 qpair failed and we were unable to recover it. 00:25:03.439 [2024-07-12 17:14:03.010825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.439 [2024-07-12 17:14:03.010876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.439 qpair failed and we were unable to recover it. 00:25:03.439 [2024-07-12 17:14:03.010988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.439 [2024-07-12 17:14:03.011021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.439 qpair failed and we were unable to recover it. 00:25:03.439 [2024-07-12 17:14:03.011173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.439 [2024-07-12 17:14:03.011205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.439 qpair failed and we were unable to recover it. 
00:25:03.439 [2024-07-12 17:14:03.011369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.439 [2024-07-12 17:14:03.011401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.439 qpair failed and we were unable to recover it. 00:25:03.439 [2024-07-12 17:14:03.011512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.439 [2024-07-12 17:14:03.011544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.439 qpair failed and we were unable to recover it. 00:25:03.439 [2024-07-12 17:14:03.011711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.439 [2024-07-12 17:14:03.011749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.439 qpair failed and we were unable to recover it. 00:25:03.439 [2024-07-12 17:14:03.011860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.439 [2024-07-12 17:14:03.011893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.439 qpair failed and we were unable to recover it. 00:25:03.439 [2024-07-12 17:14:03.012065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.439 [2024-07-12 17:14:03.012097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.439 qpair failed and we were unable to recover it. 00:25:03.439 [2024-07-12 17:14:03.012302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.439 [2024-07-12 17:14:03.012364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.439 qpair failed and we were unable to recover it. 00:25:03.439 [2024-07-12 17:14:03.012502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.439 [2024-07-12 17:14:03.012534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.439 qpair failed and we were unable to recover it. 00:25:03.439 [2024-07-12 17:14:03.012683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.439 [2024-07-12 17:14:03.012715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.439 qpair failed and we were unable to recover it. 00:25:03.439 [2024-07-12 17:14:03.012851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.439 [2024-07-12 17:14:03.012900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.439 qpair failed and we were unable to recover it. 00:25:03.439 [2024-07-12 17:14:03.013022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.439 [2024-07-12 17:14:03.013056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.439 qpair failed and we were unable to recover it. 
00:25:03.439 [2024-07-12 17:14:03.013248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.439 [2024-07-12 17:14:03.013307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.439 qpair failed and we were unable to recover it. 00:25:03.439 [2024-07-12 17:14:03.013438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.439 [2024-07-12 17:14:03.013470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.439 qpair failed and we were unable to recover it. 00:25:03.439 [2024-07-12 17:14:03.013613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.439 [2024-07-12 17:14:03.013645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.439 qpair failed and we were unable to recover it. 00:25:03.439 [2024-07-12 17:14:03.013756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.439 [2024-07-12 17:14:03.013790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.439 qpair failed and we were unable to recover it. 00:25:03.439 [2024-07-12 17:14:03.013927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.439 [2024-07-12 17:14:03.013977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.439 qpair failed and we were unable to recover it. 00:25:03.439 [2024-07-12 17:14:03.014101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.439 [2024-07-12 17:14:03.014133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.439 qpair failed and we were unable to recover it. 00:25:03.439 [2024-07-12 17:14:03.014273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.439 [2024-07-12 17:14:03.014321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.439 qpair failed and we were unable to recover it. 00:25:03.439 [2024-07-12 17:14:03.014489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.439 [2024-07-12 17:14:03.014521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.439 qpair failed and we were unable to recover it. 00:25:03.439 [2024-07-12 17:14:03.014655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.439 [2024-07-12 17:14:03.014687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.439 qpair failed and we were unable to recover it. 00:25:03.439 [2024-07-12 17:14:03.014828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.439 [2024-07-12 17:14:03.014878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.439 qpair failed and we were unable to recover it. 
00:25:03.439 [2024-07-12 17:14:03.015019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.439 [2024-07-12 17:14:03.015067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.439 qpair failed and we were unable to recover it. 00:25:03.439 [2024-07-12 17:14:03.015223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.439 [2024-07-12 17:14:03.015282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.439 qpair failed and we were unable to recover it. 00:25:03.439 [2024-07-12 17:14:03.015465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b65ae0 is same with the state(5) to be set 00:25:03.439 [2024-07-12 17:14:03.015764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.439 [2024-07-12 17:14:03.015814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.439 qpair failed and we were unable to recover it. 00:25:03.439 [2024-07-12 17:14:03.015994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.439 [2024-07-12 17:14:03.016039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.439 qpair failed and we were unable to recover it. 00:25:03.439 [2024-07-12 17:14:03.016221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.439 [2024-07-12 17:14:03.016254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.439 qpair failed and we were unable to recover it. 00:25:03.439 [2024-07-12 17:14:03.016395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.439 [2024-07-12 17:14:03.016427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.439 qpair failed and we were unable to recover it. 00:25:03.439 [2024-07-12 17:14:03.016581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.439 [2024-07-12 17:14:03.016622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.439 qpair failed and we were unable to recover it. 00:25:03.439 [2024-07-12 17:14:03.016762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.439 [2024-07-12 17:14:03.016797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.439 qpair failed and we were unable to recover it. 00:25:03.439 [2024-07-12 17:14:03.016939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.439 [2024-07-12 17:14:03.016972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.439 qpair failed and we were unable to recover it. 
00:25:03.439 [2024-07-12 17:14:03.017189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.439 [2024-07-12 17:14:03.017255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.439 qpair failed and we were unable to recover it. 00:25:03.439 [2024-07-12 17:14:03.017433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.439 [2024-07-12 17:14:03.017498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.439 qpair failed and we were unable to recover it. 00:25:03.439 [2024-07-12 17:14:03.017713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.439 [2024-07-12 17:14:03.017753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.439 qpair failed and we were unable to recover it. 00:25:03.439 [2024-07-12 17:14:03.017859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.439 [2024-07-12 17:14:03.017892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.439 qpair failed and we were unable to recover it. 00:25:03.439 [2024-07-12 17:14:03.018104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.439 [2024-07-12 17:14:03.018148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.439 qpair failed and we were unable to recover it. 00:25:03.439 [2024-07-12 17:14:03.018327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.439 [2024-07-12 17:14:03.018373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.439 qpair failed and we were unable to recover it. 00:25:03.439 [2024-07-12 17:14:03.018528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.439 [2024-07-12 17:14:03.018581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.439 qpair failed and we were unable to recover it. 00:25:03.439 [2024-07-12 17:14:03.018734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.440 [2024-07-12 17:14:03.018802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.440 qpair failed and we were unable to recover it. 00:25:03.440 [2024-07-12 17:14:03.018973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.440 [2024-07-12 17:14:03.019006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.440 qpair failed and we were unable to recover it. 00:25:03.440 [2024-07-12 17:14:03.019154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.440 [2024-07-12 17:14:03.019188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.440 qpair failed and we were unable to recover it. 
00:25:03.440 [2024-07-12 17:14:03.019360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.440 [2024-07-12 17:14:03.019394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.440 qpair failed and we were unable to recover it. 00:25:03.440 [2024-07-12 17:14:03.019561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.440 [2024-07-12 17:14:03.019596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.440 qpair failed and we were unable to recover it. 00:25:03.440 [2024-07-12 17:14:03.019807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.440 [2024-07-12 17:14:03.019841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.440 qpair failed and we were unable to recover it. 00:25:03.440 [2024-07-12 17:14:03.019954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.440 [2024-07-12 17:14:03.019986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.440 qpair failed and we were unable to recover it. 00:25:03.440 [2024-07-12 17:14:03.020158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.440 [2024-07-12 17:14:03.020191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.440 qpair failed and we were unable to recover it. 00:25:03.440 [2024-07-12 17:14:03.020317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.440 [2024-07-12 17:14:03.020366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.440 qpair failed and we were unable to recover it. 00:25:03.440 [2024-07-12 17:14:03.020466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.440 [2024-07-12 17:14:03.020501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.440 qpair failed and we were unable to recover it. 00:25:03.440 [2024-07-12 17:14:03.020677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.440 [2024-07-12 17:14:03.020712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.440 qpair failed and we were unable to recover it. 00:25:03.440 [2024-07-12 17:14:03.020891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.440 [2024-07-12 17:14:03.020926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.440 qpair failed and we were unable to recover it. 00:25:03.440 [2024-07-12 17:14:03.021088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.440 [2024-07-12 17:14:03.021135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.440 qpair failed and we were unable to recover it. 
00:25:03.440 [2024-07-12 17:14:03.021299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.440 [2024-07-12 17:14:03.021343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.440 qpair failed and we were unable to recover it. 00:25:03.440 [2024-07-12 17:14:03.021511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.440 [2024-07-12 17:14:03.021545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.440 qpair failed and we were unable to recover it. 00:25:03.440 [2024-07-12 17:14:03.021694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.440 [2024-07-12 17:14:03.021728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.440 qpair failed and we were unable to recover it. 00:25:03.440 [2024-07-12 17:14:03.021908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.440 [2024-07-12 17:14:03.021951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.440 qpair failed and we were unable to recover it. 00:25:03.440 [2024-07-12 17:14:03.022084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.440 [2024-07-12 17:14:03.022129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.440 qpair failed and we were unable to recover it. 00:25:03.440 [2024-07-12 17:14:03.022286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.440 [2024-07-12 17:14:03.022321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.440 qpair failed and we were unable to recover it. 00:25:03.440 [2024-07-12 17:14:03.022495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.440 [2024-07-12 17:14:03.022556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.440 qpair failed and we were unable to recover it. 00:25:03.440 [2024-07-12 17:14:03.022725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.440 [2024-07-12 17:14:03.022797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.440 qpair failed and we were unable to recover it. 00:25:03.440 [2024-07-12 17:14:03.022943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.440 [2024-07-12 17:14:03.022979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.440 qpair failed and we were unable to recover it. 00:25:03.440 [2024-07-12 17:14:03.023105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.440 [2024-07-12 17:14:03.023138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.440 qpair failed and we were unable to recover it. 
00:25:03.440 [2024-07-12 17:14:03.023307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.440 [2024-07-12 17:14:03.023341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.440 qpair failed and we were unable to recover it. 00:25:03.440 [2024-07-12 17:14:03.023499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.440 [2024-07-12 17:14:03.023534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.440 qpair failed and we were unable to recover it. 00:25:03.440 [2024-07-12 17:14:03.023711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.440 [2024-07-12 17:14:03.023753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.440 qpair failed and we were unable to recover it. 00:25:03.440 [2024-07-12 17:14:03.023939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.440 [2024-07-12 17:14:03.023972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.440 qpair failed and we were unable to recover it. 00:25:03.440 [2024-07-12 17:14:03.024149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.440 [2024-07-12 17:14:03.024184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.440 qpair failed and we were unable to recover it. 00:25:03.440 [2024-07-12 17:14:03.024364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.440 [2024-07-12 17:14:03.024422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.440 qpair failed and we were unable to recover it. 00:25:03.440 [2024-07-12 17:14:03.024621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.440 [2024-07-12 17:14:03.024696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.440 qpair failed and we were unable to recover it. 00:25:03.440 [2024-07-12 17:14:03.024881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.440 [2024-07-12 17:14:03.024917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.440 qpair failed and we were unable to recover it. 00:25:03.440 [2024-07-12 17:14:03.025055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.440 [2024-07-12 17:14:03.025087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.440 qpair failed and we were unable to recover it. 00:25:03.440 [2024-07-12 17:14:03.025191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.440 [2024-07-12 17:14:03.025240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.440 qpair failed and we were unable to recover it. 
00:25:03.440 [2024-07-12 17:14:03.025381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.440 [2024-07-12 17:14:03.025416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.440 qpair failed and we were unable to recover it. 00:25:03.440 [2024-07-12 17:14:03.025550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.440 [2024-07-12 17:14:03.025599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.441 qpair failed and we were unable to recover it. 00:25:03.441 [2024-07-12 17:14:03.025764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.441 [2024-07-12 17:14:03.025815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.441 qpair failed and we were unable to recover it. 00:25:03.441 [2024-07-12 17:14:03.025951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.441 [2024-07-12 17:14:03.025983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.441 qpair failed and we were unable to recover it. 00:25:03.441 [2024-07-12 17:14:03.026120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.441 [2024-07-12 17:14:03.026152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.441 qpair failed and we were unable to recover it. 00:25:03.441 [2024-07-12 17:14:03.026288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.441 [2024-07-12 17:14:03.026320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.441 qpair failed and we were unable to recover it. 00:25:03.441 [2024-07-12 17:14:03.026501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.441 [2024-07-12 17:14:03.026535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.441 qpair failed and we were unable to recover it. 00:25:03.441 [2024-07-12 17:14:03.026694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.441 [2024-07-12 17:14:03.026726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.441 qpair failed and we were unable to recover it. 00:25:03.441 [2024-07-12 17:14:03.026908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.441 [2024-07-12 17:14:03.026953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.441 qpair failed and we were unable to recover it. 00:25:03.441 [2024-07-12 17:14:03.027096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.441 [2024-07-12 17:14:03.027130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.441 qpair failed and we were unable to recover it. 
00:25:03.441 [2024-07-12 17:14:03.027282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.441 [2024-07-12 17:14:03.027315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.441 qpair failed and we were unable to recover it. 00:25:03.441 [2024-07-12 17:14:03.027499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.441 [2024-07-12 17:14:03.027533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.441 qpair failed and we were unable to recover it. 00:25:03.441 [2024-07-12 17:14:03.027702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.441 [2024-07-12 17:14:03.027743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.441 qpair failed and we were unable to recover it. 00:25:03.441 [2024-07-12 17:14:03.027916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.441 [2024-07-12 17:14:03.027949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.441 qpair failed and we were unable to recover it. 00:25:03.441 [2024-07-12 17:14:03.028109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.441 [2024-07-12 17:14:03.028143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.441 qpair failed and we were unable to recover it. 00:25:03.441 [2024-07-12 17:14:03.028269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.441 [2024-07-12 17:14:03.028330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.441 qpair failed and we were unable to recover it. 00:25:03.441 [2024-07-12 17:14:03.028570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.441 [2024-07-12 17:14:03.028639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.441 qpair failed and we were unable to recover it. 00:25:03.441 [2024-07-12 17:14:03.028912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.441 [2024-07-12 17:14:03.028945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.441 qpair failed and we were unable to recover it. 00:25:03.441 [2024-07-12 17:14:03.029097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.441 [2024-07-12 17:14:03.029161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.441 qpair failed and we were unable to recover it. 00:25:03.441 [2024-07-12 17:14:03.029409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.441 [2024-07-12 17:14:03.029483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.441 qpair failed and we were unable to recover it. 
00:25:03.441 [2024-07-12 17:14:03.029683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.441 [2024-07-12 17:14:03.029760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.441 qpair failed and we were unable to recover it. 00:25:03.441 [2024-07-12 17:14:03.029932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.441 [2024-07-12 17:14:03.029965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.441 qpair failed and we were unable to recover it. 00:25:03.441 [2024-07-12 17:14:03.030181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.441 [2024-07-12 17:14:03.030214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.441 qpair failed and we were unable to recover it. 00:25:03.441 [2024-07-12 17:14:03.030348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.441 [2024-07-12 17:14:03.030412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.441 qpair failed and we were unable to recover it. 00:25:03.441 [2024-07-12 17:14:03.030623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.441 [2024-07-12 17:14:03.030688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.441 qpair failed and we were unable to recover it. 00:25:03.441 [2024-07-12 17:14:03.030884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.441 [2024-07-12 17:14:03.030917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.441 qpair failed and we were unable to recover it. 00:25:03.441 [2024-07-12 17:14:03.031078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.441 [2024-07-12 17:14:03.031146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.441 qpair failed and we were unable to recover it. 00:25:03.441 [2024-07-12 17:14:03.031345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.441 [2024-07-12 17:14:03.031415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.441 qpair failed and we were unable to recover it. 00:25:03.441 [2024-07-12 17:14:03.031646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.441 [2024-07-12 17:14:03.031711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.441 qpair failed and we were unable to recover it. 00:25:03.441 [2024-07-12 17:14:03.031905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.441 [2024-07-12 17:14:03.031940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.441 qpair failed and we were unable to recover it. 
00:25:03.441 [2024-07-12 17:14:03.032112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.441 [2024-07-12 17:14:03.032178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.441 qpair failed and we were unable to recover it. 00:25:03.441 [2024-07-12 17:14:03.032444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.441 [2024-07-12 17:14:03.032509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.441 qpair failed and we were unable to recover it. 00:25:03.441 [2024-07-12 17:14:03.032700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.441 [2024-07-12 17:14:03.032798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.441 qpair failed and we were unable to recover it. 00:25:03.441 [2024-07-12 17:14:03.032952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.441 [2024-07-12 17:14:03.032987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.441 qpair failed and we were unable to recover it. 00:25:03.441 [2024-07-12 17:14:03.033227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.441 [2024-07-12 17:14:03.033295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.441 qpair failed and we were unable to recover it. 00:25:03.441 [2024-07-12 17:14:03.033537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.441 [2024-07-12 17:14:03.033603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.441 qpair failed and we were unable to recover it. 00:25:03.441 [2024-07-12 17:14:03.033811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.441 [2024-07-12 17:14:03.033843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.441 qpair failed and we were unable to recover it. 00:25:03.441 [2024-07-12 17:14:03.033972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.441 [2024-07-12 17:14:03.034002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.441 qpair failed and we were unable to recover it. 00:25:03.441 [2024-07-12 17:14:03.034185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.441 [2024-07-12 17:14:03.034250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.441 qpair failed and we were unable to recover it. 00:25:03.441 [2024-07-12 17:14:03.034508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.441 [2024-07-12 17:14:03.034572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.441 qpair failed and we were unable to recover it. 
00:25:03.441 [2024-07-12 17:14:03.034771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.441 [2024-07-12 17:14:03.034828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.441 qpair failed and we were unable to recover it. 00:25:03.441 [2024-07-12 17:14:03.034982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.441 [2024-07-12 17:14:03.035013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.441 qpair failed and we were unable to recover it. 00:25:03.441 [2024-07-12 17:14:03.035163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.441 [2024-07-12 17:14:03.035193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.441 qpair failed and we were unable to recover it. 00:25:03.441 [2024-07-12 17:14:03.035299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.441 [2024-07-12 17:14:03.035330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.441 qpair failed and we were unable to recover it. 00:25:03.442 [2024-07-12 17:14:03.035548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.442 [2024-07-12 17:14:03.035613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.442 qpair failed and we were unable to recover it. 00:25:03.442 [2024-07-12 17:14:03.035790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.442 [2024-07-12 17:14:03.035821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.442 qpair failed and we were unable to recover it. 00:25:03.442 [2024-07-12 17:14:03.035954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.442 [2024-07-12 17:14:03.035985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.442 qpair failed and we were unable to recover it. 00:25:03.442 [2024-07-12 17:14:03.036199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.442 [2024-07-12 17:14:03.036271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.442 qpair failed and we were unable to recover it. 00:25:03.442 [2024-07-12 17:14:03.036500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.442 [2024-07-12 17:14:03.036565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.442 qpair failed and we were unable to recover it. 00:25:03.442 [2024-07-12 17:14:03.036764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.442 [2024-07-12 17:14:03.036825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.442 qpair failed and we were unable to recover it. 
00:25:03.442 [2024-07-12 17:14:03.036957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.442 [2024-07-12 17:14:03.036988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.442 qpair failed and we were unable to recover it. 00:25:03.442 [2024-07-12 17:14:03.037136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.442 [2024-07-12 17:14:03.037167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.442 qpair failed and we were unable to recover it. 00:25:03.442 [2024-07-12 17:14:03.037296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.442 [2024-07-12 17:14:03.037327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.442 qpair failed and we were unable to recover it. 00:25:03.442 [2024-07-12 17:14:03.037586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.442 [2024-07-12 17:14:03.037651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.442 qpair failed and we were unable to recover it. 00:25:03.442 [2024-07-12 17:14:03.037845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.442 [2024-07-12 17:14:03.037876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.442 qpair failed and we were unable to recover it. 00:25:03.442 [2024-07-12 17:14:03.037984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.442 [2024-07-12 17:14:03.038014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.442 qpair failed and we were unable to recover it. 00:25:03.442 [2024-07-12 17:14:03.038170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.442 [2024-07-12 17:14:03.038242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.442 qpair failed and we were unable to recover it. 00:25:03.442 [2024-07-12 17:14:03.038459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.442 [2024-07-12 17:14:03.038524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.442 qpair failed and we were unable to recover it. 00:25:03.442 [2024-07-12 17:14:03.038720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.442 [2024-07-12 17:14:03.038812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.442 qpair failed and we were unable to recover it. 00:25:03.442 [2024-07-12 17:14:03.038919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.442 [2024-07-12 17:14:03.038954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.442 qpair failed and we were unable to recover it. 
00:25:03.442 [2024-07-12 17:14:03.039140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.442 [2024-07-12 17:14:03.039181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.442 qpair failed and we were unable to recover it. 00:25:03.442 [2024-07-12 17:14:03.039341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.442 [2024-07-12 17:14:03.039405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.442 qpair failed and we were unable to recover it. 00:25:03.442 [2024-07-12 17:14:03.039614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.442 [2024-07-12 17:14:03.039680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.442 qpair failed and we were unable to recover it. 00:25:03.442 [2024-07-12 17:14:03.039865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.442 [2024-07-12 17:14:03.039896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.442 qpair failed and we were unable to recover it. 00:25:03.442 [2024-07-12 17:14:03.040008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.442 [2024-07-12 17:14:03.040038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.442 qpair failed and we were unable to recover it. 00:25:03.442 [2024-07-12 17:14:03.040226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.442 [2024-07-12 17:14:03.040301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.442 qpair failed and we were unable to recover it. 00:25:03.442 [2024-07-12 17:14:03.040549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.442 [2024-07-12 17:14:03.040614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.442 qpair failed and we were unable to recover it. 00:25:03.442 [2024-07-12 17:14:03.040806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.442 [2024-07-12 17:14:03.040837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.442 qpair failed and we were unable to recover it. 00:25:03.442 [2024-07-12 17:14:03.040942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.442 [2024-07-12 17:14:03.040972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.442 qpair failed and we were unable to recover it. 00:25:03.442 [2024-07-12 17:14:03.041191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.442 [2024-07-12 17:14:03.041222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.442 qpair failed and we were unable to recover it. 
00:25:03.442 [2024-07-12 17:14:03.041326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.442 [2024-07-12 17:14:03.041357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.442 qpair failed and we were unable to recover it. 00:25:03.442 [2024-07-12 17:14:03.041527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.442 [2024-07-12 17:14:03.041592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.442 qpair failed and we were unable to recover it. 00:25:03.442 [2024-07-12 17:14:03.041810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.442 [2024-07-12 17:14:03.041841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.442 qpair failed and we were unable to recover it. 00:25:03.442 [2024-07-12 17:14:03.041947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.442 [2024-07-12 17:14:03.041978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.442 qpair failed and we were unable to recover it. 00:25:03.442 [2024-07-12 17:14:03.042090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.442 [2024-07-12 17:14:03.042121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.442 qpair failed and we were unable to recover it. 00:25:03.442 [2024-07-12 17:14:03.042311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.442 [2024-07-12 17:14:03.042386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.442 qpair failed and we were unable to recover it. 00:25:03.442 [2024-07-12 17:14:03.042658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.442 [2024-07-12 17:14:03.042723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.442 qpair failed and we were unable to recover it. 00:25:03.442 [2024-07-12 17:14:03.042899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.442 [2024-07-12 17:14:03.042929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.442 qpair failed and we were unable to recover it. 00:25:03.442 [2024-07-12 17:14:03.043032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.442 [2024-07-12 17:14:03.043063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.442 qpair failed and we were unable to recover it. 00:25:03.442 [2024-07-12 17:14:03.043211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.442 [2024-07-12 17:14:03.043275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.442 qpair failed and we were unable to recover it. 
00:25:03.442 [2024-07-12 17:14:03.043503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.442 [2024-07-12 17:14:03.043568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.442 qpair failed and we were unable to recover it. 00:25:03.442 [2024-07-12 17:14:03.043774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.442 [2024-07-12 17:14:03.043823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.442 qpair failed and we were unable to recover it. 00:25:03.442 [2024-07-12 17:14:03.043926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.442 [2024-07-12 17:14:03.043957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.442 qpair failed and we were unable to recover it. 00:25:03.442 [2024-07-12 17:14:03.044102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.442 [2024-07-12 17:14:03.044132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.442 qpair failed and we were unable to recover it. 00:25:03.442 [2024-07-12 17:14:03.044285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.442 [2024-07-12 17:14:03.044349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.442 qpair failed and we were unable to recover it. 00:25:03.442 [2024-07-12 17:14:03.044560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.442 [2024-07-12 17:14:03.044624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.442 qpair failed and we were unable to recover it. 00:25:03.442 [2024-07-12 17:14:03.044833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.442 [2024-07-12 17:14:03.044865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.442 qpair failed and we were unable to recover it. 00:25:03.442 [2024-07-12 17:14:03.044972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.443 [2024-07-12 17:14:03.045002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.443 qpair failed and we were unable to recover it. 00:25:03.443 [2024-07-12 17:14:03.045191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.443 [2024-07-12 17:14:03.045255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.443 qpair failed and we were unable to recover it. 00:25:03.443 [2024-07-12 17:14:03.045470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.443 [2024-07-12 17:14:03.045535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.443 qpair failed and we were unable to recover it. 
00:25:03.443 [2024-07-12 17:14:03.045808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.443 [2024-07-12 17:14:03.045839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.443 qpair failed and we were unable to recover it. 00:25:03.443 [2024-07-12 17:14:03.045970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.443 [2024-07-12 17:14:03.046000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.443 qpair failed and we were unable to recover it. 00:25:03.443 [2024-07-12 17:14:03.046138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.443 [2024-07-12 17:14:03.046169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.443 qpair failed and we were unable to recover it. 00:25:03.443 [2024-07-12 17:14:03.046272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.443 [2024-07-12 17:14:03.046302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.443 qpair failed and we were unable to recover it. 00:25:03.443 [2024-07-12 17:14:03.046425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.443 [2024-07-12 17:14:03.046489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.443 qpair failed and we were unable to recover it. 00:25:03.443 [2024-07-12 17:14:03.046711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.443 [2024-07-12 17:14:03.046750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.443 qpair failed and we were unable to recover it. 00:25:03.443 [2024-07-12 17:14:03.046873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.443 [2024-07-12 17:14:03.046903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.443 qpair failed and we were unable to recover it. 00:25:03.443 [2024-07-12 17:14:03.047005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.443 [2024-07-12 17:14:03.047036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.443 qpair failed and we were unable to recover it. 00:25:03.443 [2024-07-12 17:14:03.047145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.443 [2024-07-12 17:14:03.047176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.443 qpair failed and we were unable to recover it. 00:25:03.443 [2024-07-12 17:14:03.047295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.443 [2024-07-12 17:14:03.047331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.443 qpair failed and we were unable to recover it. 
00:25:03.443 [2024-07-12 17:14:03.047427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.443 [2024-07-12 17:14:03.047491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.443 qpair failed and we were unable to recover it. 00:25:03.443 [2024-07-12 17:14:03.047709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.443 [2024-07-12 17:14:03.047802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.443 qpair failed and we were unable to recover it. 00:25:03.443 [2024-07-12 17:14:03.047940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.443 [2024-07-12 17:14:03.047970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.443 qpair failed and we were unable to recover it. 00:25:03.443 [2024-07-12 17:14:03.048146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.443 [2024-07-12 17:14:03.048176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.443 qpair failed and we were unable to recover it. 00:25:03.443 [2024-07-12 17:14:03.048335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.443 [2024-07-12 17:14:03.048399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.443 qpair failed and we were unable to recover it. 00:25:03.443 [2024-07-12 17:14:03.048630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.443 [2024-07-12 17:14:03.048694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.443 qpair failed and we were unable to recover it. 00:25:03.443 [2024-07-12 17:14:03.048911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.443 [2024-07-12 17:14:03.048942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.443 qpair failed and we were unable to recover it. 00:25:03.443 [2024-07-12 17:14:03.049110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.443 [2024-07-12 17:14:03.049169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.443 qpair failed and we were unable to recover it. 00:25:03.443 [2024-07-12 17:14:03.049334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.443 [2024-07-12 17:14:03.049394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.443 qpair failed and we were unable to recover it. 00:25:03.443 [2024-07-12 17:14:03.049611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.443 [2024-07-12 17:14:03.049672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.443 qpair failed and we were unable to recover it. 
00:25:03.443 [2024-07-12 17:14:03.049856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.443 [2024-07-12 17:14:03.049887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.443 qpair failed and we were unable to recover it. 00:25:03.443 [2024-07-12 17:14:03.049991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.443 [2024-07-12 17:14:03.050022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.443 qpair failed and we were unable to recover it. 00:25:03.443 [2024-07-12 17:14:03.050202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.443 [2024-07-12 17:14:03.050279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.443 qpair failed and we were unable to recover it. 00:25:03.443 [2024-07-12 17:14:03.050475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.443 [2024-07-12 17:14:03.050536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.443 qpair failed and we were unable to recover it. 00:25:03.443 [2024-07-12 17:14:03.050755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.443 [2024-07-12 17:14:03.050786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.443 qpair failed and we were unable to recover it. 00:25:03.443 [2024-07-12 17:14:03.050895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.443 [2024-07-12 17:14:03.050925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.443 qpair failed and we were unable to recover it. 00:25:03.443 [2024-07-12 17:14:03.051066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.443 [2024-07-12 17:14:03.051126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.443 qpair failed and we were unable to recover it. 00:25:03.443 [2024-07-12 17:14:03.051310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.443 [2024-07-12 17:14:03.051340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.443 qpair failed and we were unable to recover it. 00:25:03.443 [2024-07-12 17:14:03.051443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.443 [2024-07-12 17:14:03.051473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.443 qpair failed and we were unable to recover it. 00:25:03.443 [2024-07-12 17:14:03.051627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.443 [2024-07-12 17:14:03.051687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.443 qpair failed and we were unable to recover it. 
00:25:03.443 [2024-07-12 17:14:03.051905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.443 [2024-07-12 17:14:03.051936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.443 qpair failed and we were unable to recover it. 00:25:03.443 [2024-07-12 17:14:03.052088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.443 [2024-07-12 17:14:03.052147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.443 qpair failed and we were unable to recover it. 00:25:03.443 [2024-07-12 17:14:03.052355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.443 [2024-07-12 17:14:03.052415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.443 qpair failed and we were unable to recover it. 00:25:03.443 [2024-07-12 17:14:03.052612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.443 [2024-07-12 17:14:03.052679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.443 qpair failed and we were unable to recover it. 00:25:03.443 [2024-07-12 17:14:03.052875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.443 [2024-07-12 17:14:03.052905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.443 qpair failed and we were unable to recover it. 00:25:03.443 [2024-07-12 17:14:03.053047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.443 [2024-07-12 17:14:03.053082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.443 qpair failed and we were unable to recover it. 00:25:03.443 [2024-07-12 17:14:03.053312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.443 [2024-07-12 17:14:03.053376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.443 qpair failed and we were unable to recover it. 00:25:03.443 [2024-07-12 17:14:03.053607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.443 [2024-07-12 17:14:03.053668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.443 qpair failed and we were unable to recover it. 00:25:03.443 [2024-07-12 17:14:03.053850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.443 [2024-07-12 17:14:03.053883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.443 qpair failed and we were unable to recover it. 00:25:03.443 [2024-07-12 17:14:03.054018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.444 [2024-07-12 17:14:03.054056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.444 qpair failed and we were unable to recover it. 
00:25:03.444 [2024-07-12 17:14:03.054258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.444 [2024-07-12 17:14:03.054319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.444 qpair failed and we were unable to recover it. 00:25:03.444 [2024-07-12 17:14:03.054531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.444 [2024-07-12 17:14:03.054601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.444 qpair failed and we were unable to recover it. 00:25:03.444 [2024-07-12 17:14:03.054801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.444 [2024-07-12 17:14:03.054832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.444 qpair failed and we were unable to recover it. 00:25:03.444 [2024-07-12 17:14:03.054938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.444 [2024-07-12 17:14:03.054969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.444 qpair failed and we were unable to recover it. 00:25:03.444 [2024-07-12 17:14:03.055073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.444 [2024-07-12 17:14:03.055103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.444 qpair failed and we were unable to recover it. 00:25:03.444 [2024-07-12 17:14:03.055235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.444 [2024-07-12 17:14:03.055283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.444 qpair failed and we were unable to recover it. 00:25:03.444 [2024-07-12 17:14:03.055449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.444 [2024-07-12 17:14:03.055508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.444 qpair failed and we were unable to recover it. 00:25:03.444 [2024-07-12 17:14:03.055701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.444 [2024-07-12 17:14:03.055731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.444 qpair failed and we were unable to recover it. 00:25:03.444 [2024-07-12 17:14:03.055867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.444 [2024-07-12 17:14:03.055898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.444 qpair failed and we were unable to recover it. 00:25:03.444 [2024-07-12 17:14:03.055998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.444 [2024-07-12 17:14:03.056068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.444 qpair failed and we were unable to recover it. 
00:25:03.444 [2024-07-12 17:14:03.056301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.444 [2024-07-12 17:14:03.056332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.444 qpair failed and we were unable to recover it. 00:25:03.444 [2024-07-12 17:14:03.056500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.444 [2024-07-12 17:14:03.056560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.444 qpair failed and we were unable to recover it. 00:25:03.444 [2024-07-12 17:14:03.056760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.444 [2024-07-12 17:14:03.056827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.444 qpair failed and we were unable to recover it. 00:25:03.444 [2024-07-12 17:14:03.056930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.444 [2024-07-12 17:14:03.056961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.444 qpair failed and we were unable to recover it. 00:25:03.444 [2024-07-12 17:14:03.057088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.444 [2024-07-12 17:14:03.057119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.444 qpair failed and we were unable to recover it. 00:25:03.444 [2024-07-12 17:14:03.057311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.444 [2024-07-12 17:14:03.057372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.444 qpair failed and we were unable to recover it. 00:25:03.444 [2024-07-12 17:14:03.057528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.444 [2024-07-12 17:14:03.057588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.444 qpair failed and we were unable to recover it. 00:25:03.444 [2024-07-12 17:14:03.057788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.444 [2024-07-12 17:14:03.057820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.444 qpair failed and we were unable to recover it. 00:25:03.444 [2024-07-12 17:14:03.057927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.444 [2024-07-12 17:14:03.057957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.444 qpair failed and we were unable to recover it. 00:25:03.444 [2024-07-12 17:14:03.058050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.444 [2024-07-12 17:14:03.058080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.444 qpair failed and we were unable to recover it. 
00:25:03.444 [2024-07-12 17:14:03.058244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.444 [2024-07-12 17:14:03.058304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.444 qpair failed and we were unable to recover it. 00:25:03.444 [2024-07-12 17:14:03.058520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.444 [2024-07-12 17:14:03.058580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.444 qpair failed and we were unable to recover it. 00:25:03.444 [2024-07-12 17:14:03.058791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.444 [2024-07-12 17:14:03.058822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.444 qpair failed and we were unable to recover it. 00:25:03.444 [2024-07-12 17:14:03.058936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.444 [2024-07-12 17:14:03.058967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.444 qpair failed and we were unable to recover it. 00:25:03.444 [2024-07-12 17:14:03.059155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.444 [2024-07-12 17:14:03.059196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.444 qpair failed and we were unable to recover it. 00:25:03.444 [2024-07-12 17:14:03.059337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.444 [2024-07-12 17:14:03.059396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.444 qpair failed and we were unable to recover it. 00:25:03.444 [2024-07-12 17:14:03.059560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.444 [2024-07-12 17:14:03.059620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.444 qpair failed and we were unable to recover it. 00:25:03.444 [2024-07-12 17:14:03.059782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.444 [2024-07-12 17:14:03.059814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.444 qpair failed and we were unable to recover it. 00:25:03.444 [2024-07-12 17:14:03.059944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.444 [2024-07-12 17:14:03.059974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.444 qpair failed and we were unable to recover it. 00:25:03.444 [2024-07-12 17:14:03.060145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.444 [2024-07-12 17:14:03.060206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.444 qpair failed and we were unable to recover it. 
00:25:03.444 [2024-07-12 17:14:03.060413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.444 [2024-07-12 17:14:03.060473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.444 qpair failed and we were unable to recover it. 00:25:03.444 [2024-07-12 17:14:03.060668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.444 [2024-07-12 17:14:03.060724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.444 qpair failed and we were unable to recover it. 00:25:03.444 [2024-07-12 17:14:03.060874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.444 [2024-07-12 17:14:03.060904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.444 qpair failed and we were unable to recover it. 00:25:03.444 [2024-07-12 17:14:03.061010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.444 [2024-07-12 17:14:03.061040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.444 qpair failed and we were unable to recover it. 00:25:03.444 [2024-07-12 17:14:03.061145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.444 [2024-07-12 17:14:03.061175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.444 qpair failed and we were unable to recover it. 00:25:03.444 [2024-07-12 17:14:03.061364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.444 [2024-07-12 17:14:03.061420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.444 qpair failed and we were unable to recover it. 00:25:03.444 [2024-07-12 17:14:03.061639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.444 [2024-07-12 17:14:03.061697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.444 qpair failed and we were unable to recover it. 00:25:03.444 [2024-07-12 17:14:03.061875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.444 [2024-07-12 17:14:03.061906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.444 qpair failed and we were unable to recover it. 00:25:03.444 [2024-07-12 17:14:03.062038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.445 [2024-07-12 17:14:03.062095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.445 qpair failed and we were unable to recover it. 00:25:03.445 [2024-07-12 17:14:03.062293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.445 [2024-07-12 17:14:03.062347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.445 qpair failed and we were unable to recover it. 
00:25:03.445 [2024-07-12 17:14:03.062533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.445 [2024-07-12 17:14:03.062589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.445 qpair failed and we were unable to recover it. 00:25:03.445 [2024-07-12 17:14:03.062780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.445 [2024-07-12 17:14:03.062830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.445 qpair failed and we were unable to recover it. 00:25:03.445 [2024-07-12 17:14:03.062931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.445 [2024-07-12 17:14:03.062962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.445 qpair failed and we were unable to recover it. 00:25:03.445 [2024-07-12 17:14:03.063073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.445 [2024-07-12 17:14:03.063103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.445 qpair failed and we were unable to recover it. 00:25:03.445 [2024-07-12 17:14:03.063260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.445 [2024-07-12 17:14:03.063315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.445 qpair failed and we were unable to recover it. 00:25:03.445 [2024-07-12 17:14:03.063526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.445 [2024-07-12 17:14:03.063582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.445 qpair failed and we were unable to recover it. 00:25:03.445 [2024-07-12 17:14:03.063780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.445 [2024-07-12 17:14:03.063812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.445 qpair failed and we were unable to recover it. 00:25:03.445 [2024-07-12 17:14:03.063915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.445 [2024-07-12 17:14:03.063945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.445 qpair failed and we were unable to recover it. 00:25:03.445 [2024-07-12 17:14:03.064078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.445 [2024-07-12 17:14:03.064111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.445 qpair failed and we were unable to recover it. 00:25:03.445 [2024-07-12 17:14:03.064260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.445 [2024-07-12 17:14:03.064324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.445 qpair failed and we were unable to recover it. 
00:25:03.445 [2024-07-12 17:14:03.064482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.445 [2024-07-12 17:14:03.064537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.445 qpair failed and we were unable to recover it. 00:25:03.445 [2024-07-12 17:14:03.064702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.445 [2024-07-12 17:14:03.064733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.445 qpair failed and we were unable to recover it. 00:25:03.445 [2024-07-12 17:14:03.065150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.445 [2024-07-12 17:14:03.065223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.445 qpair failed and we were unable to recover it. 00:25:03.445 [2024-07-12 17:14:03.065443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.445 [2024-07-12 17:14:03.065479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.445 qpair failed and we were unable to recover it. 00:25:03.445 [2024-07-12 17:14:03.065599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.445 [2024-07-12 17:14:03.065647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.445 qpair failed and we were unable to recover it. 00:25:03.445 [2024-07-12 17:14:03.065803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.445 [2024-07-12 17:14:03.065835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.445 qpair failed and we were unable to recover it. 00:25:03.445 [2024-07-12 17:14:03.065937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.445 [2024-07-12 17:14:03.065967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.445 qpair failed and we were unable to recover it. 00:25:03.445 [2024-07-12 17:14:03.066094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.445 [2024-07-12 17:14:03.066124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.445 qpair failed and we were unable to recover it. 00:25:03.445 [2024-07-12 17:14:03.066227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.445 [2024-07-12 17:14:03.066260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.445 qpair failed and we were unable to recover it. 00:25:03.445 [2024-07-12 17:14:03.066400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.445 [2024-07-12 17:14:03.066447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.445 qpair failed and we were unable to recover it. 
00:25:03.445 [2024-07-12 17:14:03.066611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.445 [2024-07-12 17:14:03.066647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.445 qpair failed and we were unable to recover it. 00:25:03.445 [2024-07-12 17:14:03.066860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.445 [2024-07-12 17:14:03.066891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.445 qpair failed and we were unable to recover it. 00:25:03.445 [2024-07-12 17:14:03.066994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.445 [2024-07-12 17:14:03.067044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.445 qpair failed and we were unable to recover it. 00:25:03.445 [2024-07-12 17:14:03.067247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.445 [2024-07-12 17:14:03.067278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.445 qpair failed and we were unable to recover it. 00:25:03.723 [2024-07-12 17:14:03.068231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-07-12 17:14:03.068263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 00:25:03.724 [2024-07-12 17:14:03.068407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-07-12 17:14:03.068445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 00:25:03.724 [2024-07-12 17:14:03.068599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-07-12 17:14:03.068635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 00:25:03.724 [2024-07-12 17:14:03.068780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-07-12 17:14:03.068809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 00:25:03.724 [2024-07-12 17:14:03.068910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-07-12 17:14:03.068938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 00:25:03.724 [2024-07-12 17:14:03.069068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-07-12 17:14:03.069097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 
00:25:03.724 [2024-07-12 17:14:03.069195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-07-12 17:14:03.069222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 00:25:03.724 [2024-07-12 17:14:03.069320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-07-12 17:14:03.069347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 00:25:03.724 [2024-07-12 17:14:03.069502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-07-12 17:14:03.069530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 00:25:03.724 [2024-07-12 17:14:03.069621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-07-12 17:14:03.069651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 00:25:03.724 [2024-07-12 17:14:03.069783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-07-12 17:14:03.069827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 00:25:03.724 [2024-07-12 17:14:03.069939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-07-12 17:14:03.069981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 00:25:03.724 [2024-07-12 17:14:03.070211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-07-12 17:14:03.070240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 00:25:03.724 [2024-07-12 17:14:03.070334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-07-12 17:14:03.070361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 00:25:03.724 [2024-07-12 17:14:03.070481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-07-12 17:14:03.070508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 00:25:03.724 [2024-07-12 17:14:03.070636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-07-12 17:14:03.070663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 
00:25:03.724 [2024-07-12 17:14:03.070759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-07-12 17:14:03.070804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 00:25:03.724 [2024-07-12 17:14:03.070913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-07-12 17:14:03.070943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 00:25:03.724 [2024-07-12 17:14:03.071068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-07-12 17:14:03.071098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 00:25:03.724 [2024-07-12 17:14:03.071222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-07-12 17:14:03.071252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 00:25:03.724 [2024-07-12 17:14:03.071389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-07-12 17:14:03.071419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 00:25:03.724 [2024-07-12 17:14:03.071513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-07-12 17:14:03.071542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 00:25:03.724 [2024-07-12 17:14:03.071663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-07-12 17:14:03.071693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 00:25:03.724 [2024-07-12 17:14:03.071824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-07-12 17:14:03.071851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 00:25:03.724 [2024-07-12 17:14:03.071960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-07-12 17:14:03.071991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 00:25:03.724 [2024-07-12 17:14:03.072179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-07-12 17:14:03.072231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 
00:25:03.724 [2024-07-12 17:14:03.072377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-07-12 17:14:03.072428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 00:25:03.724 [2024-07-12 17:14:03.072529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-07-12 17:14:03.072557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 00:25:03.724 [2024-07-12 17:14:03.072703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-07-12 17:14:03.072732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 00:25:03.724 [2024-07-12 17:14:03.072847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-07-12 17:14:03.072875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 00:25:03.724 [2024-07-12 17:14:03.072975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-07-12 17:14:03.073002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 00:25:03.724 [2024-07-12 17:14:03.073111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-07-12 17:14:03.073139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 00:25:03.724 [2024-07-12 17:14:03.073272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-07-12 17:14:03.073301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 00:25:03.724 [2024-07-12 17:14:03.074017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-07-12 17:14:03.074064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 00:25:03.724 [2024-07-12 17:14:03.074207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-07-12 17:14:03.074239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 00:25:03.724 [2024-07-12 17:14:03.074376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-07-12 17:14:03.074403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 
00:25:03.724 [2024-07-12 17:14:03.074530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-07-12 17:14:03.074557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 00:25:03.724 [2024-07-12 17:14:03.074678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-07-12 17:14:03.074720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 00:25:03.724 [2024-07-12 17:14:03.074849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-07-12 17:14:03.074888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 00:25:03.724 [2024-07-12 17:14:03.075000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.724 [2024-07-12 17:14:03.075029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.724 qpair failed and we were unable to recover it. 00:25:03.724 [2024-07-12 17:14:03.075148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-07-12 17:14:03.075212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-07-12 17:14:03.075432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-07-12 17:14:03.075496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-07-12 17:14:03.075699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-07-12 17:14:03.075799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-07-12 17:14:03.075912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-07-12 17:14:03.075942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-07-12 17:14:03.076077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-07-12 17:14:03.076106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-07-12 17:14:03.076279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-07-12 17:14:03.076342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 
00:25:03.725 [2024-07-12 17:14:03.076552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-07-12 17:14:03.076616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-07-12 17:14:03.076815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-07-12 17:14:03.076842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-07-12 17:14:03.076936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-07-12 17:14:03.076961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-07-12 17:14:03.077050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-07-12 17:14:03.077077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-07-12 17:14:03.077224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-07-12 17:14:03.077250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-07-12 17:14:03.077397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-07-12 17:14:03.077423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-07-12 17:14:03.077551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-07-12 17:14:03.077625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-07-12 17:14:03.077816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-07-12 17:14:03.077843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-07-12 17:14:03.077946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-07-12 17:14:03.077972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-07-12 17:14:03.078092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-07-12 17:14:03.078117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 
00:25:03.725 [2024-07-12 17:14:03.078216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-07-12 17:14:03.078242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-07-12 17:14:03.078362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-07-12 17:14:03.078388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-07-12 17:14:03.078538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-07-12 17:14:03.078601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-07-12 17:14:03.078792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-07-12 17:14:03.078819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-07-12 17:14:03.079716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-07-12 17:14:03.079758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-07-12 17:14:03.079865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-07-12 17:14:03.079893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-07-12 17:14:03.079989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-07-12 17:14:03.080016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-07-12 17:14:03.080114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-07-12 17:14:03.080141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-07-12 17:14:03.080251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-07-12 17:14:03.080315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-07-12 17:14:03.080567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-07-12 17:14:03.080631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 
00:25:03.725 [2024-07-12 17:14:03.080823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-07-12 17:14:03.080850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-07-12 17:14:03.080951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-07-12 17:14:03.080978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-07-12 17:14:03.081077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-07-12 17:14:03.081103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-07-12 17:14:03.081203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-07-12 17:14:03.081229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-07-12 17:14:03.081374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-07-12 17:14:03.081432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-07-12 17:14:03.081615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-07-12 17:14:03.081664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-07-12 17:14:03.081767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-07-12 17:14:03.081797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-07-12 17:14:03.081946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-07-12 17:14:03.081974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-07-12 17:14:03.082111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-07-12 17:14:03.082155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-07-12 17:14:03.082261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-07-12 17:14:03.082292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 
00:25:03.725 [2024-07-12 17:14:03.082434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-07-12 17:14:03.082461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-07-12 17:14:03.082585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-07-12 17:14:03.082611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-07-12 17:14:03.082730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-07-12 17:14:03.082769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-07-12 17:14:03.082874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-07-12 17:14:03.082901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.725 qpair failed and we were unable to recover it. 00:25:03.725 [2024-07-12 17:14:03.083000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.725 [2024-07-12 17:14:03.083026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.726 qpair failed and we were unable to recover it. 00:25:03.726 [2024-07-12 17:14:03.083149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.726 [2024-07-12 17:14:03.083175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.726 qpair failed and we were unable to recover it. 00:25:03.726 [2024-07-12 17:14:03.083302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.726 [2024-07-12 17:14:03.083328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.726 qpair failed and we were unable to recover it. 00:25:03.726 [2024-07-12 17:14:03.083448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.726 [2024-07-12 17:14:03.083474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.726 qpair failed and we were unable to recover it. 00:25:03.726 [2024-07-12 17:14:03.083561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.726 [2024-07-12 17:14:03.083587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.726 qpair failed and we were unable to recover it. 00:25:03.726 [2024-07-12 17:14:03.083678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.726 [2024-07-12 17:14:03.083704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.726 qpair failed and we were unable to recover it. 
00:25:03.726 [2024-07-12 17:14:03.083826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.726 [2024-07-12 17:14:03.083867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.726 qpair failed and we were unable to recover it. 00:25:03.726 [2024-07-12 17:14:03.083979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.726 [2024-07-12 17:14:03.084006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.726 qpair failed and we were unable to recover it. 00:25:03.726 [2024-07-12 17:14:03.084107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.726 [2024-07-12 17:14:03.084137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.726 qpair failed and we were unable to recover it. 00:25:03.726 [2024-07-12 17:14:03.084267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.726 [2024-07-12 17:14:03.084323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.726 qpair failed and we were unable to recover it. 00:25:03.726 [2024-07-12 17:14:03.084474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.726 [2024-07-12 17:14:03.084524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.726 qpair failed and we were unable to recover it. 00:25:03.726 [2024-07-12 17:14:03.084654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.726 [2024-07-12 17:14:03.084684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.726 qpair failed and we were unable to recover it. 00:25:03.726 [2024-07-12 17:14:03.084802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.726 [2024-07-12 17:14:03.084833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.726 qpair failed and we were unable to recover it. 00:25:03.726 [2024-07-12 17:14:03.084956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.726 [2024-07-12 17:14:03.084983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.726 qpair failed and we were unable to recover it. 00:25:03.726 [2024-07-12 17:14:03.085154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.726 [2024-07-12 17:14:03.085181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.726 qpair failed and we were unable to recover it. 00:25:03.726 [2024-07-12 17:14:03.085305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.726 [2024-07-12 17:14:03.085346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.726 qpair failed and we were unable to recover it. 
00:25:03.726 [2024-07-12 17:14:03.085470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.726 [2024-07-12 17:14:03.085498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.726 qpair failed and we were unable to recover it. 00:25:03.726 [2024-07-12 17:14:03.085647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.726 [2024-07-12 17:14:03.085677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.726 qpair failed and we were unable to recover it. 00:25:03.726 [2024-07-12 17:14:03.085812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.726 [2024-07-12 17:14:03.085840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.726 qpair failed and we were unable to recover it. 00:25:03.726 [2024-07-12 17:14:03.085953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.726 [2024-07-12 17:14:03.085980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.726 qpair failed and we were unable to recover it. 00:25:03.726 [2024-07-12 17:14:03.086121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.726 [2024-07-12 17:14:03.086147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.726 qpair failed and we were unable to recover it. 00:25:03.726 [2024-07-12 17:14:03.086291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.726 [2024-07-12 17:14:03.086317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.726 qpair failed and we were unable to recover it. 00:25:03.726 [2024-07-12 17:14:03.086445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.726 [2024-07-12 17:14:03.086476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.726 qpair failed and we were unable to recover it. 00:25:03.726 [2024-07-12 17:14:03.086653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.726 [2024-07-12 17:14:03.086681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.726 qpair failed and we were unable to recover it. 00:25:03.726 [2024-07-12 17:14:03.086793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.726 [2024-07-12 17:14:03.086820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.726 qpair failed and we were unable to recover it. 00:25:03.726 [2024-07-12 17:14:03.086933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.726 [2024-07-12 17:14:03.086959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.726 qpair failed and we were unable to recover it. 
00:25:03.726 [2024-07-12 17:14:03.087168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.726 [2024-07-12 17:14:03.087212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.726 qpair failed and we were unable to recover it. 00:25:03.726 [2024-07-12 17:14:03.087347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.726 [2024-07-12 17:14:03.087376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.726 qpair failed and we were unable to recover it. 00:25:03.726 [2024-07-12 17:14:03.087626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.726 [2024-07-12 17:14:03.087653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.726 qpair failed and we were unable to recover it. 00:25:03.726 [2024-07-12 17:14:03.087772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.726 [2024-07-12 17:14:03.087812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.726 qpair failed and we were unable to recover it. 00:25:03.726 [2024-07-12 17:14:03.087915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.726 [2024-07-12 17:14:03.087945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.726 qpair failed and we were unable to recover it. 00:25:03.726 [2024-07-12 17:14:03.088073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.726 [2024-07-12 17:14:03.088100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.726 qpair failed and we were unable to recover it. 00:25:03.726 [2024-07-12 17:14:03.088223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.726 [2024-07-12 17:14:03.088273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.726 qpair failed and we were unable to recover it. 00:25:03.726 [2024-07-12 17:14:03.088416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.726 [2024-07-12 17:14:03.088459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.726 qpair failed and we were unable to recover it. 00:25:03.726 [2024-07-12 17:14:03.088561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.726 [2024-07-12 17:14:03.088609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.726 qpair failed and we were unable to recover it. 00:25:03.726 [2024-07-12 17:14:03.088773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.726 [2024-07-12 17:14:03.088801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.726 qpair failed and we were unable to recover it. 
00:25:03.726 [2024-07-12 17:14:03.088896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.726 [2024-07-12 17:14:03.088922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.726 qpair failed and we were unable to recover it. 00:25:03.726 [2024-07-12 17:14:03.089048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.726 [2024-07-12 17:14:03.089076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.726 qpair failed and we were unable to recover it. 00:25:03.726 [2024-07-12 17:14:03.089199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.726 [2024-07-12 17:14:03.089241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.726 qpair failed and we were unable to recover it. 00:25:03.726 [2024-07-12 17:14:03.089340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.726 [2024-07-12 17:14:03.089371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.726 qpair failed and we were unable to recover it. 00:25:03.726 [2024-07-12 17:14:03.089505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.726 [2024-07-12 17:14:03.089536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.726 qpair failed and we were unable to recover it. 00:25:03.726 [2024-07-12 17:14:03.089695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.727 [2024-07-12 17:14:03.089721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.727 qpair failed and we were unable to recover it. 00:25:03.727 [2024-07-12 17:14:03.089846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.727 [2024-07-12 17:14:03.089872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.727 qpair failed and we were unable to recover it. 00:25:03.727 [2024-07-12 17:14:03.089996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.727 [2024-07-12 17:14:03.090037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.727 qpair failed and we were unable to recover it. 00:25:03.727 [2024-07-12 17:14:03.090168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.727 [2024-07-12 17:14:03.090194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.727 qpair failed and we were unable to recover it. 00:25:03.727 [2024-07-12 17:14:03.090321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.727 [2024-07-12 17:14:03.090347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.727 qpair failed and we were unable to recover it. 
00:25:03.727 [2024-07-12 17:14:03.090475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.727 [2024-07-12 17:14:03.090503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.727 qpair failed and we were unable to recover it. 00:25:03.727 [2024-07-12 17:14:03.090736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.727 [2024-07-12 17:14:03.090816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.727 qpair failed and we were unable to recover it. 00:25:03.727 [2024-07-12 17:14:03.090912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.727 [2024-07-12 17:14:03.090938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.727 qpair failed and we were unable to recover it. 00:25:03.727 [2024-07-12 17:14:03.091129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.727 [2024-07-12 17:14:03.091167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.727 qpair failed and we were unable to recover it. 00:25:03.727 [2024-07-12 17:14:03.091334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.727 [2024-07-12 17:14:03.091362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.727 qpair failed and we were unable to recover it. 00:25:03.727 [2024-07-12 17:14:03.091483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.727 [2024-07-12 17:14:03.091516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.727 qpair failed and we were unable to recover it. 00:25:03.727 [2024-07-12 17:14:03.091698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.727 [2024-07-12 17:14:03.091731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.727 qpair failed and we were unable to recover it. 00:25:03.727 [2024-07-12 17:14:03.091894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.727 [2024-07-12 17:14:03.091921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.727 qpair failed and we were unable to recover it. 00:25:03.727 [2024-07-12 17:14:03.092002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.727 [2024-07-12 17:14:03.092028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.727 qpair failed and we were unable to recover it. 00:25:03.727 [2024-07-12 17:14:03.092162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.727 [2024-07-12 17:14:03.092190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.727 qpair failed and we were unable to recover it. 
00:25:03.727 [2024-07-12 17:14:03.092362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.727 [2024-07-12 17:14:03.092425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.727 qpair failed and we were unable to recover it. 00:25:03.727 [2024-07-12 17:14:03.092622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.727 [2024-07-12 17:14:03.092684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.727 qpair failed and we were unable to recover it. 00:25:03.727 [2024-07-12 17:14:03.092893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.727 [2024-07-12 17:14:03.092920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.727 qpair failed and we were unable to recover it. 00:25:03.727 [2024-07-12 17:14:03.093014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.727 [2024-07-12 17:14:03.093040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.727 qpair failed and we were unable to recover it. 00:25:03.727 [2024-07-12 17:14:03.093161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.727 [2024-07-12 17:14:03.093186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.727 qpair failed and we were unable to recover it. 00:25:03.727 [2024-07-12 17:14:03.093385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.727 [2024-07-12 17:14:03.093440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.727 qpair failed and we were unable to recover it. 00:25:03.727 [2024-07-12 17:14:03.093682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.727 [2024-07-12 17:14:03.094176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.727 qpair failed and we were unable to recover it. 00:25:03.727 [2024-07-12 17:14:03.094431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.727 [2024-07-12 17:14:03.094487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.727 qpair failed and we were unable to recover it. 00:25:03.727 [2024-07-12 17:14:03.094667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.727 [2024-07-12 17:14:03.094694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.727 qpair failed and we were unable to recover it. 00:25:03.727 [2024-07-12 17:14:03.094828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.727 [2024-07-12 17:14:03.094855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.727 qpair failed and we were unable to recover it. 
00:25:03.727 [2024-07-12 17:14:03.094980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.727 [2024-07-12 17:14:03.095006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.727 qpair failed and we were unable to recover it. 00:25:03.727 [2024-07-12 17:14:03.095132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.727 [2024-07-12 17:14:03.095179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.727 qpair failed and we were unable to recover it. 00:25:03.727 [2024-07-12 17:14:03.095377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.727 [2024-07-12 17:14:03.095404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.727 qpair failed and we were unable to recover it. 00:25:03.727 [2024-07-12 17:14:03.095502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.727 [2024-07-12 17:14:03.095547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.727 qpair failed and we were unable to recover it. 00:25:03.727 [2024-07-12 17:14:03.095771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.727 [2024-07-12 17:14:03.095814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.727 qpair failed and we were unable to recover it. 00:25:03.727 [2024-07-12 17:14:03.095909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.727 [2024-07-12 17:14:03.095935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.727 qpair failed and we were unable to recover it. 00:25:03.727 [2024-07-12 17:14:03.096058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.727 [2024-07-12 17:14:03.096084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.727 qpair failed and we were unable to recover it. 00:25:03.727 [2024-07-12 17:14:03.096220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.727 [2024-07-12 17:14:03.096262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.727 qpair failed and we were unable to recover it. 00:25:03.727 [2024-07-12 17:14:03.096401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.727 [2024-07-12 17:14:03.096450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.727 qpair failed and we were unable to recover it. 00:25:03.727 [2024-07-12 17:14:03.096581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.727 [2024-07-12 17:14:03.096614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.727 qpair failed and we were unable to recover it. 
00:25:03.727 [2024-07-12 17:14:03.096727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.727 [2024-07-12 17:14:03.096778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.727 qpair failed and we were unable to recover it. 00:25:03.727 [2024-07-12 17:14:03.096894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.727 [2024-07-12 17:14:03.096920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:03.727 qpair failed and we were unable to recover it. 00:25:03.727 [2024-07-12 17:14:03.097089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.727 [2024-07-12 17:14:03.097129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.727 qpair failed and we were unable to recover it. 00:25:03.727 [2024-07-12 17:14:03.097278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.727 [2024-07-12 17:14:03.097321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.727 qpair failed and we were unable to recover it. 00:25:03.727 [2024-07-12 17:14:03.097534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.727 [2024-07-12 17:14:03.097593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.727 qpair failed and we were unable to recover it. 00:25:03.727 [2024-07-12 17:14:03.097814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.727 [2024-07-12 17:14:03.097841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.728 qpair failed and we were unable to recover it. 00:25:03.728 [2024-07-12 17:14:03.097945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.728 [2024-07-12 17:14:03.097972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.728 qpair failed and we were unable to recover it. 00:25:03.728 [2024-07-12 17:14:03.098182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.728 [2024-07-12 17:14:03.098209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.728 qpair failed and we were unable to recover it. 00:25:03.728 [2024-07-12 17:14:03.098381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.728 [2024-07-12 17:14:03.098432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.728 qpair failed and we were unable to recover it. 00:25:03.728 [2024-07-12 17:14:03.098752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.728 [2024-07-12 17:14:03.098809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.728 qpair failed and we were unable to recover it. 
00:25:03.728 [2024-07-12 17:14:03.098937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.728 [2024-07-12 17:14:03.098963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.728 qpair failed and we were unable to recover it. 00:25:03.728 [2024-07-12 17:14:03.099076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.728 [2024-07-12 17:14:03.099113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.728 qpair failed and we were unable to recover it. 00:25:03.728 [2024-07-12 17:14:03.099245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.728 [2024-07-12 17:14:03.099271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.728 qpair failed and we were unable to recover it. 00:25:03.728 [2024-07-12 17:14:03.099463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.728 [2024-07-12 17:14:03.099497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.728 qpair failed and we were unable to recover it. 00:25:03.728 [2024-07-12 17:14:03.099670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.728 [2024-07-12 17:14:03.099706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.728 qpair failed and we were unable to recover it. 00:25:03.728 [2024-07-12 17:14:03.099841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.728 [2024-07-12 17:14:03.099868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.728 qpair failed and we were unable to recover it. 00:25:03.728 [2024-07-12 17:14:03.099953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.728 [2024-07-12 17:14:03.099979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.728 qpair failed and we were unable to recover it. 00:25:03.728 [2024-07-12 17:14:03.100134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.728 [2024-07-12 17:14:03.100163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.728 qpair failed and we were unable to recover it. 00:25:03.728 [2024-07-12 17:14:03.100322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.728 [2024-07-12 17:14:03.100376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.728 qpair failed and we were unable to recover it. 00:25:03.728 [2024-07-12 17:14:03.100575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.728 [2024-07-12 17:14:03.100613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.728 qpair failed and we were unable to recover it. 
00:25:03.728 [2024-07-12 17:14:03.100728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.728 [2024-07-12 17:14:03.100789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.728 qpair failed and we were unable to recover it. 00:25:03.728 [2024-07-12 17:14:03.100916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.728 [2024-07-12 17:14:03.100942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.728 qpair failed and we were unable to recover it. 00:25:03.728 [2024-07-12 17:14:03.101037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.728 [2024-07-12 17:14:03.101063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.728 qpair failed and we were unable to recover it. 00:25:03.728 [2024-07-12 17:14:03.101207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.728 [2024-07-12 17:14:03.101237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.728 qpair failed and we were unable to recover it. 00:25:03.728 [2024-07-12 17:14:03.101462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.728 [2024-07-12 17:14:03.101498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.728 qpair failed and we were unable to recover it. 00:25:03.728 [2024-07-12 17:14:03.101664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.728 [2024-07-12 17:14:03.101694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.728 qpair failed and we were unable to recover it. 00:25:03.728 [2024-07-12 17:14:03.101816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.728 [2024-07-12 17:14:03.101843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.728 qpair failed and we were unable to recover it. 00:25:03.728 [2024-07-12 17:14:03.101939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.728 [2024-07-12 17:14:03.101965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.728 qpair failed and we were unable to recover it. 00:25:03.728 [2024-07-12 17:14:03.102110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.728 [2024-07-12 17:14:03.102152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.728 qpair failed and we were unable to recover it. 00:25:03.728 [2024-07-12 17:14:03.102374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.728 [2024-07-12 17:14:03.102427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.728 qpair failed and we were unable to recover it. 
00:25:03.728 [2024-07-12 17:14:03.102592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.728 [2024-07-12 17:14:03.102635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.728 qpair failed and we were unable to recover it. 00:25:03.728 [2024-07-12 17:14:03.102796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.728 [2024-07-12 17:14:03.102823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.728 qpair failed and we were unable to recover it. 00:25:03.728 [2024-07-12 17:14:03.102972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.728 [2024-07-12 17:14:03.102998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.728 qpair failed and we were unable to recover it. 00:25:03.728 [2024-07-12 17:14:03.103193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.728 [2024-07-12 17:14:03.103254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.728 qpair failed and we were unable to recover it. 00:25:03.728 [2024-07-12 17:14:03.103384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.728 [2024-07-12 17:14:03.103439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.728 qpair failed and we were unable to recover it. 00:25:03.728 [2024-07-12 17:14:03.103605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.728 [2024-07-12 17:14:03.103637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.729 qpair failed and we were unable to recover it. 00:25:03.729 [2024-07-12 17:14:03.103804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.729 [2024-07-12 17:14:03.103833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.729 qpair failed and we were unable to recover it. 00:25:03.729 [2024-07-12 17:14:03.104044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.729 [2024-07-12 17:14:03.104074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.729 qpair failed and we were unable to recover it. 00:25:03.729 [2024-07-12 17:14:03.104240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.729 [2024-07-12 17:14:03.104301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.729 qpair failed and we were unable to recover it. 00:25:03.729 [2024-07-12 17:14:03.104568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.729 [2024-07-12 17:14:03.104601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.729 qpair failed and we were unable to recover it. 
00:25:03.729 [2024-07-12 17:14:03.104863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.729 [2024-07-12 17:14:03.104891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.729 qpair failed and we were unable to recover it. 00:25:03.729 [2024-07-12 17:14:03.105072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.729 [2024-07-12 17:14:03.105128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.729 qpair failed and we were unable to recover it. 00:25:03.729 [2024-07-12 17:14:03.105338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.729 [2024-07-12 17:14:03.105371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.729 qpair failed and we were unable to recover it. 00:25:03.729 [2024-07-12 17:14:03.105479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.729 [2024-07-12 17:14:03.105516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.729 qpair failed and we were unable to recover it. 00:25:03.729 [2024-07-12 17:14:03.105681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.729 [2024-07-12 17:14:03.105713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.729 qpair failed and we were unable to recover it. 00:25:03.729 [2024-07-12 17:14:03.105886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.729 [2024-07-12 17:14:03.105914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.729 qpair failed and we were unable to recover it. 00:25:03.729 [2024-07-12 17:14:03.106121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.729 [2024-07-12 17:14:03.106181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.729 qpair failed and we were unable to recover it. 00:25:03.729 [2024-07-12 17:14:03.106368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.729 [2024-07-12 17:14:03.106400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.729 qpair failed and we were unable to recover it. 00:25:03.729 [2024-07-12 17:14:03.106587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.729 [2024-07-12 17:14:03.106627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.729 qpair failed and we were unable to recover it. 00:25:03.729 [2024-07-12 17:14:03.106812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.729 [2024-07-12 17:14:03.106840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.729 qpair failed and we were unable to recover it. 
00:25:03.729 [2024-07-12 17:14:03.107000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.729 [2024-07-12 17:14:03.107028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.729 qpair failed and we were unable to recover it. 00:25:03.729 [2024-07-12 17:14:03.107228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.729 [2024-07-12 17:14:03.107289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.729 qpair failed and we were unable to recover it. 00:25:03.729 [2024-07-12 17:14:03.107401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.729 [2024-07-12 17:14:03.107433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.729 qpair failed and we were unable to recover it. 00:25:03.729 [2024-07-12 17:14:03.107638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.729 [2024-07-12 17:14:03.107682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.729 qpair failed and we were unable to recover it. 00:25:03.729 [2024-07-12 17:14:03.107888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.729 [2024-07-12 17:14:03.107928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.729 qpair failed and we were unable to recover it. 00:25:03.729 [2024-07-12 17:14:03.108081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.729 [2024-07-12 17:14:03.108133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.729 qpair failed and we were unable to recover it. 00:25:03.729 [2024-07-12 17:14:03.108332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.729 [2024-07-12 17:14:03.108383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.729 qpair failed and we were unable to recover it. 00:25:03.729 [2024-07-12 17:14:03.108553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.729 [2024-07-12 17:14:03.108586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.729 qpair failed and we were unable to recover it. 00:25:03.729 [2024-07-12 17:14:03.108734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.729 [2024-07-12 17:14:03.108823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.729 qpair failed and we were unable to recover it. 00:25:03.729 [2024-07-12 17:14:03.108983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.729 [2024-07-12 17:14:03.109021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.729 qpair failed and we were unable to recover it. 
00:25:03.729 [2024-07-12 17:14:03.109283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.729 [2024-07-12 17:14:03.109337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.729 qpair failed and we were unable to recover it. 00:25:03.729 [2024-07-12 17:14:03.109501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.729 [2024-07-12 17:14:03.109542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.729 qpair failed and we were unable to recover it. 00:25:03.729 [2024-07-12 17:14:03.109766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.729 [2024-07-12 17:14:03.109826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.729 qpair failed and we were unable to recover it. 00:25:03.729 [2024-07-12 17:14:03.109925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.729 [2024-07-12 17:14:03.109953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.729 qpair failed and we were unable to recover it. 00:25:03.729 [2024-07-12 17:14:03.110169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.729 [2024-07-12 17:14:03.110219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.729 qpair failed and we were unable to recover it. 00:25:03.729 [2024-07-12 17:14:03.110374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.729 [2024-07-12 17:14:03.110430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.729 qpair failed and we were unable to recover it. 00:25:03.729 [2024-07-12 17:14:03.110573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.729 [2024-07-12 17:14:03.110609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.729 qpair failed and we were unable to recover it. 00:25:03.729 [2024-07-12 17:14:03.110845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.729 [2024-07-12 17:14:03.110873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.729 qpair failed and we were unable to recover it. 00:25:03.729 [2024-07-12 17:14:03.110973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.729 [2024-07-12 17:14:03.111001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.729 qpair failed and we were unable to recover it. 00:25:03.729 [2024-07-12 17:14:03.111146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.729 [2024-07-12 17:14:03.111204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.729 qpair failed and we were unable to recover it. 
00:25:03.729 [2024-07-12 17:14:03.111346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.730 [2024-07-12 17:14:03.111384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.730 qpair failed and we were unable to recover it. 00:25:03.730 [2024-07-12 17:14:03.111572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.730 [2024-07-12 17:14:03.111604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.730 qpair failed and we were unable to recover it. 00:25:03.730 [2024-07-12 17:14:03.111753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.730 [2024-07-12 17:14:03.111798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.730 qpair failed and we were unable to recover it. 00:25:03.730 [2024-07-12 17:14:03.112008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.730 [2024-07-12 17:14:03.112060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.730 qpair failed and we were unable to recover it. 00:25:03.730 [2024-07-12 17:14:03.112226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.730 [2024-07-12 17:14:03.112280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.730 qpair failed and we were unable to recover it. 00:25:03.730 [2024-07-12 17:14:03.112428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.730 [2024-07-12 17:14:03.112461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.730 qpair failed and we were unable to recover it. 00:25:03.730 [2024-07-12 17:14:03.112660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.730 [2024-07-12 17:14:03.112693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.730 qpair failed and we were unable to recover it. 00:25:03.730 [2024-07-12 17:14:03.112873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.730 [2024-07-12 17:14:03.112901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.730 qpair failed and we were unable to recover it. 00:25:03.730 [2024-07-12 17:14:03.113019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.730 [2024-07-12 17:14:03.113064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.730 qpair failed and we were unable to recover it. 00:25:03.730 [2024-07-12 17:14:03.113208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.730 [2024-07-12 17:14:03.113266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.730 qpair failed and we were unable to recover it. 
00:25:03.730 [2024-07-12 17:14:03.113415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.730 [2024-07-12 17:14:03.113447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.730 qpair failed and we were unable to recover it. 00:25:03.730 [2024-07-12 17:14:03.113647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.730 [2024-07-12 17:14:03.113688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.730 qpair failed and we were unable to recover it. 00:25:03.730 [2024-07-12 17:14:03.113853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.730 [2024-07-12 17:14:03.113886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.730 qpair failed and we were unable to recover it. 00:25:03.730 [2024-07-12 17:14:03.114113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.730 [2024-07-12 17:14:03.114152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.730 qpair failed and we were unable to recover it. 00:25:03.730 [2024-07-12 17:14:03.114279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.730 [2024-07-12 17:14:03.114335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.730 qpair failed and we were unable to recover it. 00:25:03.730 [2024-07-12 17:14:03.114486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.730 [2024-07-12 17:14:03.114524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.730 qpair failed and we were unable to recover it. 00:25:03.730 [2024-07-12 17:14:03.114664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.730 [2024-07-12 17:14:03.114695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.730 qpair failed and we were unable to recover it. 00:25:03.730 [2024-07-12 17:14:03.114835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.730 [2024-07-12 17:14:03.114863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.730 qpair failed and we were unable to recover it. 00:25:03.730 [2024-07-12 17:14:03.114959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.730 [2024-07-12 17:14:03.114987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.730 qpair failed and we were unable to recover it. 00:25:03.730 [2024-07-12 17:14:03.115097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.730 [2024-07-12 17:14:03.115139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.730 qpair failed and we were unable to recover it. 
00:25:03.730 [2024-07-12 17:14:03.115330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.730 [2024-07-12 17:14:03.115362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.730 qpair failed and we were unable to recover it. 00:25:03.730 [2024-07-12 17:14:03.115550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.730 [2024-07-12 17:14:03.115582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.730 qpair failed and we were unable to recover it. 00:25:03.730 [2024-07-12 17:14:03.115728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.730 [2024-07-12 17:14:03.115768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.730 qpair failed and we were unable to recover it. 00:25:03.730 [2024-07-12 17:14:03.115950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.730 [2024-07-12 17:14:03.115978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.730 qpair failed and we were unable to recover it. 00:25:03.730 [2024-07-12 17:14:03.116192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.730 [2024-07-12 17:14:03.116247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.730 qpair failed and we were unable to recover it. 00:25:03.730 [2024-07-12 17:14:03.116436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.730 [2024-07-12 17:14:03.116475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.730 qpair failed and we were unable to recover it. 00:25:03.730 [2024-07-12 17:14:03.116615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.730 [2024-07-12 17:14:03.116654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.730 qpair failed and we were unable to recover it. 00:25:03.730 [2024-07-12 17:14:03.116877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.730 [2024-07-12 17:14:03.116906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.730 qpair failed and we were unable to recover it. 00:25:03.730 [2024-07-12 17:14:03.117025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.730 [2024-07-12 17:14:03.117071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.730 qpair failed and we were unable to recover it. 00:25:03.730 [2024-07-12 17:14:03.117200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.730 [2024-07-12 17:14:03.117261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.730 qpair failed and we were unable to recover it. 
00:25:03.730 [2024-07-12 17:14:03.117397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.730 [2024-07-12 17:14:03.117441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.730 qpair failed and we were unable to recover it. 00:25:03.730 [2024-07-12 17:14:03.117589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.730 [2024-07-12 17:14:03.117622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.730 qpair failed and we were unable to recover it. 00:25:03.730 [2024-07-12 17:14:03.117829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.730 [2024-07-12 17:14:03.117857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.730 qpair failed and we were unable to recover it. 00:25:03.730 [2024-07-12 17:14:03.117986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.730 [2024-07-12 17:14:03.118014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.730 qpair failed and we were unable to recover it. 00:25:03.730 [2024-07-12 17:14:03.118140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.730 [2024-07-12 17:14:03.118182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.730 qpair failed and we were unable to recover it. 00:25:03.730 [2024-07-12 17:14:03.118361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.730 [2024-07-12 17:14:03.118393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.730 qpair failed and we were unable to recover it. 00:25:03.730 [2024-07-12 17:14:03.118596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.730 [2024-07-12 17:14:03.118636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.730 qpair failed and we were unable to recover it. 00:25:03.730 [2024-07-12 17:14:03.118832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.730 [2024-07-12 17:14:03.118861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.730 qpair failed and we were unable to recover it. 00:25:03.730 [2024-07-12 17:14:03.118983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.730 [2024-07-12 17:14:03.119011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.730 qpair failed and we were unable to recover it. 00:25:03.730 [2024-07-12 17:14:03.119150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.730 [2024-07-12 17:14:03.119183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.730 qpair failed and we were unable to recover it. 
00:25:03.730 [2024-07-12 17:14:03.119342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.730 [2024-07-12 17:14:03.119374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.730 qpair failed and we were unable to recover it. 00:25:03.731 [2024-07-12 17:14:03.119532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.731 [2024-07-12 17:14:03.119564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.731 qpair failed and we were unable to recover it. 00:25:03.731 [2024-07-12 17:14:03.119797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.731 [2024-07-12 17:14:03.119826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.731 qpair failed and we were unable to recover it. 00:25:03.731 [2024-07-12 17:14:03.119979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.731 [2024-07-12 17:14:03.120007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.731 qpair failed and we were unable to recover it. 00:25:03.731 [2024-07-12 17:14:03.120169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.731 [2024-07-12 17:14:03.120213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.731 qpair failed and we were unable to recover it. 00:25:03.731 [2024-07-12 17:14:03.120393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.731 [2024-07-12 17:14:03.120425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.731 qpair failed and we were unable to recover it. 00:25:03.731 [2024-07-12 17:14:03.120594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.731 [2024-07-12 17:14:03.120626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.731 qpair failed and we were unable to recover it. 00:25:03.731 [2024-07-12 17:14:03.120755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.731 [2024-07-12 17:14:03.120787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.731 qpair failed and we were unable to recover it. 00:25:03.731 [2024-07-12 17:14:03.120977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.731 [2024-07-12 17:14:03.121005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.731 qpair failed and we were unable to recover it. 00:25:03.731 [2024-07-12 17:14:03.121188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.731 [2024-07-12 17:14:03.121239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.731 qpair failed and we were unable to recover it. 
00:25:03.731 [2024-07-12 17:14:03.121384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.731 [2024-07-12 17:14:03.121437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.731 qpair failed and we were unable to recover it. 00:25:03.731 [2024-07-12 17:14:03.121610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.731 [2024-07-12 17:14:03.121643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.731 qpair failed and we were unable to recover it. 00:25:03.731 [2024-07-12 17:14:03.121791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.731 [2024-07-12 17:14:03.121819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.731 qpair failed and we were unable to recover it. 00:25:03.731 [2024-07-12 17:14:03.122077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.731 [2024-07-12 17:14:03.122131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.731 qpair failed and we were unable to recover it. 00:25:03.731 [2024-07-12 17:14:03.122318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.731 [2024-07-12 17:14:03.122373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.731 qpair failed and we were unable to recover it. 00:25:03.731 [2024-07-12 17:14:03.122534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.731 [2024-07-12 17:14:03.122575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.731 qpair failed and we were unable to recover it. 00:25:03.731 [2024-07-12 17:14:03.122717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.731 [2024-07-12 17:14:03.122757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.731 qpair failed and we were unable to recover it. 00:25:03.731 [2024-07-12 17:14:03.122931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.731 [2024-07-12 17:14:03.122959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.731 qpair failed and we were unable to recover it. 00:25:03.731 [2024-07-12 17:14:03.123144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.731 [2024-07-12 17:14:03.123203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.731 qpair failed and we were unable to recover it. 00:25:03.731 [2024-07-12 17:14:03.123399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.731 [2024-07-12 17:14:03.123458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.731 qpair failed and we were unable to recover it. 
00:25:03.731 [2024-07-12 17:14:03.123636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.731 [2024-07-12 17:14:03.123668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.731 qpair failed and we were unable to recover it. 00:25:03.731 [2024-07-12 17:14:03.123851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.731 [2024-07-12 17:14:03.123879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.731 qpair failed and we were unable to recover it. 00:25:03.731 [2024-07-12 17:14:03.124046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.731 [2024-07-12 17:14:03.124079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.731 qpair failed and we were unable to recover it. 00:25:03.731 [2024-07-12 17:14:03.124294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.731 [2024-07-12 17:14:03.124346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.731 qpair failed and we were unable to recover it. 00:25:03.731 [2024-07-12 17:14:03.124489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.731 [2024-07-12 17:14:03.124521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.731 qpair failed and we were unable to recover it. 00:25:03.731 [2024-07-12 17:14:03.124724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.731 [2024-07-12 17:14:03.124784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.731 qpair failed and we were unable to recover it. 00:25:03.731 [2024-07-12 17:14:03.124931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.731 [2024-07-12 17:14:03.124958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.731 qpair failed and we were unable to recover it. 00:25:03.731 [2024-07-12 17:14:03.125132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.731 [2024-07-12 17:14:03.125199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.731 qpair failed and we were unable to recover it. 00:25:03.731 [2024-07-12 17:14:03.125317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.731 [2024-07-12 17:14:03.125386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.731 qpair failed and we were unable to recover it. 00:25:03.731 [2024-07-12 17:14:03.125553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.731 [2024-07-12 17:14:03.125587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.731 qpair failed and we were unable to recover it. 
00:25:03.731 [2024-07-12 17:14:03.125717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.731 [2024-07-12 17:14:03.125811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.731 qpair failed and we were unable to recover it. 00:25:03.731 [2024-07-12 17:14:03.125978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.731 [2024-07-12 17:14:03.126014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.731 qpair failed and we were unable to recover it. 00:25:03.731 [2024-07-12 17:14:03.126159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.731 [2024-07-12 17:14:03.126192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.731 qpair failed and we were unable to recover it. 00:25:03.731 [2024-07-12 17:14:03.126356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.732 [2024-07-12 17:14:03.126388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.732 qpair failed and we were unable to recover it. 00:25:03.732 [2024-07-12 17:14:03.126571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.732 [2024-07-12 17:14:03.126603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.732 qpair failed and we were unable to recover it. 00:25:03.732 [2024-07-12 17:14:03.126826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.732 [2024-07-12 17:14:03.126855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.732 qpair failed and we were unable to recover it. 00:25:03.732 [2024-07-12 17:14:03.127060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.732 [2024-07-12 17:14:03.127098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.732 qpair failed and we were unable to recover it. 00:25:03.732 [2024-07-12 17:14:03.127241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.732 [2024-07-12 17:14:03.127273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.732 qpair failed and we were unable to recover it. 00:25:03.732 [2024-07-12 17:14:03.127439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.732 [2024-07-12 17:14:03.127480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.732 qpair failed and we were unable to recover it. 00:25:03.732 [2024-07-12 17:14:03.127626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.732 [2024-07-12 17:14:03.127658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.732 qpair failed and we were unable to recover it. 
00:25:03.732 [2024-07-12 17:14:03.127811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.732 [2024-07-12 17:14:03.127845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.732 qpair failed and we were unable to recover it. 00:25:03.732 [2024-07-12 17:14:03.127985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.732 [2024-07-12 17:14:03.128013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.732 qpair failed and we were unable to recover it. 00:25:03.732 [2024-07-12 17:14:03.128207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.732 [2024-07-12 17:14:03.128261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.732 qpair failed and we were unable to recover it. 00:25:03.732 [2024-07-12 17:14:03.128517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.732 [2024-07-12 17:14:03.128550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.732 qpair failed and we were unable to recover it. 00:25:03.732 [2024-07-12 17:14:03.128745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.732 [2024-07-12 17:14:03.128791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.732 qpair failed and we were unable to recover it. 00:25:03.732 [2024-07-12 17:14:03.128911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.732 [2024-07-12 17:14:03.128939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.732 qpair failed and we were unable to recover it. 00:25:03.732 [2024-07-12 17:14:03.129167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.732 [2024-07-12 17:14:03.129221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.732 qpair failed and we were unable to recover it. 00:25:03.732 [2024-07-12 17:14:03.129336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.732 [2024-07-12 17:14:03.129398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.732 qpair failed and we were unable to recover it. 00:25:03.732 [2024-07-12 17:14:03.129541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.732 [2024-07-12 17:14:03.129572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.732 qpair failed and we were unable to recover it. 00:25:03.732 [2024-07-12 17:14:03.129772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.732 [2024-07-12 17:14:03.129822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.732 qpair failed and we were unable to recover it. 
00:25:03.732 [2024-07-12 17:14:03.129937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.732 [2024-07-12 17:14:03.129964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.732 qpair failed and we were unable to recover it. 00:25:03.732 [2024-07-12 17:14:03.130204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.732 [2024-07-12 17:14:03.130263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.732 qpair failed and we were unable to recover it. 00:25:03.732 [2024-07-12 17:14:03.130469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.732 [2024-07-12 17:14:03.130523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.732 qpair failed and we were unable to recover it. 00:25:03.732 [2024-07-12 17:14:03.130663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.732 [2024-07-12 17:14:03.130700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.732 qpair failed and we were unable to recover it. 00:25:03.732 [2024-07-12 17:14:03.130903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.732 [2024-07-12 17:14:03.130931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.732 qpair failed and we were unable to recover it. 00:25:03.732 [2024-07-12 17:14:03.131096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.732 [2024-07-12 17:14:03.131156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.732 qpair failed and we were unable to recover it. 00:25:03.732 [2024-07-12 17:14:03.131270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.732 [2024-07-12 17:14:03.131301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.732 qpair failed and we were unable to recover it. 00:25:03.732 [2024-07-12 17:14:03.131469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.732 [2024-07-12 17:14:03.131501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.732 qpair failed and we were unable to recover it. 00:25:03.732 [2024-07-12 17:14:03.131600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.732 [2024-07-12 17:14:03.131632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.732 qpair failed and we were unable to recover it. 00:25:03.732 [2024-07-12 17:14:03.131802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.732 [2024-07-12 17:14:03.131831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.732 qpair failed and we were unable to recover it. 
00:25:03.732 [2024-07-12 17:14:03.131995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.732 [2024-07-12 17:14:03.132049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.732 qpair failed and we were unable to recover it. 00:25:03.732 [2024-07-12 17:14:03.132188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.732 [2024-07-12 17:14:03.132240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.732 qpair failed and we were unable to recover it. 00:25:03.732 [2024-07-12 17:14:03.132404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.732 [2024-07-12 17:14:03.132447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.732 qpair failed and we were unable to recover it. 00:25:03.732 [2024-07-12 17:14:03.132586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.732 [2024-07-12 17:14:03.132628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.732 qpair failed and we were unable to recover it. 00:25:03.732 [2024-07-12 17:14:03.132857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.732 [2024-07-12 17:14:03.132883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.732 qpair failed and we were unable to recover it. 00:25:03.732 [2024-07-12 17:14:03.133049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.732 [2024-07-12 17:14:03.133081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.732 qpair failed and we were unable to recover it. 00:25:03.732 [2024-07-12 17:14:03.133222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.732 [2024-07-12 17:14:03.133279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.732 qpair failed and we were unable to recover it. 00:25:03.732 [2024-07-12 17:14:03.133440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.732 [2024-07-12 17:14:03.133472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.732 qpair failed and we were unable to recover it. 00:25:03.732 [2024-07-12 17:14:03.133611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.732 [2024-07-12 17:14:03.133643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.732 qpair failed and we were unable to recover it. 00:25:03.732 [2024-07-12 17:14:03.133814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.732 [2024-07-12 17:14:03.133844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.732 qpair failed and we were unable to recover it. 
00:25:03.732 [2024-07-12 17:14:03.133996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.732 [2024-07-12 17:14:03.134041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.732 qpair failed and we were unable to recover it. 00:25:03.732 [2024-07-12 17:14:03.134151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.732 [2024-07-12 17:14:03.134197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.732 qpair failed and we were unable to recover it. 00:25:03.733 [2024-07-12 17:14:03.134326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.733 [2024-07-12 17:14:03.134355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.733 qpair failed and we were unable to recover it. 00:25:03.733 [2024-07-12 17:14:03.134479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.733 [2024-07-12 17:14:03.134511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.733 qpair failed and we were unable to recover it. 00:25:03.733 [2024-07-12 17:14:03.134653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.733 [2024-07-12 17:14:03.134685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.733 qpair failed and we were unable to recover it. 00:25:03.733 [2024-07-12 17:14:03.134830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.733 [2024-07-12 17:14:03.134859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.733 qpair failed and we were unable to recover it. 00:25:03.733 [2024-07-12 17:14:03.134982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.733 [2024-07-12 17:14:03.135010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.733 qpair failed and we were unable to recover it. 00:25:03.733 [2024-07-12 17:14:03.135152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.733 [2024-07-12 17:14:03.135182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.733 qpair failed and we were unable to recover it. 00:25:03.733 [2024-07-12 17:14:03.135288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.733 [2024-07-12 17:14:03.135320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.733 qpair failed and we were unable to recover it. 00:25:03.733 [2024-07-12 17:14:03.135459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.733 [2024-07-12 17:14:03.135506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.733 qpair failed and we were unable to recover it. 
00:25:03.733 [2024-07-12 17:14:03.135649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.733 [2024-07-12 17:14:03.135681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.733 qpair failed and we were unable to recover it. 00:25:03.733 [2024-07-12 17:14:03.135823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.733 [2024-07-12 17:14:03.135851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.733 qpair failed and we were unable to recover it. 00:25:03.733 [2024-07-12 17:14:03.135950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.733 [2024-07-12 17:14:03.135978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.733 qpair failed and we were unable to recover it. 00:25:03.733 [2024-07-12 17:14:03.136093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.733 [2024-07-12 17:14:03.136123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.733 qpair failed and we were unable to recover it. 00:25:03.733 [2024-07-12 17:14:03.136263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.733 [2024-07-12 17:14:03.136296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.733 qpair failed and we were unable to recover it. 00:25:03.733 [2024-07-12 17:14:03.136414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.733 [2024-07-12 17:14:03.136457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.733 qpair failed and we were unable to recover it. 00:25:03.733 [2024-07-12 17:14:03.136554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.733 [2024-07-12 17:14:03.136586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.733 qpair failed and we were unable to recover it. 00:25:03.733 [2024-07-12 17:14:03.136696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.733 [2024-07-12 17:14:03.136726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.733 qpair failed and we were unable to recover it. 00:25:03.733 [2024-07-12 17:14:03.136846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.733 [2024-07-12 17:14:03.136874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.733 qpair failed and we were unable to recover it. 00:25:03.733 [2024-07-12 17:14:03.137042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.733 [2024-07-12 17:14:03.137071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.733 qpair failed and we were unable to recover it. 
00:25:03.733 [2024-07-12 17:14:03.137207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.733 [2024-07-12 17:14:03.137236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.733 qpair failed and we were unable to recover it. 00:25:03.733 [2024-07-12 17:14:03.137409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.733 [2024-07-12 17:14:03.137441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.733 qpair failed and we were unable to recover it. 00:25:03.733 [2024-07-12 17:14:03.137569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.733 [2024-07-12 17:14:03.137601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.733 qpair failed and we were unable to recover it. 00:25:03.733 [2024-07-12 17:14:03.137732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.733 [2024-07-12 17:14:03.137791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.733 qpair failed and we were unable to recover it. 00:25:03.733 [2024-07-12 17:14:03.137912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.733 [2024-07-12 17:14:03.137940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.733 qpair failed and we were unable to recover it. 00:25:03.733 [2024-07-12 17:14:03.138042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.733 [2024-07-12 17:14:03.138073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.733 qpair failed and we were unable to recover it. 00:25:03.733 [2024-07-12 17:14:03.138201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.733 [2024-07-12 17:14:03.138233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.733 qpair failed and we were unable to recover it. 00:25:03.733 [2024-07-12 17:14:03.138359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.733 [2024-07-12 17:14:03.138404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.733 qpair failed and we were unable to recover it. 00:25:03.733 [2024-07-12 17:14:03.138550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.733 [2024-07-12 17:14:03.138582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.733 qpair failed and we were unable to recover it. 00:25:03.733 [2024-07-12 17:14:03.138727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-07-12 17:14:03.138791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 
00:25:03.734 [2024-07-12 17:14:03.138914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-07-12 17:14:03.138942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-07-12 17:14:03.139047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-07-12 17:14:03.139076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-07-12 17:14:03.139203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-07-12 17:14:03.139233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-07-12 17:14:03.139384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-07-12 17:14:03.139436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-07-12 17:14:03.139553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-07-12 17:14:03.139582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-07-12 17:14:03.139704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-07-12 17:14:03.139733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-07-12 17:14:03.139856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-07-12 17:14:03.139883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-07-12 17:14:03.139986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-07-12 17:14:03.140030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-07-12 17:14:03.140157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-07-12 17:14:03.140186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-07-12 17:14:03.140315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-07-12 17:14:03.140345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 
00:25:03.734 [2024-07-12 17:14:03.140515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-07-12 17:14:03.140547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-07-12 17:14:03.140700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-07-12 17:14:03.140732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-07-12 17:14:03.140866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-07-12 17:14:03.140894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-07-12 17:14:03.141035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-07-12 17:14:03.141072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-07-12 17:14:03.141176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-07-12 17:14:03.141208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-07-12 17:14:03.141307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-07-12 17:14:03.141340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-07-12 17:14:03.141440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-07-12 17:14:03.141472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-07-12 17:14:03.141629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-07-12 17:14:03.141661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-07-12 17:14:03.141796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-07-12 17:14:03.141825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-07-12 17:14:03.141973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-07-12 17:14:03.142001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 
00:25:03.734 [2024-07-12 17:14:03.142144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-07-12 17:14:03.142177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-07-12 17:14:03.142309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-07-12 17:14:03.142342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-07-12 17:14:03.142473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-07-12 17:14:03.142505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-07-12 17:14:03.142633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-07-12 17:14:03.142666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-07-12 17:14:03.142797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-07-12 17:14:03.142825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-07-12 17:14:03.142919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-07-12 17:14:03.142947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-07-12 17:14:03.143070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-07-12 17:14:03.143103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-07-12 17:14:03.143238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-07-12 17:14:03.143270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-07-12 17:14:03.143376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-07-12 17:14:03.143408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-07-12 17:14:03.143518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-07-12 17:14:03.143550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 
00:25:03.734 [2024-07-12 17:14:03.143658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-07-12 17:14:03.143690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-07-12 17:14:03.143815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-07-12 17:14:03.143843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-07-12 17:14:03.143941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-07-12 17:14:03.143969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-07-12 17:14:03.144137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-07-12 17:14:03.144173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-07-12 17:14:03.144286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-07-12 17:14:03.144318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-07-12 17:14:03.144450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-07-12 17:14:03.144482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-07-12 17:14:03.144578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-07-12 17:14:03.144609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-07-12 17:14:03.144711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-07-12 17:14:03.144751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-07-12 17:14:03.144891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.734 [2024-07-12 17:14:03.144919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.734 qpair failed and we were unable to recover it. 00:25:03.734 [2024-07-12 17:14:03.145053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-07-12 17:14:03.145085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 
00:25:03.735 [2024-07-12 17:14:03.145190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-07-12 17:14:03.145223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 00:25:03.735 [2024-07-12 17:14:03.145382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-07-12 17:14:03.145414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 00:25:03.735 [2024-07-12 17:14:03.145543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-07-12 17:14:03.145575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 00:25:03.735 [2024-07-12 17:14:03.145765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-07-12 17:14:03.145810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 00:25:03.735 [2024-07-12 17:14:03.145902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-07-12 17:14:03.145930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 00:25:03.735 [2024-07-12 17:14:03.146060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-07-12 17:14:03.146093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 00:25:03.735 [2024-07-12 17:14:03.146211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-07-12 17:14:03.146243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 00:25:03.735 [2024-07-12 17:14:03.146355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-07-12 17:14:03.146387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 00:25:03.735 [2024-07-12 17:14:03.146522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-07-12 17:14:03.146554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 00:25:03.735 [2024-07-12 17:14:03.146666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-07-12 17:14:03.146699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 
00:25:03.735 [2024-07-12 17:14:03.146849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-07-12 17:14:03.146881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 00:25:03.735 [2024-07-12 17:14:03.147017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-07-12 17:14:03.147049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 00:25:03.735 [2024-07-12 17:14:03.147210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-07-12 17:14:03.147242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 00:25:03.735 [2024-07-12 17:14:03.147339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-07-12 17:14:03.147371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 00:25:03.735 [2024-07-12 17:14:03.147533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-07-12 17:14:03.147565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 00:25:03.735 [2024-07-12 17:14:03.147683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-07-12 17:14:03.147715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 00:25:03.735 [2024-07-12 17:14:03.147868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-07-12 17:14:03.147901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 00:25:03.735 [2024-07-12 17:14:03.148004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-07-12 17:14:03.148036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 00:25:03.735 [2024-07-12 17:14:03.148151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-07-12 17:14:03.148184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 00:25:03.735 [2024-07-12 17:14:03.148310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-07-12 17:14:03.148342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 
00:25:03.735 [2024-07-12 17:14:03.148526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-07-12 17:14:03.148577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 00:25:03.735 [2024-07-12 17:14:03.148697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-07-12 17:14:03.148734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 00:25:03.735 [2024-07-12 17:14:03.148901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-07-12 17:14:03.148935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 00:25:03.735 [2024-07-12 17:14:03.149067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-07-12 17:14:03.149107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 00:25:03.735 [2024-07-12 17:14:03.149308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-07-12 17:14:03.149341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 00:25:03.735 [2024-07-12 17:14:03.149478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-07-12 17:14:03.149512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 00:25:03.735 [2024-07-12 17:14:03.149622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-07-12 17:14:03.149654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 00:25:03.735 [2024-07-12 17:14:03.149786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-07-12 17:14:03.149818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 00:25:03.735 [2024-07-12 17:14:03.149916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-07-12 17:14:03.149947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 00:25:03.735 [2024-07-12 17:14:03.150105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-07-12 17:14:03.150135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 
00:25:03.735 [2024-07-12 17:14:03.150261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.735 [2024-07-12 17:14:03.150292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.735 qpair failed and we were unable to recover it. 00:25:03.735 [2024-07-12 17:14:03.150426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-07-12 17:14:03.150457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-07-12 17:14:03.150560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-07-12 17:14:03.150590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-07-12 17:14:03.150723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-07-12 17:14:03.150779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-07-12 17:14:03.150911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-07-12 17:14:03.150942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-07-12 17:14:03.151095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-07-12 17:14:03.151126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-07-12 17:14:03.151263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-07-12 17:14:03.151294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-07-12 17:14:03.151405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-07-12 17:14:03.151436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-07-12 17:14:03.151594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-07-12 17:14:03.151624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-07-12 17:14:03.151722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-07-12 17:14:03.151777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 
00:25:03.736 [2024-07-12 17:14:03.151931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-07-12 17:14:03.151960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-07-12 17:14:03.152069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-07-12 17:14:03.152101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-07-12 17:14:03.152223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-07-12 17:14:03.152255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-07-12 17:14:03.152413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-07-12 17:14:03.152445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-07-12 17:14:03.152544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-07-12 17:14:03.152576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-07-12 17:14:03.152704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-07-12 17:14:03.152744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-07-12 17:14:03.152893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-07-12 17:14:03.152922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-07-12 17:14:03.153058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-07-12 17:14:03.153090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-07-12 17:14:03.153194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-07-12 17:14:03.153226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-07-12 17:14:03.153329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-07-12 17:14:03.153361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 
00:25:03.736 [2024-07-12 17:14:03.153491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-07-12 17:14:03.153523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-07-12 17:14:03.153653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-07-12 17:14:03.153685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-07-12 17:14:03.153831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-07-12 17:14:03.153862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-07-12 17:14:03.153965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-07-12 17:14:03.153994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-07-12 17:14:03.154123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-07-12 17:14:03.154164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-07-12 17:14:03.154333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-07-12 17:14:03.154365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-07-12 17:14:03.154466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-07-12 17:14:03.154497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-07-12 17:14:03.154627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-07-12 17:14:03.154660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-07-12 17:14:03.154785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-07-12 17:14:03.154816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-07-12 17:14:03.154918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-07-12 17:14:03.154948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 
00:25:03.736 [2024-07-12 17:14:03.155111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-07-12 17:14:03.155161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-07-12 17:14:03.155310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-07-12 17:14:03.155344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-07-12 17:14:03.155474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-07-12 17:14:03.155507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-07-12 17:14:03.155641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-07-12 17:14:03.155673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-07-12 17:14:03.155814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-07-12 17:14:03.155846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-07-12 17:14:03.155971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-07-12 17:14:03.156001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-07-12 17:14:03.156205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-07-12 17:14:03.156262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-07-12 17:14:03.156411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.736 [2024-07-12 17:14:03.156465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.736 qpair failed and we were unable to recover it. 00:25:03.736 [2024-07-12 17:14:03.156601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-07-12 17:14:03.156634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 00:25:03.737 [2024-07-12 17:14:03.156754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-07-12 17:14:03.156800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 
00:25:03.737 [2024-07-12 17:14:03.156931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-07-12 17:14:03.156961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 00:25:03.737 [2024-07-12 17:14:03.157152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-07-12 17:14:03.157223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 00:25:03.737 [2024-07-12 17:14:03.157343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-07-12 17:14:03.157375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 00:25:03.737 [2024-07-12 17:14:03.157509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-07-12 17:14:03.157541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 00:25:03.737 [2024-07-12 17:14:03.157677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-07-12 17:14:03.157709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 00:25:03.737 [2024-07-12 17:14:03.157887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-07-12 17:14:03.157933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 00:25:03.737 [2024-07-12 17:14:03.158071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-07-12 17:14:03.158105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 00:25:03.737 [2024-07-12 17:14:03.158214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-07-12 17:14:03.158246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 00:25:03.737 [2024-07-12 17:14:03.158352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-07-12 17:14:03.158384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 00:25:03.737 [2024-07-12 17:14:03.158490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-07-12 17:14:03.158522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 
00:25:03.737 [2024-07-12 17:14:03.158650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-07-12 17:14:03.158682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 00:25:03.737 [2024-07-12 17:14:03.158818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-07-12 17:14:03.158850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 00:25:03.737 [2024-07-12 17:14:03.159009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-07-12 17:14:03.159056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 00:25:03.737 [2024-07-12 17:14:03.159200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-07-12 17:14:03.159253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 00:25:03.737 [2024-07-12 17:14:03.159415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-07-12 17:14:03.159468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 00:25:03.737 [2024-07-12 17:14:03.159605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-07-12 17:14:03.159637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 00:25:03.737 [2024-07-12 17:14:03.159768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-07-12 17:14:03.159815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 00:25:03.737 [2024-07-12 17:14:03.159947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-07-12 17:14:03.159977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 00:25:03.737 [2024-07-12 17:14:03.160089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-07-12 17:14:03.160182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 00:25:03.737 [2024-07-12 17:14:03.160298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-07-12 17:14:03.160367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 
00:25:03.737 [2024-07-12 17:14:03.160525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-07-12 17:14:03.160557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 00:25:03.737 [2024-07-12 17:14:03.160662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-07-12 17:14:03.160694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 00:25:03.737 [2024-07-12 17:14:03.160839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-07-12 17:14:03.160869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 00:25:03.737 [2024-07-12 17:14:03.160988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-07-12 17:14:03.161034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 00:25:03.737 [2024-07-12 17:14:03.161163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-07-12 17:14:03.161195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 00:25:03.737 [2024-07-12 17:14:03.161300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-07-12 17:14:03.161332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 00:25:03.737 [2024-07-12 17:14:03.161462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-07-12 17:14:03.161494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 00:25:03.737 [2024-07-12 17:14:03.161628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-07-12 17:14:03.161660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 00:25:03.737 [2024-07-12 17:14:03.161795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-07-12 17:14:03.161825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 00:25:03.737 [2024-07-12 17:14:03.161979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-07-12 17:14:03.162008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 
00:25:03.737 [2024-07-12 17:14:03.162151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.737 [2024-07-12 17:14:03.162188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.737 qpair failed and we were unable to recover it. 00:25:03.738 [2024-07-12 17:14:03.162295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.738 [2024-07-12 17:14:03.162327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.738 qpair failed and we were unable to recover it. 00:25:03.738 [2024-07-12 17:14:03.162427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.738 [2024-07-12 17:14:03.162459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.738 qpair failed and we were unable to recover it. 00:25:03.738 [2024-07-12 17:14:03.162618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.738 [2024-07-12 17:14:03.162651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.738 qpair failed and we were unable to recover it. 00:25:03.738 [2024-07-12 17:14:03.162780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.738 [2024-07-12 17:14:03.162810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.738 qpair failed and we were unable to recover it. 00:25:03.738 [2024-07-12 17:14:03.162968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.738 [2024-07-12 17:14:03.162997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.738 qpair failed and we were unable to recover it. 00:25:03.738 [2024-07-12 17:14:03.163165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.738 [2024-07-12 17:14:03.163221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.738 qpair failed and we were unable to recover it. 00:25:03.738 [2024-07-12 17:14:03.163355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.738 [2024-07-12 17:14:03.163387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.738 qpair failed and we were unable to recover it. 00:25:03.738 [2024-07-12 17:14:03.163545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.738 [2024-07-12 17:14:03.163577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.738 qpair failed and we were unable to recover it. 00:25:03.738 [2024-07-12 17:14:03.163780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.738 [2024-07-12 17:14:03.163811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.738 qpair failed and we were unable to recover it. 
00:25:03.738 [2024-07-12 17:14:03.163933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.738 [2024-07-12 17:14:03.163963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.738 qpair failed and we were unable to recover it. 00:25:03.738 [2024-07-12 17:14:03.164139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.738 [2024-07-12 17:14:03.164194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.738 qpair failed and we were unable to recover it. 00:25:03.738 [2024-07-12 17:14:03.164324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.738 [2024-07-12 17:14:03.164356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.738 qpair failed and we were unable to recover it. 00:25:03.738 [2024-07-12 17:14:03.164483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.738 [2024-07-12 17:14:03.164515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.738 qpair failed and we were unable to recover it. 00:25:03.738 [2024-07-12 17:14:03.164653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.738 [2024-07-12 17:14:03.164685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.738 qpair failed and we were unable to recover it. 00:25:03.738 [2024-07-12 17:14:03.164837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.738 [2024-07-12 17:14:03.164867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.738 qpair failed and we were unable to recover it. 00:25:03.738 [2024-07-12 17:14:03.164994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.738 [2024-07-12 17:14:03.165024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.738 qpair failed and we were unable to recover it. 00:25:03.738 [2024-07-12 17:14:03.165139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.738 [2024-07-12 17:14:03.165171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.738 qpair failed and we were unable to recover it. 00:25:03.738 [2024-07-12 17:14:03.165338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.738 [2024-07-12 17:14:03.165370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.738 qpair failed and we were unable to recover it. 00:25:03.738 [2024-07-12 17:14:03.165500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.738 [2024-07-12 17:14:03.165533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.738 qpair failed and we were unable to recover it. 
00:25:03.738 [2024-07-12 17:14:03.165692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.738 [2024-07-12 17:14:03.165724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.738 qpair failed and we were unable to recover it. 00:25:03.738 [2024-07-12 17:14:03.165879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.738 [2024-07-12 17:14:03.165910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.738 qpair failed and we were unable to recover it. 00:25:03.738 [2024-07-12 17:14:03.166058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.738 [2024-07-12 17:14:03.166090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.738 qpair failed and we were unable to recover it. 00:25:03.738 [2024-07-12 17:14:03.166219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.738 [2024-07-12 17:14:03.166251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.738 qpair failed and we were unable to recover it. 00:25:03.738 [2024-07-12 17:14:03.166384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.738 [2024-07-12 17:14:03.166416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.738 qpair failed and we were unable to recover it. 00:25:03.738 [2024-07-12 17:14:03.166552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.738 [2024-07-12 17:14:03.166584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.738 qpair failed and we were unable to recover it. 00:25:03.738 [2024-07-12 17:14:03.166720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.738 [2024-07-12 17:14:03.166783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.738 qpair failed and we were unable to recover it. 00:25:03.738 [2024-07-12 17:14:03.166891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.738 [2024-07-12 17:14:03.166921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.738 qpair failed and we were unable to recover it. 00:25:03.738 [2024-07-12 17:14:03.167070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.738 [2024-07-12 17:14:03.167102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.738 qpair failed and we were unable to recover it. 00:25:03.738 [2024-07-12 17:14:03.167235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.738 [2024-07-12 17:14:03.167267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.738 qpair failed and we were unable to recover it. 
00:25:03.739 [2024-07-12 17:14:03.167368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.739 [2024-07-12 17:14:03.167400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.739 qpair failed and we were unable to recover it. 00:25:03.739 [2024-07-12 17:14:03.167564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.739 [2024-07-12 17:14:03.167596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.739 qpair failed and we were unable to recover it. 00:25:03.739 [2024-07-12 17:14:03.167703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.739 [2024-07-12 17:14:03.167735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.739 qpair failed and we were unable to recover it. 00:25:03.739 [2024-07-12 17:14:03.167873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.739 [2024-07-12 17:14:03.167903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.739 qpair failed and we were unable to recover it. 00:25:03.739 [2024-07-12 17:14:03.168037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.739 [2024-07-12 17:14:03.168069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.739 qpair failed and we were unable to recover it. 00:25:03.739 [2024-07-12 17:14:03.168194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.739 [2024-07-12 17:14:03.168225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.739 qpair failed and we were unable to recover it. 00:25:03.739 [2024-07-12 17:14:03.168335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.739 [2024-07-12 17:14:03.168367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.739 qpair failed and we were unable to recover it. 00:25:03.739 [2024-07-12 17:14:03.168499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.739 [2024-07-12 17:14:03.168531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.739 qpair failed and we were unable to recover it. 00:25:03.739 [2024-07-12 17:14:03.168664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.739 [2024-07-12 17:14:03.168696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.739 qpair failed and we were unable to recover it. 00:25:03.739 [2024-07-12 17:14:03.168843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.739 [2024-07-12 17:14:03.168874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.739 qpair failed and we were unable to recover it. 
00:25:03.739 [2024-07-12 17:14:03.169002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.739 [2024-07-12 17:14:03.169057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.739 qpair failed and we were unable to recover it. 00:25:03.739 [2024-07-12 17:14:03.169192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.739 [2024-07-12 17:14:03.169224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.739 qpair failed and we were unable to recover it. 00:25:03.739 [2024-07-12 17:14:03.169349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.739 [2024-07-12 17:14:03.169381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.739 qpair failed and we were unable to recover it. 00:25:03.739 [2024-07-12 17:14:03.169507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.739 [2024-07-12 17:14:03.169539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.739 qpair failed and we were unable to recover it. 00:25:03.739 [2024-07-12 17:14:03.169668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.739 [2024-07-12 17:14:03.169700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.739 qpair failed and we were unable to recover it. 00:25:03.739 [2024-07-12 17:14:03.169876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.739 [2024-07-12 17:14:03.169907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.739 qpair failed and we were unable to recover it. 00:25:03.739 [2024-07-12 17:14:03.170047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.739 [2024-07-12 17:14:03.170079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.739 qpair failed and we were unable to recover it. 00:25:03.739 [2024-07-12 17:14:03.170207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.739 [2024-07-12 17:14:03.170239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.739 qpair failed and we were unable to recover it. 00:25:03.739 [2024-07-12 17:14:03.170388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.739 [2024-07-12 17:14:03.170420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.739 qpair failed and we were unable to recover it. 00:25:03.739 [2024-07-12 17:14:03.170536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.739 [2024-07-12 17:14:03.170568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.739 qpair failed and we were unable to recover it. 
00:25:03.739 [2024-07-12 17:14:03.170694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.739 [2024-07-12 17:14:03.170726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.739 qpair failed and we were unable to recover it. 00:25:03.739 [2024-07-12 17:14:03.170865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.739 [2024-07-12 17:14:03.170894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.739 qpair failed and we were unable to recover it. 00:25:03.739 [2024-07-12 17:14:03.170999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.739 [2024-07-12 17:14:03.171044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.739 qpair failed and we were unable to recover it. 00:25:03.739 [2024-07-12 17:14:03.171171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.739 [2024-07-12 17:14:03.171204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.739 qpair failed and we were unable to recover it. 00:25:03.739 [2024-07-12 17:14:03.171319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.739 [2024-07-12 17:14:03.171351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.739 qpair failed and we were unable to recover it. 00:25:03.739 [2024-07-12 17:14:03.171482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.739 [2024-07-12 17:14:03.171514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.739 qpair failed and we were unable to recover it. 00:25:03.739 [2024-07-12 17:14:03.171617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.739 [2024-07-12 17:14:03.171649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.739 qpair failed and we were unable to recover it. 00:25:03.739 [2024-07-12 17:14:03.171763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.739 [2024-07-12 17:14:03.171809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.739 qpair failed and we were unable to recover it. 00:25:03.739 [2024-07-12 17:14:03.171960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.739 [2024-07-12 17:14:03.171990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.739 qpair failed and we were unable to recover it. 00:25:03.739 [2024-07-12 17:14:03.172136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.739 [2024-07-12 17:14:03.172168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.739 qpair failed and we were unable to recover it. 
00:25:03.739 [2024-07-12 17:14:03.172297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.739 [2024-07-12 17:14:03.172329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.739 qpair failed and we were unable to recover it. 00:25:03.739 [2024-07-12 17:14:03.172431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.739 [2024-07-12 17:14:03.172463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.739 qpair failed and we were unable to recover it. 00:25:03.739 [2024-07-12 17:14:03.172573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.739 [2024-07-12 17:14:03.172605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.739 qpair failed and we were unable to recover it. 00:25:03.739 [2024-07-12 17:14:03.172744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.739 [2024-07-12 17:14:03.172791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.739 qpair failed and we were unable to recover it. 00:25:03.739 [2024-07-12 17:14:03.172927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.739 [2024-07-12 17:14:03.172957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.739 qpair failed and we were unable to recover it. 00:25:03.739 [2024-07-12 17:14:03.173069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.739 [2024-07-12 17:14:03.173101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.739 qpair failed and we were unable to recover it. 00:25:03.739 [2024-07-12 17:14:03.173244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.739 [2024-07-12 17:14:03.173276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.739 qpair failed and we were unable to recover it. 00:25:03.739 [2024-07-12 17:14:03.173381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.739 [2024-07-12 17:14:03.173414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.740 qpair failed and we were unable to recover it. 00:25:03.740 [2024-07-12 17:14:03.173554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.740 [2024-07-12 17:14:03.173586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.740 qpair failed and we were unable to recover it. 00:25:03.740 [2024-07-12 17:14:03.173752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.740 [2024-07-12 17:14:03.173799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.740 qpair failed and we were unable to recover it. 
00:25:03.740 [2024-07-12 17:14:03.173930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.740 [2024-07-12 17:14:03.173959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.740 qpair failed and we were unable to recover it. 00:25:03.740 [2024-07-12 17:14:03.174115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.740 [2024-07-12 17:14:03.174147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.740 qpair failed and we were unable to recover it. 00:25:03.740 [2024-07-12 17:14:03.174261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.740 [2024-07-12 17:14:03.174293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.740 qpair failed and we were unable to recover it. 00:25:03.740 [2024-07-12 17:14:03.174428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.740 [2024-07-12 17:14:03.174460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.740 qpair failed and we were unable to recover it. 00:25:03.740 [2024-07-12 17:14:03.174590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.740 [2024-07-12 17:14:03.174622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.740 qpair failed and we were unable to recover it. 00:25:03.740 [2024-07-12 17:14:03.174721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.740 [2024-07-12 17:14:03.174762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.740 qpair failed and we were unable to recover it. 00:25:03.740 [2024-07-12 17:14:03.174904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.740 [2024-07-12 17:14:03.174934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.740 qpair failed and we were unable to recover it. 00:25:03.740 [2024-07-12 17:14:03.175069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.740 [2024-07-12 17:14:03.175102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.740 qpair failed and we were unable to recover it. 00:25:03.740 [2024-07-12 17:14:03.175202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.740 [2024-07-12 17:14:03.175235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.740 qpair failed and we were unable to recover it. 00:25:03.740 [2024-07-12 17:14:03.175364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.740 [2024-07-12 17:14:03.175396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.740 qpair failed and we were unable to recover it. 
00:25:03.740 [2024-07-12 17:14:03.175532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.740 [2024-07-12 17:14:03.175568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.740 qpair failed and we were unable to recover it. 00:25:03.740 [2024-07-12 17:14:03.175706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.740 [2024-07-12 17:14:03.175747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.740 qpair failed and we were unable to recover it. 00:25:03.740 [2024-07-12 17:14:03.175910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.740 [2024-07-12 17:14:03.175943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.740 qpair failed and we were unable to recover it. 00:25:03.740 [2024-07-12 17:14:03.176077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.740 [2024-07-12 17:14:03.176109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.740 qpair failed and we were unable to recover it. 00:25:03.740 [2024-07-12 17:14:03.176271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.740 [2024-07-12 17:14:03.176303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.740 qpair failed and we were unable to recover it. 00:25:03.740 [2024-07-12 17:14:03.176430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.740 [2024-07-12 17:14:03.176462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.740 qpair failed and we were unable to recover it. 00:25:03.740 [2024-07-12 17:14:03.176595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.740 [2024-07-12 17:14:03.176627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.740 qpair failed and we were unable to recover it. 00:25:03.740 [2024-07-12 17:14:03.176726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.740 [2024-07-12 17:14:03.176768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.740 qpair failed and we were unable to recover it. 00:25:03.740 [2024-07-12 17:14:03.176900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.740 [2024-07-12 17:14:03.176933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.740 qpair failed and we were unable to recover it. 00:25:03.740 [2024-07-12 17:14:03.177091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.740 [2024-07-12 17:14:03.177124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.740 qpair failed and we were unable to recover it. 
00:25:03.740 [2024-07-12 17:14:03.177260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.740 [2024-07-12 17:14:03.177292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.740 qpair failed and we were unable to recover it. 00:25:03.740 [2024-07-12 17:14:03.177399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.740 [2024-07-12 17:14:03.177431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.740 qpair failed and we were unable to recover it. 00:25:03.740 [2024-07-12 17:14:03.177534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.740 [2024-07-12 17:14:03.177566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.740 qpair failed and we were unable to recover it. 00:25:03.740 [2024-07-12 17:14:03.177696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.740 [2024-07-12 17:14:03.177729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.740 qpair failed and we were unable to recover it. 00:25:03.740 [2024-07-12 17:14:03.177887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.740 [2024-07-12 17:14:03.177920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.740 qpair failed and we were unable to recover it. 00:25:03.740 [2024-07-12 17:14:03.178058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.740 [2024-07-12 17:14:03.178089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.740 qpair failed and we were unable to recover it. 00:25:03.740 [2024-07-12 17:14:03.178216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.740 [2024-07-12 17:14:03.178248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.740 qpair failed and we were unable to recover it. 00:25:03.740 [2024-07-12 17:14:03.178382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.740 [2024-07-12 17:14:03.178414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.740 qpair failed and we were unable to recover it. 00:25:03.740 [2024-07-12 17:14:03.178548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.740 [2024-07-12 17:14:03.178580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.740 qpair failed and we were unable to recover it. 00:25:03.740 [2024-07-12 17:14:03.178715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.740 [2024-07-12 17:14:03.178768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.740 qpair failed and we were unable to recover it. 
00:25:03.740 [2024-07-12 17:14:03.178906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.740 [2024-07-12 17:14:03.178939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.740 qpair failed and we were unable to recover it. 00:25:03.740 [2024-07-12 17:14:03.179049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.740 [2024-07-12 17:14:03.179082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.740 qpair failed and we were unable to recover it. 00:25:03.741 [2024-07-12 17:14:03.179218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.741 [2024-07-12 17:14:03.179250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.741 qpair failed and we were unable to recover it. 00:25:03.741 [2024-07-12 17:14:03.179357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.741 [2024-07-12 17:14:03.179389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.741 qpair failed and we were unable to recover it. 00:25:03.741 [2024-07-12 17:14:03.179493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.741 [2024-07-12 17:14:03.179526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.741 qpair failed and we were unable to recover it. 00:25:03.741 [2024-07-12 17:14:03.179659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.741 [2024-07-12 17:14:03.179691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.741 qpair failed and we were unable to recover it. 00:25:03.741 [2024-07-12 17:14:03.179829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.741 [2024-07-12 17:14:03.179862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.741 qpair failed and we were unable to recover it. 00:25:03.741 [2024-07-12 17:14:03.179966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.741 [2024-07-12 17:14:03.179998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.741 qpair failed and we were unable to recover it. 00:25:03.741 [2024-07-12 17:14:03.180132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.741 [2024-07-12 17:14:03.180164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.741 qpair failed and we were unable to recover it. 00:25:03.741 [2024-07-12 17:14:03.180273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.741 [2024-07-12 17:14:03.180305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.741 qpair failed and we were unable to recover it. 
00:25:03.741 [2024-07-12 17:14:03.180459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.741 [2024-07-12 17:14:03.180491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.741 qpair failed and we were unable to recover it. 00:25:03.741 [2024-07-12 17:14:03.180629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.741 [2024-07-12 17:14:03.180661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.741 qpair failed and we were unable to recover it. 00:25:03.741 [2024-07-12 17:14:03.180793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.741 [2024-07-12 17:14:03.180827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.741 qpair failed and we were unable to recover it. 00:25:03.741 [2024-07-12 17:14:03.180988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.741 [2024-07-12 17:14:03.181020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.741 qpair failed and we were unable to recover it. 00:25:03.741 [2024-07-12 17:14:03.181146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.741 [2024-07-12 17:14:03.181178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.741 qpair failed and we were unable to recover it. 00:25:03.741 [2024-07-12 17:14:03.181282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.741 [2024-07-12 17:14:03.181314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.741 qpair failed and we were unable to recover it. 00:25:03.741 [2024-07-12 17:14:03.181419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.741 [2024-07-12 17:14:03.181451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.741 qpair failed and we were unable to recover it. 00:25:03.741 [2024-07-12 17:14:03.181583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.741 [2024-07-12 17:14:03.181616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.741 qpair failed and we were unable to recover it. 00:25:03.741 [2024-07-12 17:14:03.181752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.741 [2024-07-12 17:14:03.181785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.741 qpair failed and we were unable to recover it. 00:25:03.741 [2024-07-12 17:14:03.181888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.741 [2024-07-12 17:14:03.181920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.741 qpair failed and we were unable to recover it. 
00:25:03.741 [2024-07-12 17:14:03.182026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.741 [2024-07-12 17:14:03.182062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.741 qpair failed and we were unable to recover it. 00:25:03.741 [2024-07-12 17:14:03.182223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.741 [2024-07-12 17:14:03.182255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.741 qpair failed and we were unable to recover it. 00:25:03.741 [2024-07-12 17:14:03.182388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.741 [2024-07-12 17:14:03.182421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.741 qpair failed and we were unable to recover it. 00:25:03.741 [2024-07-12 17:14:03.182561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.741 [2024-07-12 17:14:03.182593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.741 qpair failed and we were unable to recover it. 00:25:03.741 [2024-07-12 17:14:03.182726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.741 [2024-07-12 17:14:03.182773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.741 qpair failed and we were unable to recover it. 00:25:03.741 [2024-07-12 17:14:03.182912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.741 [2024-07-12 17:14:03.182944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.741 qpair failed and we were unable to recover it. 00:25:03.741 [2024-07-12 17:14:03.183099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.741 [2024-07-12 17:14:03.183131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.741 qpair failed and we were unable to recover it. 00:25:03.741 [2024-07-12 17:14:03.183265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.741 [2024-07-12 17:14:03.183297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.741 qpair failed and we were unable to recover it. 00:25:03.741 [2024-07-12 17:14:03.183427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.741 [2024-07-12 17:14:03.183459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.741 qpair failed and we were unable to recover it. 00:25:03.741 [2024-07-12 17:14:03.183587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.741 [2024-07-12 17:14:03.183619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.741 qpair failed and we were unable to recover it. 
00:25:03.741 [2024-07-12 17:14:03.183753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.741 [2024-07-12 17:14:03.183786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.741 qpair failed and we were unable to recover it. 00:25:03.741 [2024-07-12 17:14:03.183917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.741 [2024-07-12 17:14:03.183949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.741 qpair failed and we were unable to recover it. 00:25:03.741 [2024-07-12 17:14:03.184087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.741 [2024-07-12 17:14:03.184119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.741 qpair failed and we were unable to recover it. 00:25:03.741 [2024-07-12 17:14:03.184273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.741 [2024-07-12 17:14:03.184306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.741 qpair failed and we were unable to recover it. 00:25:03.741 [2024-07-12 17:14:03.184413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.741 [2024-07-12 17:14:03.184445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.741 qpair failed and we were unable to recover it. 00:25:03.741 [2024-07-12 17:14:03.184609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.741 [2024-07-12 17:14:03.184641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.741 qpair failed and we were unable to recover it. 00:25:03.741 [2024-07-12 17:14:03.184770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.741 [2024-07-12 17:14:03.184804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.742 qpair failed and we were unable to recover it. 00:25:03.742 [2024-07-12 17:14:03.184940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.742 [2024-07-12 17:14:03.184972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.742 qpair failed and we were unable to recover it. 00:25:03.742 [2024-07-12 17:14:03.185101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.742 [2024-07-12 17:14:03.185162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.742 qpair failed and we were unable to recover it. 00:25:03.742 [2024-07-12 17:14:03.185300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.742 [2024-07-12 17:14:03.185332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.742 qpair failed and we were unable to recover it. 
00:25:03.742 [2024-07-12 17:14:03.185493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.742 [2024-07-12 17:14:03.185525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.742 qpair failed and we were unable to recover it. 00:25:03.742 [2024-07-12 17:14:03.185664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.742 [2024-07-12 17:14:03.185696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.742 qpair failed and we were unable to recover it. 00:25:03.742 [2024-07-12 17:14:03.185846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.742 [2024-07-12 17:14:03.185909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.742 qpair failed and we were unable to recover it. 00:25:03.742 [2024-07-12 17:14:03.186013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.742 [2024-07-12 17:14:03.186045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.742 qpair failed and we were unable to recover it. 00:25:03.742 [2024-07-12 17:14:03.186179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.742 [2024-07-12 17:14:03.186212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.742 qpair failed and we were unable to recover it. 00:25:03.742 [2024-07-12 17:14:03.186321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.742 [2024-07-12 17:14:03.186353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.742 qpair failed and we were unable to recover it. 00:25:03.742 [2024-07-12 17:14:03.186491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.742 [2024-07-12 17:14:03.186522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.742 qpair failed and we were unable to recover it. 00:25:03.742 [2024-07-12 17:14:03.186656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.742 [2024-07-12 17:14:03.186688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.742 qpair failed and we were unable to recover it. 00:25:03.742 [2024-07-12 17:14:03.186827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.742 [2024-07-12 17:14:03.186860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.742 qpair failed and we were unable to recover it. 00:25:03.742 [2024-07-12 17:14:03.187016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.742 [2024-07-12 17:14:03.187048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.742 qpair failed and we were unable to recover it. 
00:25:03.747 [2024-07-12 17:14:03.220786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.747 [2024-07-12 17:14:03.220819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.747 qpair failed and we were unable to recover it. 00:25:03.747 [2024-07-12 17:14:03.220980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.747 [2024-07-12 17:14:03.221013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.747 qpair failed and we were unable to recover it. 00:25:03.747 [2024-07-12 17:14:03.221160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.747 [2024-07-12 17:14:03.221213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.747 qpair failed and we were unable to recover it. 00:25:03.747 [2024-07-12 17:14:03.221342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.747 [2024-07-12 17:14:03.221374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.747 qpair failed and we were unable to recover it. 00:25:03.747 [2024-07-12 17:14:03.221529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.747 [2024-07-12 17:14:03.221561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.747 qpair failed and we were unable to recover it. 00:25:03.747 [2024-07-12 17:14:03.221660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.747 [2024-07-12 17:14:03.221691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.747 qpair failed and we were unable to recover it. 00:25:03.747 [2024-07-12 17:14:03.221840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.747 [2024-07-12 17:14:03.221873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.747 qpair failed and we were unable to recover it. 00:25:03.747 [2024-07-12 17:14:03.222010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.747 [2024-07-12 17:14:03.222042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.747 qpair failed and we were unable to recover it. 00:25:03.747 [2024-07-12 17:14:03.222175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.747 [2024-07-12 17:14:03.222207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.747 qpair failed and we were unable to recover it. 00:25:03.747 [2024-07-12 17:14:03.222394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.747 [2024-07-12 17:14:03.222426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.747 qpair failed and we were unable to recover it. 
00:25:03.747 [2024-07-12 17:14:03.222555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.747 [2024-07-12 17:14:03.222587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.747 qpair failed and we were unable to recover it. 00:25:03.747 [2024-07-12 17:14:03.222839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.747 [2024-07-12 17:14:03.222872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.747 qpair failed and we were unable to recover it. 00:25:03.747 [2024-07-12 17:14:03.223008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.747 [2024-07-12 17:14:03.223045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.747 qpair failed and we were unable to recover it. 00:25:03.747 [2024-07-12 17:14:03.223146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.747 [2024-07-12 17:14:03.223178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.747 qpair failed and we were unable to recover it. 00:25:03.747 [2024-07-12 17:14:03.223320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.747 [2024-07-12 17:14:03.223352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.747 qpair failed and we were unable to recover it. 00:25:03.747 [2024-07-12 17:14:03.223461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-07-12 17:14:03.223494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-07-12 17:14:03.223701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-07-12 17:14:03.223733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-07-12 17:14:03.223896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-07-12 17:14:03.223929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-07-12 17:14:03.224129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-07-12 17:14:03.224161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-07-12 17:14:03.224281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-07-12 17:14:03.224341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 
00:25:03.748 [2024-07-12 17:14:03.224466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-07-12 17:14:03.224498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-07-12 17:14:03.224663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-07-12 17:14:03.224695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-07-12 17:14:03.224847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-07-12 17:14:03.224902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-07-12 17:14:03.225065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-07-12 17:14:03.225116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-07-12 17:14:03.225282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-07-12 17:14:03.225320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-07-12 17:14:03.225457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-07-12 17:14:03.225489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-07-12 17:14:03.225618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-07-12 17:14:03.225650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-07-12 17:14:03.225779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-07-12 17:14:03.225812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-07-12 17:14:03.225938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-07-12 17:14:03.225970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-07-12 17:14:03.226069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-07-12 17:14:03.226101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 
00:25:03.748 [2024-07-12 17:14:03.226233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-07-12 17:14:03.226265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-07-12 17:14:03.226461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-07-12 17:14:03.226504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-07-12 17:14:03.226671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-07-12 17:14:03.226703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-07-12 17:14:03.226858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-07-12 17:14:03.226891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-07-12 17:14:03.226998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-07-12 17:14:03.227030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-07-12 17:14:03.227181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-07-12 17:14:03.227213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-07-12 17:14:03.227376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-07-12 17:14:03.227408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-07-12 17:14:03.227585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-07-12 17:14:03.227621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-07-12 17:14:03.227735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-07-12 17:14:03.227777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-07-12 17:14:03.227947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-07-12 17:14:03.228004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 
00:25:03.748 [2024-07-12 17:14:03.228156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-07-12 17:14:03.228208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-07-12 17:14:03.228386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-07-12 17:14:03.228422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-07-12 17:14:03.228589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-07-12 17:14:03.228620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-07-12 17:14:03.228834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-07-12 17:14:03.228890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-07-12 17:14:03.229047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-07-12 17:14:03.229103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-07-12 17:14:03.229205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-07-12 17:14:03.229237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-07-12 17:14:03.229393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-07-12 17:14:03.229425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-07-12 17:14:03.229565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-07-12 17:14:03.229598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-07-12 17:14:03.229752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-07-12 17:14:03.229793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-07-12 17:14:03.229958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-07-12 17:14:03.229990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 
00:25:03.748 [2024-07-12 17:14:03.230174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-07-12 17:14:03.230207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-07-12 17:14:03.230331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-07-12 17:14:03.230363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-07-12 17:14:03.230471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-07-12 17:14:03.230503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-07-12 17:14:03.230666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-07-12 17:14:03.230698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.748 qpair failed and we were unable to recover it. 00:25:03.748 [2024-07-12 17:14:03.230869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.748 [2024-07-12 17:14:03.230905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-07-12 17:14:03.231040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-07-12 17:14:03.231072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-07-12 17:14:03.231236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-07-12 17:14:03.231269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-07-12 17:14:03.231469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-07-12 17:14:03.231500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-07-12 17:14:03.231644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-07-12 17:14:03.231676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-07-12 17:14:03.231803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-07-12 17:14:03.231836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 
00:25:03.749 [2024-07-12 17:14:03.231978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-07-12 17:14:03.232040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-07-12 17:14:03.232195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-07-12 17:14:03.232227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-07-12 17:14:03.232353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-07-12 17:14:03.232385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-07-12 17:14:03.232540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-07-12 17:14:03.232573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-07-12 17:14:03.232800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-07-12 17:14:03.232837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-07-12 17:14:03.233002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-07-12 17:14:03.233034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-07-12 17:14:03.233210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-07-12 17:14:03.233247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-07-12 17:14:03.233385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-07-12 17:14:03.233417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-07-12 17:14:03.233607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-07-12 17:14:03.233638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-07-12 17:14:03.233823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-07-12 17:14:03.233881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 
00:25:03.749 [2024-07-12 17:14:03.234116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-07-12 17:14:03.234171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-07-12 17:14:03.234320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-07-12 17:14:03.234373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-07-12 17:14:03.234525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-07-12 17:14:03.234557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-07-12 17:14:03.234722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-07-12 17:14:03.234762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-07-12 17:14:03.234911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-07-12 17:14:03.234944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-07-12 17:14:03.235116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-07-12 17:14:03.235148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-07-12 17:14:03.235347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-07-12 17:14:03.235378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-07-12 17:14:03.235492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-07-12 17:14:03.235529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-07-12 17:14:03.235693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-07-12 17:14:03.235725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-07-12 17:14:03.235839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-07-12 17:14:03.235872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 
00:25:03.749 [2024-07-12 17:14:03.236026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-07-12 17:14:03.236087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-07-12 17:14:03.236322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-07-12 17:14:03.236373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-07-12 17:14:03.236563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-07-12 17:14:03.236605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-07-12 17:14:03.236796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-07-12 17:14:03.236860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-07-12 17:14:03.237009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-07-12 17:14:03.237072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-07-12 17:14:03.237194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-07-12 17:14:03.237255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-07-12 17:14:03.237496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-07-12 17:14:03.237528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-07-12 17:14:03.237680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-07-12 17:14:03.237713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-07-12 17:14:03.237881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-07-12 17:14:03.237932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-07-12 17:14:03.238145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-07-12 17:14:03.238184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 
00:25:03.749 [2024-07-12 17:14:03.238324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-07-12 17:14:03.238356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-07-12 17:14:03.238521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-07-12 17:14:03.238563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-07-12 17:14:03.238697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-07-12 17:14:03.238730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-07-12 17:14:03.238918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-07-12 17:14:03.238951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.749 [2024-07-12 17:14:03.239099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.749 [2024-07-12 17:14:03.239131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.749 qpair failed and we were unable to recover it. 00:25:03.750 [2024-07-12 17:14:03.239263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-07-12 17:14:03.239295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-07-12 17:14:03.239445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-07-12 17:14:03.239477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-07-12 17:14:03.239681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-07-12 17:14:03.239713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-07-12 17:14:03.239895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-07-12 17:14:03.239927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-07-12 17:14:03.240052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-07-12 17:14:03.240084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 
00:25:03.750 [2024-07-12 17:14:03.240263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-07-12 17:14:03.240295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-07-12 17:14:03.240444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-07-12 17:14:03.240487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-07-12 17:14:03.240628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-07-12 17:14:03.240660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-07-12 17:14:03.240796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-07-12 17:14:03.240861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-07-12 17:14:03.241089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-07-12 17:14:03.241145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-07-12 17:14:03.241287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-07-12 17:14:03.241339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-07-12 17:14:03.241505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-07-12 17:14:03.241537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-07-12 17:14:03.241680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-07-12 17:14:03.241712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-07-12 17:14:03.241915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-07-12 17:14:03.241975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-07-12 17:14:03.242102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-07-12 17:14:03.242171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 
00:25:03.750 [2024-07-12 17:14:03.242314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-07-12 17:14:03.242366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-07-12 17:14:03.242566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-07-12 17:14:03.242603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-07-12 17:14:03.242770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-07-12 17:14:03.242803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-07-12 17:14:03.242926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-07-12 17:14:03.242988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-07-12 17:14:03.243142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-07-12 17:14:03.243197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-07-12 17:14:03.243337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-07-12 17:14:03.243369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-07-12 17:14:03.243560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-07-12 17:14:03.243592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-07-12 17:14:03.243753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-07-12 17:14:03.243800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-07-12 17:14:03.243986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-07-12 17:14:03.244042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-07-12 17:14:03.244225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-07-12 17:14:03.244279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 
00:25:03.750 [2024-07-12 17:14:03.244382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-07-12 17:14:03.244413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-07-12 17:14:03.244567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-07-12 17:14:03.244599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-07-12 17:14:03.244760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-07-12 17:14:03.244793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-07-12 17:14:03.244891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-07-12 17:14:03.244935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-07-12 17:14:03.245120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-07-12 17:14:03.245180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-07-12 17:14:03.245323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-07-12 17:14:03.245354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-07-12 17:14:03.245496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-07-12 17:14:03.245528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-07-12 17:14:03.245668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-07-12 17:14:03.245700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-07-12 17:14:03.245849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-07-12 17:14:03.245882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 00:25:03.750 [2024-07-12 17:14:03.246100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.750 [2024-07-12 17:14:03.246136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:03.750 qpair failed and we were unable to recover it. 
00:25:03.750 [2024-07-12 17:14:03.246247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.750 [2024-07-12 17:14:03.246279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420
00:25:03.750 qpair failed and we were unable to recover it.
[the same triplet — posix_sock_create "connect() failed, errno = 111", nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420", "qpair failed and we were unable to recover it." — repeats back-to-back from 17:14:03.246450 through 17:14:03.266870]
00:25:03.753 [2024-07-12 17:14:03.267015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.753 [2024-07-12 17:14:03.267075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420
00:25:03.753 qpair failed and we were unable to recover it.
[the same triplet, now against tqpair=0x7fab6c000b90, repeats back-to-back from 17:14:03.267231 through 17:14:03.292527]
00:25:03.756 [2024-07-12 17:14:03.292754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.756 [2024-07-12 17:14:03.292794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.756 qpair failed and we were unable to recover it. 00:25:03.756 [2024-07-12 17:14:03.292924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.756 [2024-07-12 17:14:03.292949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.756 qpair failed and we were unable to recover it. 00:25:03.756 [2024-07-12 17:14:03.293096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.756 [2024-07-12 17:14:03.293161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.756 qpair failed and we were unable to recover it. 00:25:03.756 [2024-07-12 17:14:03.293399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.756 [2024-07-12 17:14:03.293432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.756 qpair failed and we were unable to recover it. 00:25:03.756 [2024-07-12 17:14:03.293623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.756 [2024-07-12 17:14:03.293689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.756 qpair failed and we were unable to recover it. 00:25:03.756 [2024-07-12 17:14:03.293889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.756 [2024-07-12 17:14:03.293954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.756 qpair failed and we were unable to recover it. 00:25:03.756 [2024-07-12 17:14:03.294164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.756 [2024-07-12 17:14:03.294199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.756 qpair failed and we were unable to recover it. 00:25:03.756 [2024-07-12 17:14:03.294408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.756 [2024-07-12 17:14:03.294473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.756 qpair failed and we were unable to recover it. 00:25:03.756 [2024-07-12 17:14:03.294703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.756 [2024-07-12 17:14:03.294785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.756 qpair failed and we were unable to recover it. 00:25:03.756 [2024-07-12 17:14:03.294978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.756 [2024-07-12 17:14:03.295013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.756 qpair failed and we were unable to recover it. 
00:25:03.756 [2024-07-12 17:14:03.295235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.756 [2024-07-12 17:14:03.295300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.756 qpair failed and we were unable to recover it. 00:25:03.756 [2024-07-12 17:14:03.295535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.756 [2024-07-12 17:14:03.295600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.756 qpair failed and we were unable to recover it. 00:25:03.756 [2024-07-12 17:14:03.295865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.756 [2024-07-12 17:14:03.295901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.756 qpair failed and we were unable to recover it. 00:25:03.756 [2024-07-12 17:14:03.296072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.756 [2024-07-12 17:14:03.296137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.756 qpair failed and we were unable to recover it. 00:25:03.756 [2024-07-12 17:14:03.296323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.756 [2024-07-12 17:14:03.296387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.756 qpair failed and we were unable to recover it. 00:25:03.756 [2024-07-12 17:14:03.296683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.756 [2024-07-12 17:14:03.296718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.756 qpair failed and we were unable to recover it. 00:25:03.756 [2024-07-12 17:14:03.296904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.756 [2024-07-12 17:14:03.296969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.756 qpair failed and we were unable to recover it. 00:25:03.756 [2024-07-12 17:14:03.297257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.756 [2024-07-12 17:14:03.297331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.756 qpair failed and we were unable to recover it. 00:25:03.756 [2024-07-12 17:14:03.297542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.756 [2024-07-12 17:14:03.297578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.756 qpair failed and we were unable to recover it. 00:25:03.756 [2024-07-12 17:14:03.297693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.756 [2024-07-12 17:14:03.297729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.756 qpair failed and we were unable to recover it. 
00:25:03.756 [2024-07-12 17:14:03.297924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.756 [2024-07-12 17:14:03.297989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.756 qpair failed and we were unable to recover it. 00:25:03.756 [2024-07-12 17:14:03.298211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.756 [2024-07-12 17:14:03.298246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.756 qpair failed and we were unable to recover it. 00:25:03.756 [2024-07-12 17:14:03.298389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.756 [2024-07-12 17:14:03.298457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.756 qpair failed and we were unable to recover it. 00:25:03.756 [2024-07-12 17:14:03.298698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.756 [2024-07-12 17:14:03.298774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.756 qpair failed and we were unable to recover it. 00:25:03.756 [2024-07-12 17:14:03.299026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.756 [2024-07-12 17:14:03.299062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.756 qpair failed and we were unable to recover it. 00:25:03.756 [2024-07-12 17:14:03.299250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.756 [2024-07-12 17:14:03.299315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.756 qpair failed and we were unable to recover it. 00:25:03.756 [2024-07-12 17:14:03.299493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.757 [2024-07-12 17:14:03.299559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.757 qpair failed and we were unable to recover it. 00:25:03.757 [2024-07-12 17:14:03.299831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.757 [2024-07-12 17:14:03.299870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.757 qpair failed and we were unable to recover it. 00:25:03.757 [2024-07-12 17:14:03.300080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.757 [2024-07-12 17:14:03.300145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.757 qpair failed and we were unable to recover it. 00:25:03.757 [2024-07-12 17:14:03.300405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.757 [2024-07-12 17:14:03.300469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.757 qpair failed and we were unable to recover it. 
00:25:03.757 [2024-07-12 17:14:03.300712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.757 [2024-07-12 17:14:03.300812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.757 qpair failed and we were unable to recover it. 00:25:03.757 [2024-07-12 17:14:03.301010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.757 [2024-07-12 17:14:03.301082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.757 qpair failed and we were unable to recover it. 00:25:03.757 [2024-07-12 17:14:03.301342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.757 [2024-07-12 17:14:03.301408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.757 qpair failed and we were unable to recover it. 00:25:03.757 [2024-07-12 17:14:03.301648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.757 [2024-07-12 17:14:03.301713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.757 qpair failed and we were unable to recover it. 00:25:03.757 [2024-07-12 17:14:03.301907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.757 [2024-07-12 17:14:03.301944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.757 qpair failed and we were unable to recover it. 00:25:03.757 [2024-07-12 17:14:03.302118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.757 [2024-07-12 17:14:03.302183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.757 qpair failed and we were unable to recover it. 00:25:03.757 [2024-07-12 17:14:03.302381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.757 [2024-07-12 17:14:03.302420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.757 qpair failed and we were unable to recover it. 00:25:03.757 [2024-07-12 17:14:03.302633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.757 [2024-07-12 17:14:03.302697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.757 qpair failed and we were unable to recover it. 00:25:03.757 [2024-07-12 17:14:03.302977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.757 [2024-07-12 17:14:03.303041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.757 qpair failed and we were unable to recover it. 00:25:03.757 [2024-07-12 17:14:03.303267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.757 [2024-07-12 17:14:03.303306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.757 qpair failed and we were unable to recover it. 
00:25:03.757 [2024-07-12 17:14:03.303468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.757 [2024-07-12 17:14:03.303532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.757 qpair failed and we were unable to recover it. 00:25:03.757 [2024-07-12 17:14:03.303815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.757 [2024-07-12 17:14:03.303882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.757 qpair failed and we were unable to recover it. 00:25:03.757 [2024-07-12 17:14:03.304185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.757 [2024-07-12 17:14:03.304227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.757 qpair failed and we were unable to recover it. 00:25:03.757 [2024-07-12 17:14:03.304437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.757 [2024-07-12 17:14:03.304501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.757 qpair failed and we were unable to recover it. 00:25:03.757 [2024-07-12 17:14:03.304762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.757 [2024-07-12 17:14:03.304828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.757 qpair failed and we were unable to recover it. 00:25:03.757 [2024-07-12 17:14:03.305038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.757 [2024-07-12 17:14:03.305080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.757 qpair failed and we were unable to recover it. 00:25:03.757 [2024-07-12 17:14:03.305222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.757 [2024-07-12 17:14:03.305297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.757 qpair failed and we were unable to recover it. 00:25:03.757 [2024-07-12 17:14:03.305515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.757 [2024-07-12 17:14:03.305579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.757 qpair failed and we were unable to recover it. 00:25:03.757 [2024-07-12 17:14:03.305840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.757 [2024-07-12 17:14:03.305882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.757 qpair failed and we were unable to recover it. 00:25:03.757 [2024-07-12 17:14:03.306084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.757 [2024-07-12 17:14:03.306149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.757 qpair failed and we were unable to recover it. 
00:25:03.757 [2024-07-12 17:14:03.306324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.757 [2024-07-12 17:14:03.306389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.757 qpair failed and we were unable to recover it. 00:25:03.757 [2024-07-12 17:14:03.306619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.757 [2024-07-12 17:14:03.306684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.757 qpair failed and we were unable to recover it. 00:25:03.757 [2024-07-12 17:14:03.306944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.757 [2024-07-12 17:14:03.306988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.757 qpair failed and we were unable to recover it. 00:25:03.757 [2024-07-12 17:14:03.307220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.757 [2024-07-12 17:14:03.307285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.757 qpair failed and we were unable to recover it. 00:25:03.757 [2024-07-12 17:14:03.307584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.757 [2024-07-12 17:14:03.307628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.757 qpair failed and we were unable to recover it. 00:25:03.757 [2024-07-12 17:14:03.307885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.757 [2024-07-12 17:14:03.307952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.757 qpair failed and we were unable to recover it. 00:25:03.757 [2024-07-12 17:14:03.308179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.757 [2024-07-12 17:14:03.308245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.757 qpair failed and we were unable to recover it. 00:25:03.757 [2024-07-12 17:14:03.308477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.757 [2024-07-12 17:14:03.308526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.757 qpair failed and we were unable to recover it. 00:25:03.757 [2024-07-12 17:14:03.308808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.757 [2024-07-12 17:14:03.308854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.757 qpair failed and we were unable to recover it. 00:25:03.757 [2024-07-12 17:14:03.309025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.757 [2024-07-12 17:14:03.309103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.757 qpair failed and we were unable to recover it. 
00:25:03.757 [2024-07-12 17:14:03.309352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.757 [2024-07-12 17:14:03.309395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.757 qpair failed and we were unable to recover it. 00:25:03.757 [2024-07-12 17:14:03.309651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.757 [2024-07-12 17:14:03.309721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.757 qpair failed and we were unable to recover it. 00:25:03.757 [2024-07-12 17:14:03.309997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.757 [2024-07-12 17:14:03.310062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.757 qpair failed and we were unable to recover it. 00:25:03.757 [2024-07-12 17:14:03.310325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.757 [2024-07-12 17:14:03.310371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.757 qpair failed and we were unable to recover it. 00:25:03.757 [2024-07-12 17:14:03.310580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.757 [2024-07-12 17:14:03.310644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.757 qpair failed and we were unable to recover it. 00:25:03.757 [2024-07-12 17:14:03.310889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.757 [2024-07-12 17:14:03.310955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.757 qpair failed and we were unable to recover it. 00:25:03.757 [2024-07-12 17:14:03.311216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.757 [2024-07-12 17:14:03.311262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.757 qpair failed and we were unable to recover it. 00:25:03.757 [2024-07-12 17:14:03.311503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.758 [2024-07-12 17:14:03.311567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.758 qpair failed and we were unable to recover it. 00:25:03.758 [2024-07-12 17:14:03.311844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.758 [2024-07-12 17:14:03.311910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.758 qpair failed and we were unable to recover it. 00:25:03.758 [2024-07-12 17:14:03.312146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.758 [2024-07-12 17:14:03.312192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.758 qpair failed and we were unable to recover it. 
00:25:03.758 [2024-07-12 17:14:03.312427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.758 [2024-07-12 17:14:03.312491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.758 qpair failed and we were unable to recover it. 00:25:03.758 [2024-07-12 17:14:03.312800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.758 [2024-07-12 17:14:03.312868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.758 qpair failed and we were unable to recover it. 00:25:03.758 [2024-07-12 17:14:03.313213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.758 [2024-07-12 17:14:03.313264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.758 qpair failed and we were unable to recover it. 00:25:03.758 [2024-07-12 17:14:03.313524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.758 [2024-07-12 17:14:03.313589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.758 qpair failed and we were unable to recover it. 00:25:03.758 [2024-07-12 17:14:03.313818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.758 [2024-07-12 17:14:03.313884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.758 qpair failed and we were unable to recover it. 00:25:03.758 [2024-07-12 17:14:03.314126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.758 [2024-07-12 17:14:03.314175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.758 qpair failed and we were unable to recover it. 00:25:03.758 [2024-07-12 17:14:03.314413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.758 [2024-07-12 17:14:03.314478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.758 qpair failed and we were unable to recover it. 00:25:03.758 [2024-07-12 17:14:03.314778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.758 [2024-07-12 17:14:03.314843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.758 qpair failed and we were unable to recover it. 00:25:03.758 [2024-07-12 17:14:03.315051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.758 [2024-07-12 17:14:03.315099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.758 qpair failed and we were unable to recover it. 00:25:03.758 [2024-07-12 17:14:03.315286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.758 [2024-07-12 17:14:03.315351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.758 qpair failed and we were unable to recover it. 
00:25:03.758 [2024-07-12 17:14:03.315618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.758 [2024-07-12 17:14:03.315682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.758 qpair failed and we were unable to recover it. 00:25:03.758 [2024-07-12 17:14:03.315933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.758 [2024-07-12 17:14:03.315983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.758 qpair failed and we were unable to recover it. 00:25:03.758 [2024-07-12 17:14:03.316189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.758 [2024-07-12 17:14:03.316253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.758 qpair failed and we were unable to recover it. 00:25:03.758 [2024-07-12 17:14:03.316524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.758 [2024-07-12 17:14:03.316588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.758 qpair failed and we were unable to recover it. 00:25:03.758 [2024-07-12 17:14:03.316900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.758 [2024-07-12 17:14:03.316955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.758 qpair failed and we were unable to recover it. 00:25:03.758 [2024-07-12 17:14:03.317228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.758 [2024-07-12 17:14:03.317293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.758 qpair failed and we were unable to recover it. 00:25:03.758 [2024-07-12 17:14:03.317575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.758 [2024-07-12 17:14:03.317641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.758 qpair failed and we were unable to recover it. 00:25:03.758 [2024-07-12 17:14:03.317923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.758 [2024-07-12 17:14:03.317977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.758 qpair failed and we were unable to recover it. 00:25:03.758 [2024-07-12 17:14:03.318285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.758 [2024-07-12 17:14:03.318361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.758 qpair failed and we were unable to recover it. 00:25:03.758 [2024-07-12 17:14:03.318599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.758 [2024-07-12 17:14:03.318665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.758 qpair failed and we were unable to recover it. 
00:25:03.758 [2024-07-12 17:14:03.318960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.758 [2024-07-12 17:14:03.319013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.758 qpair failed and we were unable to recover it. 00:25:03.758 [2024-07-12 17:14:03.319282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.758 [2024-07-12 17:14:03.319347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.758 qpair failed and we were unable to recover it. 00:25:03.758 [2024-07-12 17:14:03.319660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.758 [2024-07-12 17:14:03.319726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.758 qpair failed and we were unable to recover it. 00:25:03.758 [2024-07-12 17:14:03.320060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.758 [2024-07-12 17:14:03.320116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.758 qpair failed and we were unable to recover it. 00:25:03.758 [2024-07-12 17:14:03.320402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.758 [2024-07-12 17:14:03.320467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.758 qpair failed and we were unable to recover it. 00:25:03.758 [2024-07-12 17:14:03.320798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.758 [2024-07-12 17:14:03.320864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.758 qpair failed and we were unable to recover it. 00:25:03.758 [2024-07-12 17:14:03.321152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.758 [2024-07-12 17:14:03.321208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.758 qpair failed and we were unable to recover it. 00:25:03.758 [2024-07-12 17:14:03.321480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.758 [2024-07-12 17:14:03.321544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.758 qpair failed and we were unable to recover it. 00:25:03.758 [2024-07-12 17:14:03.321791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.758 [2024-07-12 17:14:03.321859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.758 qpair failed and we were unable to recover it. 00:25:03.758 [2024-07-12 17:14:03.322141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.758 [2024-07-12 17:14:03.322198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.758 qpair failed and we were unable to recover it. 
00:25:03.758 [2024-07-12 17:14:03.322438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.758 [2024-07-12 17:14:03.322502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.758 qpair failed and we were unable to recover it. 00:25:03.758 [2024-07-12 17:14:03.322725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.758 [2024-07-12 17:14:03.322805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.758 qpair failed and we were unable to recover it. 00:25:03.758 [2024-07-12 17:14:03.323108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.758 [2024-07-12 17:14:03.323164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.758 qpair failed and we were unable to recover it. 00:25:03.758 [2024-07-12 17:14:03.323444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.758 [2024-07-12 17:14:03.323510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.758 qpair failed and we were unable to recover it. 00:25:03.758 [2024-07-12 17:14:03.323766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.758 [2024-07-12 17:14:03.323833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.758 qpair failed and we were unable to recover it. 00:25:03.758 [2024-07-12 17:14:03.324143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.758 [2024-07-12 17:14:03.324203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.758 qpair failed and we were unable to recover it. 00:25:03.758 [2024-07-12 17:14:03.324490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.758 [2024-07-12 17:14:03.324554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.758 qpair failed and we were unable to recover it. 00:25:03.758 [2024-07-12 17:14:03.324802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.759 [2024-07-12 17:14:03.324869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.759 qpair failed and we were unable to recover it. 00:25:03.759 [2024-07-12 17:14:03.325123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.759 [2024-07-12 17:14:03.325182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.759 qpair failed and we were unable to recover it. 00:25:03.759 [2024-07-12 17:14:03.325471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.759 [2024-07-12 17:14:03.325536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.759 qpair failed and we were unable to recover it. 
00:25:03.759 [2024-07-12 17:14:03.325822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.759 [2024-07-12 17:14:03.325896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.759 qpair failed and we were unable to recover it. 00:25:03.759 [2024-07-12 17:14:03.326191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.759 [2024-07-12 17:14:03.326250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.759 qpair failed and we were unable to recover it. 00:25:03.759 [2024-07-12 17:14:03.326542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.759 [2024-07-12 17:14:03.326607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.759 qpair failed and we were unable to recover it. 00:25:03.759 [2024-07-12 17:14:03.326843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.759 [2024-07-12 17:14:03.326910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.759 qpair failed and we were unable to recover it. 00:25:03.759 [2024-07-12 17:14:03.327191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.759 [2024-07-12 17:14:03.327252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.759 qpair failed and we were unable to recover it. 00:25:03.759 [2024-07-12 17:14:03.327456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.759 [2024-07-12 17:14:03.327521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.759 qpair failed and we were unable to recover it. 00:25:03.759 [2024-07-12 17:14:03.327803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.759 [2024-07-12 17:14:03.327869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.759 qpair failed and we were unable to recover it. 00:25:03.759 [2024-07-12 17:14:03.328165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.759 [2024-07-12 17:14:03.328230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.759 qpair failed and we were unable to recover it. 00:25:03.759 [2024-07-12 17:14:03.328509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.759 [2024-07-12 17:14:03.328574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.759 qpair failed and we were unable to recover it. 00:25:03.759 [2024-07-12 17:14:03.328858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.759 [2024-07-12 17:14:03.328925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.759 qpair failed and we were unable to recover it. 
00:25:03.759 [2024-07-12 17:14:03.329200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.759 [2024-07-12 17:14:03.329265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.759 qpair failed and we were unable to recover it. 00:25:03.759 [2024-07-12 17:14:03.329553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.759 [2024-07-12 17:14:03.329618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.759 qpair failed and we were unable to recover it. 00:25:03.759 [2024-07-12 17:14:03.329873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.759 [2024-07-12 17:14:03.329940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.759 qpair failed and we were unable to recover it. 00:25:03.759 [2024-07-12 17:14:03.330220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.759 [2024-07-12 17:14:03.330285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.759 qpair failed and we were unable to recover it. 00:25:03.759 [2024-07-12 17:14:03.330561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.759 [2024-07-12 17:14:03.330637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.759 qpair failed and we were unable to recover it. 00:25:03.759 [2024-07-12 17:14:03.330939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.759 [2024-07-12 17:14:03.331006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.759 qpair failed and we were unable to recover it. 00:25:03.759 [2024-07-12 17:14:03.331306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.759 [2024-07-12 17:14:03.331371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.759 qpair failed and we were unable to recover it. 00:25:03.759 [2024-07-12 17:14:03.331694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.759 [2024-07-12 17:14:03.331774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.759 qpair failed and we were unable to recover it. 00:25:03.759 [2024-07-12 17:14:03.332022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.759 [2024-07-12 17:14:03.332088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.759 qpair failed and we were unable to recover it. 00:25:03.759 [2024-07-12 17:14:03.332367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.759 [2024-07-12 17:14:03.332432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:03.759 qpair failed and we were unable to recover it. 
00:25:03.759 [2024-07-12 17:14:03.332711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.759 [2024-07-12 17:14:03.332808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420
00:25:03.759 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() errno = 111 in posix_sock_create, the nvme_tcp_qpair_connect_sock error for tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it.") repeats for every remaining connection attempt in this burst, driver timestamps 17:14:03.333 through 17:14:03.405, console timestamps 00:25:03.759 through 00:25:04.041 ...]
00:25:04.041 [2024-07-12 17:14:03.405045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.041 [2024-07-12 17:14:03.405112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420
00:25:04.041 qpair failed and we were unable to recover it.
00:25:04.041 [2024-07-12 17:14:03.405399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.041 [2024-07-12 17:14:03.405464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.041 qpair failed and we were unable to recover it. 00:25:04.041 [2024-07-12 17:14:03.405798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.041 [2024-07-12 17:14:03.405836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.041 qpair failed and we were unable to recover it. 00:25:04.041 [2024-07-12 17:14:03.406107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.041 [2024-07-12 17:14:03.406173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.041 qpair failed and we were unable to recover it. 00:25:04.041 [2024-07-12 17:14:03.406476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.041 [2024-07-12 17:14:03.406541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.041 qpair failed and we were unable to recover it. 00:25:04.041 [2024-07-12 17:14:03.406841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.041 [2024-07-12 17:14:03.406878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.041 qpair failed and we were unable to recover it. 00:25:04.041 [2024-07-12 17:14:03.407102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.041 [2024-07-12 17:14:03.407168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.041 qpair failed and we were unable to recover it. 00:25:04.041 [2024-07-12 17:14:03.407472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.041 [2024-07-12 17:14:03.407537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.041 qpair failed and we were unable to recover it. 00:25:04.041 [2024-07-12 17:14:03.407786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.041 [2024-07-12 17:14:03.407843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.041 qpair failed and we were unable to recover it. 00:25:04.041 [2024-07-12 17:14:03.408101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.041 [2024-07-12 17:14:03.408168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.041 qpair failed and we were unable to recover it. 00:25:04.041 [2024-07-12 17:14:03.408473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.041 [2024-07-12 17:14:03.408539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.041 qpair failed and we were unable to recover it. 
00:25:04.041 [2024-07-12 17:14:03.408840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.041 [2024-07-12 17:14:03.408878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.041 qpair failed and we were unable to recover it. 00:25:04.041 [2024-07-12 17:14:03.409102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.041 [2024-07-12 17:14:03.409167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.041 qpair failed and we were unable to recover it. 00:25:04.041 [2024-07-12 17:14:03.409447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.041 [2024-07-12 17:14:03.409513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.041 qpair failed and we were unable to recover it. 00:25:04.041 [2024-07-12 17:14:03.409791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.041 [2024-07-12 17:14:03.409829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.041 qpair failed and we were unable to recover it. 00:25:04.041 [2024-07-12 17:14:03.410063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.041 [2024-07-12 17:14:03.410128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.041 qpair failed and we were unable to recover it. 00:25:04.041 [2024-07-12 17:14:03.410394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.041 [2024-07-12 17:14:03.410459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.041 qpair failed and we were unable to recover it. 00:25:04.041 [2024-07-12 17:14:03.410719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.041 [2024-07-12 17:14:03.410812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.041 qpair failed and we were unable to recover it. 00:25:04.041 [2024-07-12 17:14:03.411012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.041 [2024-07-12 17:14:03.411074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.041 qpair failed and we were unable to recover it. 00:25:04.041 [2024-07-12 17:14:03.411381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.041 [2024-07-12 17:14:03.411446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.041 qpair failed and we were unable to recover it. 00:25:04.041 [2024-07-12 17:14:03.411754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.041 [2024-07-12 17:14:03.411822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.041 qpair failed and we were unable to recover it. 
00:25:04.041 [2024-07-12 17:14:03.412026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.041 [2024-07-12 17:14:03.412104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.041 qpair failed and we were unable to recover it. 00:25:04.041 [2024-07-12 17:14:03.412364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.041 [2024-07-12 17:14:03.412430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.041 qpair failed and we were unable to recover it. 00:25:04.041 [2024-07-12 17:14:03.412711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.041 [2024-07-12 17:14:03.412803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.041 qpair failed and we were unable to recover it. 00:25:04.041 [2024-07-12 17:14:03.413074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.042 [2024-07-12 17:14:03.413140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.042 qpair failed and we were unable to recover it. 00:25:04.042 [2024-07-12 17:14:03.413443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.042 [2024-07-12 17:14:03.413517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.042 qpair failed and we were unable to recover it. 00:25:04.042 [2024-07-12 17:14:03.413809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.042 [2024-07-12 17:14:03.413847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.042 qpair failed and we were unable to recover it. 00:25:04.042 [2024-07-12 17:14:03.414006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.042 [2024-07-12 17:14:03.414069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.042 qpair failed and we were unable to recover it. 00:25:04.042 [2024-07-12 17:14:03.414343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.042 [2024-07-12 17:14:03.414407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.042 qpair failed and we were unable to recover it. 00:25:04.042 [2024-07-12 17:14:03.414714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.042 [2024-07-12 17:14:03.414805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.042 qpair failed and we were unable to recover it. 00:25:04.042 [2024-07-12 17:14:03.415064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.042 [2024-07-12 17:14:03.415129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.042 qpair failed and we were unable to recover it. 
00:25:04.042 [2024-07-12 17:14:03.415381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.042 [2024-07-12 17:14:03.415445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.042 qpair failed and we were unable to recover it. 00:25:04.042 [2024-07-12 17:14:03.415667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.042 [2024-07-12 17:14:03.415733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.042 qpair failed and we were unable to recover it. 00:25:04.042 [2024-07-12 17:14:03.416023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.042 [2024-07-12 17:14:03.416088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.042 qpair failed and we were unable to recover it. 00:25:04.042 [2024-07-12 17:14:03.416297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.042 [2024-07-12 17:14:03.416362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.042 qpair failed and we were unable to recover it. 00:25:04.042 [2024-07-12 17:14:03.416624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.042 [2024-07-12 17:14:03.416690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.042 qpair failed and we were unable to recover it. 00:25:04.042 [2024-07-12 17:14:03.416941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.042 [2024-07-12 17:14:03.417007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.042 qpair failed and we were unable to recover it. 00:25:04.042 [2024-07-12 17:14:03.417253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.042 [2024-07-12 17:14:03.417319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.042 qpair failed and we were unable to recover it. 00:25:04.042 [2024-07-12 17:14:03.417619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.042 [2024-07-12 17:14:03.417684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.042 qpair failed and we were unable to recover it. 00:25:04.042 [2024-07-12 17:14:03.418019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.042 [2024-07-12 17:14:03.418085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.042 qpair failed and we were unable to recover it. 00:25:04.042 [2024-07-12 17:14:03.418363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.042 [2024-07-12 17:14:03.418428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.042 qpair failed and we were unable to recover it. 
00:25:04.042 [2024-07-12 17:14:03.418691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.042 [2024-07-12 17:14:03.418774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.042 qpair failed and we were unable to recover it. 00:25:04.042 [2024-07-12 17:14:03.419032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.042 [2024-07-12 17:14:03.419098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.042 qpair failed and we were unable to recover it. 00:25:04.042 [2024-07-12 17:14:03.419391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.042 [2024-07-12 17:14:03.419457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.042 qpair failed and we were unable to recover it. 00:25:04.042 [2024-07-12 17:14:03.419709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.042 [2024-07-12 17:14:03.419793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.042 qpair failed and we were unable to recover it. 00:25:04.042 [2024-07-12 17:14:03.420058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.042 [2024-07-12 17:14:03.420123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.042 qpair failed and we were unable to recover it. 00:25:04.042 [2024-07-12 17:14:03.420390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.042 [2024-07-12 17:14:03.420456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.042 qpair failed and we were unable to recover it. 00:25:04.042 [2024-07-12 17:14:03.420708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.042 [2024-07-12 17:14:03.420793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.042 qpair failed and we were unable to recover it. 00:25:04.042 [2024-07-12 17:14:03.421070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.042 [2024-07-12 17:14:03.421136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.042 qpair failed and we were unable to recover it. 00:25:04.042 [2024-07-12 17:14:03.421441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.042 [2024-07-12 17:14:03.421506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.042 qpair failed and we were unable to recover it. 00:25:04.042 [2024-07-12 17:14:03.421774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.042 [2024-07-12 17:14:03.421864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.042 qpair failed and we were unable to recover it. 
00:25:04.042 [2024-07-12 17:14:03.422177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.042 [2024-07-12 17:14:03.422242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.042 qpair failed and we were unable to recover it. 00:25:04.042 [2024-07-12 17:14:03.422560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.042 [2024-07-12 17:14:03.422626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.042 qpair failed and we were unable to recover it. 00:25:04.042 [2024-07-12 17:14:03.422889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.042 [2024-07-12 17:14:03.422957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.042 qpair failed and we were unable to recover it. 00:25:04.042 [2024-07-12 17:14:03.423232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.042 [2024-07-12 17:14:03.423298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.042 qpair failed and we were unable to recover it. 00:25:04.042 [2024-07-12 17:14:03.423568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.042 [2024-07-12 17:14:03.423632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.042 qpair failed and we were unable to recover it. 00:25:04.042 [2024-07-12 17:14:03.423958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.042 [2024-07-12 17:14:03.424025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.042 qpair failed and we were unable to recover it. 00:25:04.042 [2024-07-12 17:14:03.424330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.042 [2024-07-12 17:14:03.424396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.042 qpair failed and we were unable to recover it. 00:25:04.042 [2024-07-12 17:14:03.424625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.042 [2024-07-12 17:14:03.424690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.042 qpair failed and we were unable to recover it. 00:25:04.042 [2024-07-12 17:14:03.424966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.042 [2024-07-12 17:14:03.425032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.042 qpair failed and we were unable to recover it. 00:25:04.042 [2024-07-12 17:14:03.425335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.042 [2024-07-12 17:14:03.425400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.042 qpair failed and we were unable to recover it. 
00:25:04.042 [2024-07-12 17:14:03.425682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.042 [2024-07-12 17:14:03.425768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.042 qpair failed and we were unable to recover it. 00:25:04.042 [2024-07-12 17:14:03.426092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.043 [2024-07-12 17:14:03.426158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.043 qpair failed and we were unable to recover it. 00:25:04.043 [2024-07-12 17:14:03.426453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.043 [2024-07-12 17:14:03.426518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.043 qpair failed and we were unable to recover it. 00:25:04.043 [2024-07-12 17:14:03.426824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.043 [2024-07-12 17:14:03.426892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.043 qpair failed and we were unable to recover it. 00:25:04.043 [2024-07-12 17:14:03.427198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.043 [2024-07-12 17:14:03.427272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.043 qpair failed and we were unable to recover it. 00:25:04.043 [2024-07-12 17:14:03.427480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.043 [2024-07-12 17:14:03.427544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.043 qpair failed and we were unable to recover it. 00:25:04.043 [2024-07-12 17:14:03.427831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.043 [2024-07-12 17:14:03.427897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.043 qpair failed and we were unable to recover it. 00:25:04.043 [2024-07-12 17:14:03.428205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.043 [2024-07-12 17:14:03.428269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.043 qpair failed and we were unable to recover it. 00:25:04.043 [2024-07-12 17:14:03.428584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.043 [2024-07-12 17:14:03.428648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.043 qpair failed and we were unable to recover it. 00:25:04.043 [2024-07-12 17:14:03.428926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.043 [2024-07-12 17:14:03.428993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.043 qpair failed and we were unable to recover it. 
00:25:04.043 [2024-07-12 17:14:03.429295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.043 [2024-07-12 17:14:03.429359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.043 qpair failed and we were unable to recover it. 00:25:04.043 [2024-07-12 17:14:03.429621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.043 [2024-07-12 17:14:03.429686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.043 qpair failed and we were unable to recover it. 00:25:04.043 [2024-07-12 17:14:03.429958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.043 [2024-07-12 17:14:03.430024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.043 qpair failed and we were unable to recover it. 00:25:04.043 [2024-07-12 17:14:03.430221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.043 [2024-07-12 17:14:03.430285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.043 qpair failed and we were unable to recover it. 00:25:04.043 [2024-07-12 17:14:03.430545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.043 [2024-07-12 17:14:03.430609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.043 qpair failed and we were unable to recover it. 00:25:04.043 [2024-07-12 17:14:03.430872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.043 [2024-07-12 17:14:03.430939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.043 qpair failed and we were unable to recover it. 00:25:04.043 [2024-07-12 17:14:03.431218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.043 [2024-07-12 17:14:03.431283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.043 qpair failed and we were unable to recover it. 00:25:04.043 [2024-07-12 17:14:03.431576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.043 [2024-07-12 17:14:03.431641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.043 qpair failed and we were unable to recover it. 00:25:04.043 [2024-07-12 17:14:03.431971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.043 [2024-07-12 17:14:03.432037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.043 qpair failed and we were unable to recover it. 00:25:04.043 [2024-07-12 17:14:03.432334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.043 [2024-07-12 17:14:03.432400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.043 qpair failed and we were unable to recover it. 
00:25:04.043 [2024-07-12 17:14:03.432647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.043 [2024-07-12 17:14:03.432712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.043 qpair failed and we were unable to recover it. 00:25:04.043 [2024-07-12 17:14:03.433001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.043 [2024-07-12 17:14:03.433066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.043 qpair failed and we were unable to recover it. 00:25:04.043 [2024-07-12 17:14:03.433333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.043 [2024-07-12 17:14:03.433398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.043 qpair failed and we were unable to recover it. 00:25:04.043 [2024-07-12 17:14:03.433625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.043 [2024-07-12 17:14:03.433690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.043 qpair failed and we were unable to recover it. 00:25:04.043 [2024-07-12 17:14:03.433936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.043 [2024-07-12 17:14:03.434002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.043 qpair failed and we were unable to recover it. 00:25:04.043 [2024-07-12 17:14:03.434278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.043 [2024-07-12 17:14:03.434343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.043 qpair failed and we were unable to recover it. 00:25:04.043 [2024-07-12 17:14:03.434645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.043 [2024-07-12 17:14:03.434710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.043 qpair failed and we were unable to recover it. 00:25:04.043 [2024-07-12 17:14:03.435007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.043 [2024-07-12 17:14:03.435072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.043 qpair failed and we were unable to recover it. 00:25:04.043 [2024-07-12 17:14:03.435343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.043 [2024-07-12 17:14:03.435409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.043 qpair failed and we were unable to recover it. 00:25:04.043 [2024-07-12 17:14:03.435710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.043 [2024-07-12 17:14:03.435795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.043 qpair failed and we were unable to recover it. 
00:25:04.043 [2024-07-12 17:14:03.436027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.043 [2024-07-12 17:14:03.436092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.043 qpair failed and we were unable to recover it. 00:25:04.043 [2024-07-12 17:14:03.436341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.043 [2024-07-12 17:14:03.436407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.043 qpair failed and we were unable to recover it. 00:25:04.043 [2024-07-12 17:14:03.436674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.043 [2024-07-12 17:14:03.436757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.043 qpair failed and we were unable to recover it. 00:25:04.043 [2024-07-12 17:14:03.437022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.043 [2024-07-12 17:14:03.437088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.043 qpair failed and we were unable to recover it. 00:25:04.043 [2024-07-12 17:14:03.437386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.043 [2024-07-12 17:14:03.437451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.043 qpair failed and we were unable to recover it. 00:25:04.043 [2024-07-12 17:14:03.437762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.043 [2024-07-12 17:14:03.437828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.043 qpair failed and we were unable to recover it. 00:25:04.043 [2024-07-12 17:14:03.438135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.043 [2024-07-12 17:14:03.438200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.043 qpair failed and we were unable to recover it. 00:25:04.043 [2024-07-12 17:14:03.438498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.043 [2024-07-12 17:14:03.438564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.043 qpair failed and we were unable to recover it. 00:25:04.043 [2024-07-12 17:14:03.438847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.043 [2024-07-12 17:14:03.438914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.043 qpair failed and we were unable to recover it. 00:25:04.043 [2024-07-12 17:14:03.439219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.043 [2024-07-12 17:14:03.439284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.043 qpair failed and we were unable to recover it. 
00:25:04.044 [2024-07-12 17:14:03.439587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.044 [2024-07-12 17:14:03.439652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.044 qpair failed and we were unable to recover it. 00:25:04.044 [2024-07-12 17:14:03.439960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.044 [2024-07-12 17:14:03.440028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.044 qpair failed and we were unable to recover it. 00:25:04.044 [2024-07-12 17:14:03.440303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.044 [2024-07-12 17:14:03.440368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.044 qpair failed and we were unable to recover it. 00:25:04.044 [2024-07-12 17:14:03.440599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.044 [2024-07-12 17:14:03.440663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.044 qpair failed and we were unable to recover it. 00:25:04.044 [2024-07-12 17:14:03.440979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.044 [2024-07-12 17:14:03.441055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.044 qpair failed and we were unable to recover it. 00:25:04.044 [2024-07-12 17:14:03.441367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.044 [2024-07-12 17:14:03.441431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.044 qpair failed and we were unable to recover it. 00:25:04.044 [2024-07-12 17:14:03.441664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.044 [2024-07-12 17:14:03.441729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.044 qpair failed and we were unable to recover it. 00:25:04.044 [2024-07-12 17:14:03.442017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.044 [2024-07-12 17:14:03.442083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.044 qpair failed and we were unable to recover it. 00:25:04.044 [2024-07-12 17:14:03.442396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.044 [2024-07-12 17:14:03.442461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.044 qpair failed and we were unable to recover it. 00:25:04.044 [2024-07-12 17:14:03.442765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.044 [2024-07-12 17:14:03.442831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.044 qpair failed and we were unable to recover it. 
00:25:04.044 [2024-07-12 17:14:03.443088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.044 [2024-07-12 17:14:03.443153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.044 qpair failed and we were unable to recover it. 00:25:04.044 [2024-07-12 17:14:03.443462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.044 [2024-07-12 17:14:03.443527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.044 qpair failed and we were unable to recover it. 00:25:04.044 [2024-07-12 17:14:03.443793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.044 [2024-07-12 17:14:03.443860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.044 qpair failed and we were unable to recover it. 00:25:04.044 [2024-07-12 17:14:03.444121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.044 [2024-07-12 17:14:03.444185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.044 qpair failed and we were unable to recover it. 00:25:04.044 [2024-07-12 17:14:03.444461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.044 [2024-07-12 17:14:03.444526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.044 qpair failed and we were unable to recover it. 00:25:04.044 [2024-07-12 17:14:03.444820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.044 [2024-07-12 17:14:03.444887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.044 qpair failed and we were unable to recover it. 00:25:04.044 [2024-07-12 17:14:03.445163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.044 [2024-07-12 17:14:03.445228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.044 qpair failed and we were unable to recover it. 00:25:04.044 [2024-07-12 17:14:03.445527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.044 [2024-07-12 17:14:03.445592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.044 qpair failed and we were unable to recover it. 00:25:04.044 [2024-07-12 17:14:03.445907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.044 [2024-07-12 17:14:03.445974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.044 qpair failed and we were unable to recover it. 00:25:04.044 [2024-07-12 17:14:03.446240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.044 [2024-07-12 17:14:03.446305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.044 qpair failed and we were unable to recover it. 
00:25:04.044 [2024-07-12 17:14:03.446561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.044 [2024-07-12 17:14:03.446626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.044 qpair failed and we were unable to recover it. 00:25:04.044 [2024-07-12 17:14:03.446949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.044 [2024-07-12 17:14:03.447015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.044 qpair failed and we were unable to recover it. 00:25:04.044 [2024-07-12 17:14:03.447275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.044 [2024-07-12 17:14:03.447339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.044 qpair failed and we were unable to recover it. 00:25:04.044 [2024-07-12 17:14:03.447646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.044 [2024-07-12 17:14:03.447711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.044 qpair failed and we were unable to recover it. 00:25:04.044 [2024-07-12 17:14:03.448048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.044 [2024-07-12 17:14:03.448114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.044 qpair failed and we were unable to recover it. 00:25:04.044 [2024-07-12 17:14:03.448367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.044 [2024-07-12 17:14:03.448431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.044 qpair failed and we were unable to recover it. 00:25:04.044 [2024-07-12 17:14:03.448646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.044 [2024-07-12 17:14:03.448709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.044 qpair failed and we were unable to recover it. 00:25:04.044 [2024-07-12 17:14:03.449034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.044 [2024-07-12 17:14:03.449099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.044 qpair failed and we were unable to recover it. 00:25:04.044 [2024-07-12 17:14:03.449365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.044 [2024-07-12 17:14:03.449430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.044 qpair failed and we were unable to recover it. 00:25:04.044 [2024-07-12 17:14:03.449735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.044 [2024-07-12 17:14:03.449821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.044 qpair failed and we were unable to recover it. 
00:25:04.044 [2024-07-12 17:14:03.450117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.044 [2024-07-12 17:14:03.450182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.044 qpair failed and we were unable to recover it.
00:25:04.044 - 00:25:04.050 [2024-07-12 17:14:03.450492 - 17:14:03.500791] the same failure pattern repeats for every subsequent connection attempt in this interval: posix.c:1038:posix_sock_create reports connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock reports a sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420; and each attempt ends with "qpair failed and we were unable to recover it."
00:25:04.050 [2024-07-12 17:14:03.501049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.050 [2024-07-12 17:14:03.501125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.050 qpair failed and we were unable to recover it. 00:25:04.050 [2024-07-12 17:14:03.501373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.050 [2024-07-12 17:14:03.501446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.050 qpair failed and we were unable to recover it. 00:25:04.050 [2024-07-12 17:14:03.501728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.050 [2024-07-12 17:14:03.501800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.050 qpair failed and we were unable to recover it. 00:25:04.050 [2024-07-12 17:14:03.502053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.050 [2024-07-12 17:14:03.502078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.050 qpair failed and we were unable to recover it. 00:25:04.050 [2024-07-12 17:14:03.502326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.050 [2024-07-12 17:14:03.502400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.050 qpair failed and we were unable to recover it. 00:25:04.050 [2024-07-12 17:14:03.502694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.050 [2024-07-12 17:14:03.502790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.050 qpair failed and we were unable to recover it. 00:25:04.050 [2024-07-12 17:14:03.503023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.050 [2024-07-12 17:14:03.503078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.050 qpair failed and we were unable to recover it. 00:25:04.050 [2024-07-12 17:14:03.503374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.050 [2024-07-12 17:14:03.503447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.050 qpair failed and we were unable to recover it. 00:25:04.050 [2024-07-12 17:14:03.503759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.050 [2024-07-12 17:14:03.503806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.050 qpair failed and we were unable to recover it. 00:25:04.050 [2024-07-12 17:14:03.504030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.050 [2024-07-12 17:14:03.504055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.050 qpair failed and we were unable to recover it. 
00:25:04.050 [2024-07-12 17:14:03.504250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.050 [2024-07-12 17:14:03.504305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.050 qpair failed and we were unable to recover it. 00:25:04.050 [2024-07-12 17:14:03.504599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.050 [2024-07-12 17:14:03.504673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.050 qpair failed and we were unable to recover it. 00:25:04.050 [2024-07-12 17:14:03.504877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.050 [2024-07-12 17:14:03.504901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.050 qpair failed and we were unable to recover it. 00:25:04.050 [2024-07-12 17:14:03.505116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.050 [2024-07-12 17:14:03.505190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.050 qpair failed and we were unable to recover it. 00:25:04.050 [2024-07-12 17:14:03.505486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.050 [2024-07-12 17:14:03.505559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.050 qpair failed and we were unable to recover it. 00:25:04.050 [2024-07-12 17:14:03.505804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.050 [2024-07-12 17:14:03.505831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.050 qpair failed and we were unable to recover it. 00:25:04.050 [2024-07-12 17:14:03.506015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.050 [2024-07-12 17:14:03.506055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.050 qpair failed and we were unable to recover it. 00:25:04.050 [2024-07-12 17:14:03.506216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.050 [2024-07-12 17:14:03.506291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.050 qpair failed and we were unable to recover it. 00:25:04.050 [2024-07-12 17:14:03.506582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.050 [2024-07-12 17:14:03.506638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.050 qpair failed and we were unable to recover it. 00:25:04.050 [2024-07-12 17:14:03.506934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.050 [2024-07-12 17:14:03.506961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.050 qpair failed and we were unable to recover it. 
00:25:04.050 [2024-07-12 17:14:03.507145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.050 [2024-07-12 17:14:03.507220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.050 qpair failed and we were unable to recover it. 00:25:04.050 [2024-07-12 17:14:03.507457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.050 [2024-07-12 17:14:03.507530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.050 qpair failed and we were unable to recover it. 00:25:04.050 [2024-07-12 17:14:03.507824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.050 [2024-07-12 17:14:03.507850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.050 qpair failed and we were unable to recover it. 00:25:04.050 [2024-07-12 17:14:03.508053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.050 [2024-07-12 17:14:03.508078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.050 qpair failed and we were unable to recover it. 00:25:04.050 [2024-07-12 17:14:03.508254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.050 [2024-07-12 17:14:03.508278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.050 qpair failed and we were unable to recover it. 00:25:04.050 [2024-07-12 17:14:03.508462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.050 [2024-07-12 17:14:03.508487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.050 qpair failed and we were unable to recover it. 00:25:04.050 [2024-07-12 17:14:03.508744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.050 [2024-07-12 17:14:03.508770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.051 qpair failed and we were unable to recover it. 00:25:04.051 [2024-07-12 17:14:03.508952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.051 [2024-07-12 17:14:03.508978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.051 qpair failed and we were unable to recover it. 00:25:04.051 [2024-07-12 17:14:03.509246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.051 [2024-07-12 17:14:03.509318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.051 qpair failed and we were unable to recover it. 00:25:04.051 [2024-07-12 17:14:03.509615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.051 [2024-07-12 17:14:03.509689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.051 qpair failed and we were unable to recover it. 
00:25:04.051 [2024-07-12 17:14:03.509907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.051 [2024-07-12 17:14:03.509934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.051 qpair failed and we were unable to recover it. 00:25:04.051 [2024-07-12 17:14:03.510152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.051 [2024-07-12 17:14:03.510233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.051 qpair failed and we were unable to recover it. 00:25:04.051 [2024-07-12 17:14:03.510491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.051 [2024-07-12 17:14:03.510563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.051 qpair failed and we were unable to recover it. 00:25:04.051 [2024-07-12 17:14:03.510827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.051 [2024-07-12 17:14:03.510853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.051 qpair failed and we were unable to recover it. 00:25:04.051 [2024-07-12 17:14:03.511064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.051 [2024-07-12 17:14:03.511141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.051 qpair failed and we were unable to recover it. 00:25:04.051 [2024-07-12 17:14:03.511452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.051 [2024-07-12 17:14:03.511526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.051 qpair failed and we were unable to recover it. 00:25:04.051 [2024-07-12 17:14:03.511814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.051 [2024-07-12 17:14:03.511871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.051 qpair failed and we were unable to recover it. 00:25:04.051 [2024-07-12 17:14:03.512155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.051 [2024-07-12 17:14:03.512227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.051 qpair failed and we were unable to recover it. 00:25:04.051 [2024-07-12 17:14:03.512529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.051 [2024-07-12 17:14:03.512604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.051 qpair failed and we were unable to recover it. 00:25:04.051 [2024-07-12 17:14:03.512893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.051 [2024-07-12 17:14:03.512951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.051 qpair failed and we were unable to recover it. 
00:25:04.051 [2024-07-12 17:14:03.513164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.051 [2024-07-12 17:14:03.513236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.051 qpair failed and we were unable to recover it. 00:25:04.051 [2024-07-12 17:14:03.513479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.051 [2024-07-12 17:14:03.513552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.051 qpair failed and we were unable to recover it. 00:25:04.051 [2024-07-12 17:14:03.513844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.051 [2024-07-12 17:14:03.513920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.051 qpair failed and we were unable to recover it. 00:25:04.051 [2024-07-12 17:14:03.514158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.051 [2024-07-12 17:14:03.514231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.051 qpair failed and we were unable to recover it. 00:25:04.051 [2024-07-12 17:14:03.514530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.051 [2024-07-12 17:14:03.514604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.051 qpair failed and we were unable to recover it. 00:25:04.051 [2024-07-12 17:14:03.514901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.051 [2024-07-12 17:14:03.514977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.051 qpair failed and we were unable to recover it. 00:25:04.051 [2024-07-12 17:14:03.515241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.051 [2024-07-12 17:14:03.515314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.051 qpair failed and we were unable to recover it. 00:25:04.051 [2024-07-12 17:14:03.515550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.051 [2024-07-12 17:14:03.515606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.051 qpair failed and we were unable to recover it. 00:25:04.051 [2024-07-12 17:14:03.515865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.051 [2024-07-12 17:14:03.515938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.051 qpair failed and we were unable to recover it. 00:25:04.051 [2024-07-12 17:14:03.516232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.051 [2024-07-12 17:14:03.516306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.051 qpair failed and we were unable to recover it. 
00:25:04.051 [2024-07-12 17:14:03.516519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.051 [2024-07-12 17:14:03.516575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.051 qpair failed and we were unable to recover it. 00:25:04.051 [2024-07-12 17:14:03.516782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.051 [2024-07-12 17:14:03.516845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.051 qpair failed and we were unable to recover it. 00:25:04.051 [2024-07-12 17:14:03.517043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.051 [2024-07-12 17:14:03.517071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.051 qpair failed and we were unable to recover it. 00:25:04.051 [2024-07-12 17:14:03.517282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.051 [2024-07-12 17:14:03.517355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.051 qpair failed and we were unable to recover it. 00:25:04.051 [2024-07-12 17:14:03.517650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.051 [2024-07-12 17:14:03.517706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.051 qpair failed and we were unable to recover it. 00:25:04.051 [2024-07-12 17:14:03.518006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.051 [2024-07-12 17:14:03.518036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.051 qpair failed and we were unable to recover it. 00:25:04.051 [2024-07-12 17:14:03.518242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.051 [2024-07-12 17:14:03.518314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.051 qpair failed and we were unable to recover it. 00:25:04.051 [2024-07-12 17:14:03.518614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.051 [2024-07-12 17:14:03.518670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.051 qpair failed and we were unable to recover it. 00:25:04.051 [2024-07-12 17:14:03.518988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.051 [2024-07-12 17:14:03.519018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.051 qpair failed and we were unable to recover it. 00:25:04.051 [2024-07-12 17:14:03.519190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.051 [2024-07-12 17:14:03.519261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.051 qpair failed and we were unable to recover it. 
00:25:04.051 [2024-07-12 17:14:03.519547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.051 [2024-07-12 17:14:03.519621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.051 qpair failed and we were unable to recover it. 00:25:04.051 [2024-07-12 17:14:03.519886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.051 [2024-07-12 17:14:03.519917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.051 qpair failed and we were unable to recover it. 00:25:04.051 [2024-07-12 17:14:03.520154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.051 [2024-07-12 17:14:03.520227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.051 qpair failed and we were unable to recover it. 00:25:04.051 [2024-07-12 17:14:03.520525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.051 [2024-07-12 17:14:03.520598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.051 qpair failed and we were unable to recover it. 00:25:04.051 [2024-07-12 17:14:03.520840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.051 [2024-07-12 17:14:03.520871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.051 qpair failed and we were unable to recover it. 00:25:04.051 [2024-07-12 17:14:03.521057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.051 [2024-07-12 17:14:03.521135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.051 qpair failed and we were unable to recover it. 00:25:04.052 [2024-07-12 17:14:03.521421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.052 [2024-07-12 17:14:03.521495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.052 qpair failed and we were unable to recover it. 00:25:04.052 [2024-07-12 17:14:03.521794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.052 [2024-07-12 17:14:03.521824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.052 qpair failed and we were unable to recover it. 00:25:04.052 [2024-07-12 17:14:03.521973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.052 [2024-07-12 17:14:03.522002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.052 qpair failed and we were unable to recover it. 00:25:04.052 [2024-07-12 17:14:03.522296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.052 [2024-07-12 17:14:03.522368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.052 qpair failed and we were unable to recover it. 
00:25:04.052 [2024-07-12 17:14:03.522605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.052 [2024-07-12 17:14:03.522661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.052 qpair failed and we were unable to recover it. 00:25:04.052 [2024-07-12 17:14:03.522958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.052 [2024-07-12 17:14:03.522993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.052 qpair failed and we were unable to recover it. 00:25:04.052 [2024-07-12 17:14:03.523302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.052 [2024-07-12 17:14:03.523374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.052 qpair failed and we were unable to recover it. 00:25:04.052 [2024-07-12 17:14:03.523654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.052 [2024-07-12 17:14:03.523709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.052 qpair failed and we were unable to recover it. 00:25:04.052 [2024-07-12 17:14:03.523981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.052 [2024-07-12 17:14:03.524011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.052 qpair failed and we were unable to recover it. 00:25:04.052 [2024-07-12 17:14:03.524323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.052 [2024-07-12 17:14:03.524396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.052 qpair failed and we were unable to recover it. 00:25:04.052 [2024-07-12 17:14:03.524642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.052 [2024-07-12 17:14:03.524698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.052 qpair failed and we were unable to recover it. 00:25:04.052 [2024-07-12 17:14:03.525004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.052 [2024-07-12 17:14:03.525066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.052 qpair failed and we were unable to recover it. 00:25:04.052 [2024-07-12 17:14:03.525307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.052 [2024-07-12 17:14:03.525380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.052 qpair failed and we were unable to recover it. 00:25:04.052 [2024-07-12 17:14:03.525676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.052 [2024-07-12 17:14:03.525731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.052 qpair failed and we were unable to recover it. 
00:25:04.052 [2024-07-12 17:14:03.525985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.052 [2024-07-12 17:14:03.526015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.052 qpair failed and we were unable to recover it. 00:25:04.052 [2024-07-12 17:14:03.526330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.052 [2024-07-12 17:14:03.526404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.052 qpair failed and we were unable to recover it. 00:25:04.052 [2024-07-12 17:14:03.526693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.052 [2024-07-12 17:14:03.526780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.052 qpair failed and we were unable to recover it. 00:25:04.052 [2024-07-12 17:14:03.527066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.052 [2024-07-12 17:14:03.527096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.052 qpair failed and we were unable to recover it. 00:25:04.052 [2024-07-12 17:14:03.527300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.052 [2024-07-12 17:14:03.527330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.052 qpair failed and we were unable to recover it. 00:25:04.052 [2024-07-12 17:14:03.527566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.052 [2024-07-12 17:14:03.527596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.052 qpair failed and we were unable to recover it. 00:25:04.052 [2024-07-12 17:14:03.527788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.052 [2024-07-12 17:14:03.527826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.052 qpair failed and we were unable to recover it. 00:25:04.052 [2024-07-12 17:14:03.528092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.052 [2024-07-12 17:14:03.528134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.052 qpair failed and we were unable to recover it. 00:25:04.052 [2024-07-12 17:14:03.528353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.052 [2024-07-12 17:14:03.528397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.052 qpair failed and we were unable to recover it. 00:25:04.052 [2024-07-12 17:14:03.528658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.052 [2024-07-12 17:14:03.528720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.052 qpair failed and we were unable to recover it. 
00:25:04.052 [2024-07-12 17:14:03.528956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.052 [2024-07-12 17:14:03.528987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.052 qpair failed and we were unable to recover it. 00:25:04.052 [2024-07-12 17:14:03.529235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.052 [2024-07-12 17:14:03.529272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.052 qpair failed and we were unable to recover it. 00:25:04.052 [2024-07-12 17:14:03.529529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.052 [2024-07-12 17:14:03.529566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.052 qpair failed and we were unable to recover it. 00:25:04.052 [2024-07-12 17:14:03.529819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.052 [2024-07-12 17:14:03.529851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.052 qpair failed and we were unable to recover it. 00:25:04.052 [2024-07-12 17:14:03.530090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.052 [2024-07-12 17:14:03.530120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.052 qpair failed and we were unable to recover it. 00:25:04.052 [2024-07-12 17:14:03.530363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.052 [2024-07-12 17:14:03.530404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.052 qpair failed and we were unable to recover it. 00:25:04.052 [2024-07-12 17:14:03.530721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.052 [2024-07-12 17:14:03.530823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.052 qpair failed and we were unable to recover it. 00:25:04.052 [2024-07-12 17:14:03.531061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.052 [2024-07-12 17:14:03.531096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.052 qpair failed and we were unable to recover it. 00:25:04.052 [2024-07-12 17:14:03.531305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.052 [2024-07-12 17:14:03.531341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.052 qpair failed and we were unable to recover it. 00:25:04.052 [2024-07-12 17:14:03.531550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.052 [2024-07-12 17:14:03.531586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.052 qpair failed and we were unable to recover it. 
00:25:04.052 [2024-07-12 17:14:03.531839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.052 [2024-07-12 17:14:03.531870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.052 qpair failed and we were unable to recover it. 00:25:04.052 [2024-07-12 17:14:03.532041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.052 [2024-07-12 17:14:03.532076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.052 qpair failed and we were unable to recover it. 00:25:04.052 [2024-07-12 17:14:03.532280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.052 [2024-07-12 17:14:03.532316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.052 qpair failed and we were unable to recover it. 00:25:04.052 [2024-07-12 17:14:03.532565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.052 [2024-07-12 17:14:03.532601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.052 qpair failed and we were unable to recover it. 00:25:04.052 [2024-07-12 17:14:03.532858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.052 [2024-07-12 17:14:03.532889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.052 qpair failed and we were unable to recover it. 00:25:04.052 [2024-07-12 17:14:03.533123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.052 [2024-07-12 17:14:03.533159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.052 qpair failed and we were unable to recover it. 00:25:04.053 [2024-07-12 17:14:03.533368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.053 [2024-07-12 17:14:03.533403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.053 qpair failed and we were unable to recover it. 00:25:04.053 [2024-07-12 17:14:03.533637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.053 [2024-07-12 17:14:03.533672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.053 qpair failed and we were unable to recover it. 00:25:04.053 [2024-07-12 17:14:03.533818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.053 [2024-07-12 17:14:03.533847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.053 qpair failed and we were unable to recover it. 00:25:04.053 [2024-07-12 17:14:03.534052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.053 [2024-07-12 17:14:03.534082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.053 qpair failed and we were unable to recover it. 
00:25:04.053 [2024-07-12 17:14:03.534310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.053 [2024-07-12 17:14:03.534345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.053 qpair failed and we were unable to recover it. 00:25:04.053 [2024-07-12 17:14:03.534575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.053 [2024-07-12 17:14:03.534616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.053 qpair failed and we were unable to recover it. 00:25:04.053 [2024-07-12 17:14:03.534793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.053 [2024-07-12 17:14:03.534822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.053 qpair failed and we were unable to recover it. 00:25:04.053 [2024-07-12 17:14:03.535032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.053 [2024-07-12 17:14:03.535068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.053 qpair failed and we were unable to recover it. 00:25:04.053 [2024-07-12 17:14:03.535313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.053 [2024-07-12 17:14:03.535348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.053 qpair failed and we were unable to recover it. 00:25:04.053 [2024-07-12 17:14:03.535548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.053 [2024-07-12 17:14:03.535584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.053 qpair failed and we were unable to recover it. 00:25:04.053 [2024-07-12 17:14:03.535792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.053 [2024-07-12 17:14:03.535824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.053 qpair failed and we were unable to recover it. 00:25:04.053 [2024-07-12 17:14:03.536007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.053 [2024-07-12 17:14:03.536053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.053 qpair failed and we were unable to recover it. 00:25:04.053 [2024-07-12 17:14:03.536240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.053 [2024-07-12 17:14:03.536274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.053 qpair failed and we were unable to recover it. 00:25:04.053 [2024-07-12 17:14:03.536494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.053 [2024-07-12 17:14:03.536528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.053 qpair failed and we were unable to recover it. 
00:25:04.053 [2024-07-12 17:14:03.536766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.053 [2024-07-12 17:14:03.536813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.053 qpair failed and we were unable to recover it. 00:25:04.053 [2024-07-12 17:14:03.537046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.053 [2024-07-12 17:14:03.537076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.053 qpair failed and we were unable to recover it. 00:25:04.053 [2024-07-12 17:14:03.537265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.053 [2024-07-12 17:14:03.537299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.053 qpair failed and we were unable to recover it. 00:25:04.053 [2024-07-12 17:14:03.537497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.053 [2024-07-12 17:14:03.537531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.053 qpair failed and we were unable to recover it. 00:25:04.053 [2024-07-12 17:14:03.537791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.053 [2024-07-12 17:14:03.537821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.053 qpair failed and we were unable to recover it. 00:25:04.053 [2024-07-12 17:14:03.538030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.053 [2024-07-12 17:14:03.538064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.053 qpair failed and we were unable to recover it. 00:25:04.053 [2024-07-12 17:14:03.538251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.053 [2024-07-12 17:14:03.538285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.053 qpair failed and we were unable to recover it. 00:25:04.053 [2024-07-12 17:14:03.538522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.053 [2024-07-12 17:14:03.538556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.053 qpair failed and we were unable to recover it. 00:25:04.053 [2024-07-12 17:14:03.538754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.053 [2024-07-12 17:14:03.538803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.053 qpair failed and we were unable to recover it. 00:25:04.053 [2024-07-12 17:14:03.538993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.053 [2024-07-12 17:14:03.539039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.053 qpair failed and we were unable to recover it. 
00:25:04.053 [2024-07-12 17:14:03.539271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.053 [2024-07-12 17:14:03.539304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420
00:25:04.053 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnection attempt logged between 17:14:03.539271 and 17:14:03.591657, all against the same tqpair and target address ...]
00:25:04.059 [2024-07-12 17:14:03.591896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.059 [2024-07-12 17:14:03.591928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.059 qpair failed and we were unable to recover it. 00:25:04.059 [2024-07-12 17:14:03.592099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.059 [2024-07-12 17:14:03.592130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.059 qpair failed and we were unable to recover it. 00:25:04.059 [2024-07-12 17:14:03.592282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.059 [2024-07-12 17:14:03.592313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.059 qpair failed and we were unable to recover it. 00:25:04.059 [2024-07-12 17:14:03.592536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.059 [2024-07-12 17:14:03.592568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.059 qpair failed and we were unable to recover it. 00:25:04.059 [2024-07-12 17:14:03.592796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.059 [2024-07-12 17:14:03.592828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.059 qpair failed and we were unable to recover it. 00:25:04.059 [2024-07-12 17:14:03.592996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.059 [2024-07-12 17:14:03.593028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.059 qpair failed and we were unable to recover it. 00:25:04.059 [2024-07-12 17:14:03.593219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.059 [2024-07-12 17:14:03.593251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.059 qpair failed and we were unable to recover it. 00:25:04.059 [2024-07-12 17:14:03.593508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.059 [2024-07-12 17:14:03.593546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.059 qpair failed and we were unable to recover it. 00:25:04.059 [2024-07-12 17:14:03.593772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.059 [2024-07-12 17:14:03.593805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.059 qpair failed and we were unable to recover it. 00:25:04.059 [2024-07-12 17:14:03.593990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.059 [2024-07-12 17:14:03.594025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.059 qpair failed and we were unable to recover it. 
00:25:04.059 [2024-07-12 17:14:03.594293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.059 [2024-07-12 17:14:03.594334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.059 qpair failed and we were unable to recover it. 00:25:04.059 [2024-07-12 17:14:03.594543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.059 [2024-07-12 17:14:03.594582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.059 qpair failed and we were unable to recover it. 00:25:04.059 [2024-07-12 17:14:03.594801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.059 [2024-07-12 17:14:03.594843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.059 qpair failed and we were unable to recover it. 00:25:04.059 [2024-07-12 17:14:03.595087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.059 [2024-07-12 17:14:03.595128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.059 qpair failed and we were unable to recover it. 00:25:04.059 [2024-07-12 17:14:03.595338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.059 [2024-07-12 17:14:03.595370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.059 qpair failed and we were unable to recover it. 00:25:04.059 [2024-07-12 17:14:03.595570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.059 [2024-07-12 17:14:03.595606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.059 qpair failed and we were unable to recover it. 00:25:04.059 [2024-07-12 17:14:03.595848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.059 [2024-07-12 17:14:03.595881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.059 qpair failed and we were unable to recover it. 00:25:04.059 [2024-07-12 17:14:03.596124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.059 [2024-07-12 17:14:03.596157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.059 qpair failed and we were unable to recover it. 00:25:04.059 [2024-07-12 17:14:03.596395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.059 [2024-07-12 17:14:03.596428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.059 qpair failed and we were unable to recover it. 00:25:04.059 [2024-07-12 17:14:03.596581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.059 [2024-07-12 17:14:03.596616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.059 qpair failed and we were unable to recover it. 
00:25:04.059 [2024-07-12 17:14:03.596806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.059 [2024-07-12 17:14:03.596844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.059 qpair failed and we were unable to recover it. 00:25:04.059 [2024-07-12 17:14:03.597043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.059 [2024-07-12 17:14:03.597080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.059 qpair failed and we were unable to recover it. 00:25:04.059 [2024-07-12 17:14:03.597297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.059 [2024-07-12 17:14:03.597351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.059 qpair failed and we were unable to recover it. 00:25:04.059 [2024-07-12 17:14:03.597623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.059 [2024-07-12 17:14:03.597676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.059 qpair failed and we were unable to recover it. 00:25:04.059 [2024-07-12 17:14:03.597927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.059 [2024-07-12 17:14:03.597959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.059 qpair failed and we were unable to recover it. 00:25:04.059 [2024-07-12 17:14:03.598182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.059 [2024-07-12 17:14:03.598214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.059 qpair failed and we were unable to recover it. 00:25:04.059 [2024-07-12 17:14:03.598386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.059 [2024-07-12 17:14:03.598417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.059 qpair failed and we were unable to recover it. 00:25:04.059 [2024-07-12 17:14:03.598638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.059 [2024-07-12 17:14:03.598670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.059 qpair failed and we were unable to recover it. 00:25:04.059 [2024-07-12 17:14:03.598898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.059 [2024-07-12 17:14:03.598930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.059 qpair failed and we were unable to recover it. 00:25:04.059 [2024-07-12 17:14:03.599180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.059 [2024-07-12 17:14:03.599211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.059 qpair failed and we were unable to recover it. 
00:25:04.059 [2024-07-12 17:14:03.599442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.059 [2024-07-12 17:14:03.599473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.059 qpair failed and we were unable to recover it. 00:25:04.059 [2024-07-12 17:14:03.599639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.059 [2024-07-12 17:14:03.599670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.059 qpair failed and we were unable to recover it. 00:25:04.059 [2024-07-12 17:14:03.599860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.059 [2024-07-12 17:14:03.599892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.059 qpair failed and we were unable to recover it. 00:25:04.059 [2024-07-12 17:14:03.600120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.059 [2024-07-12 17:14:03.600151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.059 qpair failed and we were unable to recover it. 00:25:04.059 [2024-07-12 17:14:03.600299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.059 [2024-07-12 17:14:03.600329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.059 qpair failed and we were unable to recover it. 00:25:04.059 [2024-07-12 17:14:03.600569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.060 [2024-07-12 17:14:03.600599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.060 qpair failed and we were unable to recover it. 00:25:04.060 [2024-07-12 17:14:03.600838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.060 [2024-07-12 17:14:03.600871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.060 qpair failed and we were unable to recover it. 00:25:04.060 [2024-07-12 17:14:03.601001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.060 [2024-07-12 17:14:03.601030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.060 qpair failed and we were unable to recover it. 00:25:04.060 [2024-07-12 17:14:03.601212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.060 [2024-07-12 17:14:03.601242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.060 qpair failed and we were unable to recover it. 00:25:04.060 [2024-07-12 17:14:03.601396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.060 [2024-07-12 17:14:03.601426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.060 qpair failed and we were unable to recover it. 
00:25:04.060 [2024-07-12 17:14:03.601576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.060 [2024-07-12 17:14:03.601604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.060 qpair failed and we were unable to recover it. 00:25:04.060 [2024-07-12 17:14:03.601795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.060 [2024-07-12 17:14:03.601826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.060 qpair failed and we were unable to recover it. 00:25:04.060 [2024-07-12 17:14:03.602014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.060 [2024-07-12 17:14:03.602047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.060 qpair failed and we were unable to recover it. 00:25:04.060 [2024-07-12 17:14:03.602212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.060 [2024-07-12 17:14:03.602241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.060 qpair failed and we were unable to recover it. 00:25:04.060 [2024-07-12 17:14:03.602424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.060 [2024-07-12 17:14:03.602456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.060 qpair failed and we were unable to recover it. 00:25:04.060 [2024-07-12 17:14:03.602686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.060 [2024-07-12 17:14:03.602717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.060 qpair failed and we were unable to recover it. 00:25:04.060 [2024-07-12 17:14:03.602976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.060 [2024-07-12 17:14:03.603007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.060 qpair failed and we were unable to recover it. 00:25:04.060 [2024-07-12 17:14:03.603187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.060 [2024-07-12 17:14:03.603218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.060 qpair failed and we were unable to recover it. 00:25:04.060 [2024-07-12 17:14:03.603362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.060 [2024-07-12 17:14:03.603391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.060 qpair failed and we were unable to recover it. 00:25:04.060 [2024-07-12 17:14:03.603594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.060 [2024-07-12 17:14:03.603625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.060 qpair failed and we were unable to recover it. 
00:25:04.060 [2024-07-12 17:14:03.603790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.060 [2024-07-12 17:14:03.603819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.060 qpair failed and we were unable to recover it. 00:25:04.060 [2024-07-12 17:14:03.604046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.060 [2024-07-12 17:14:03.604082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.060 qpair failed and we were unable to recover it. 00:25:04.060 [2024-07-12 17:14:03.604278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.060 [2024-07-12 17:14:03.604309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.060 qpair failed and we were unable to recover it. 00:25:04.060 [2024-07-12 17:14:03.604535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.060 [2024-07-12 17:14:03.604566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.060 qpair failed and we were unable to recover it. 00:25:04.060 [2024-07-12 17:14:03.604786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.060 [2024-07-12 17:14:03.604818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.060 qpair failed and we were unable to recover it. 00:25:04.060 [2024-07-12 17:14:03.604983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.060 [2024-07-12 17:14:03.605014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.060 qpair failed and we were unable to recover it. 00:25:04.060 [2024-07-12 17:14:03.605211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.060 [2024-07-12 17:14:03.605242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.060 qpair failed and we were unable to recover it. 00:25:04.060 [2024-07-12 17:14:03.605487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.060 [2024-07-12 17:14:03.605518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.060 qpair failed and we were unable to recover it. 00:25:04.060 [2024-07-12 17:14:03.605721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.060 [2024-07-12 17:14:03.605762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.060 qpair failed and we were unable to recover it. 00:25:04.060 [2024-07-12 17:14:03.605997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.060 [2024-07-12 17:14:03.606029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.060 qpair failed and we were unable to recover it. 
00:25:04.060 [2024-07-12 17:14:03.606262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.060 [2024-07-12 17:14:03.606293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.060 qpair failed and we were unable to recover it. 00:25:04.060 [2024-07-12 17:14:03.606525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.060 [2024-07-12 17:14:03.606557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.060 qpair failed and we were unable to recover it. 00:25:04.060 [2024-07-12 17:14:03.606710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.060 [2024-07-12 17:14:03.606761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.060 qpair failed and we were unable to recover it. 00:25:04.060 [2024-07-12 17:14:03.606983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.060 [2024-07-12 17:14:03.607023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.060 qpair failed and we were unable to recover it. 00:25:04.060 [2024-07-12 17:14:03.607287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.060 [2024-07-12 17:14:03.607328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.060 qpair failed and we were unable to recover it. 00:25:04.060 [2024-07-12 17:14:03.607546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.060 [2024-07-12 17:14:03.607587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.060 qpair failed and we were unable to recover it. 00:25:04.060 [2024-07-12 17:14:03.607841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.060 [2024-07-12 17:14:03.607884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.060 qpair failed and we were unable to recover it. 00:25:04.060 [2024-07-12 17:14:03.608114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.060 [2024-07-12 17:14:03.608157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.060 qpair failed and we were unable to recover it. 00:25:04.060 [2024-07-12 17:14:03.608410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.060 [2024-07-12 17:14:03.608452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.060 qpair failed and we were unable to recover it. 00:25:04.060 [2024-07-12 17:14:03.608703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.060 [2024-07-12 17:14:03.608733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.060 qpair failed and we were unable to recover it. 
00:25:04.060 [2024-07-12 17:14:03.608906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.060 [2024-07-12 17:14:03.608937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.060 qpair failed and we were unable to recover it. 00:25:04.060 [2024-07-12 17:14:03.609165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.060 [2024-07-12 17:14:03.609196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.060 qpair failed and we were unable to recover it. 00:25:04.060 [2024-07-12 17:14:03.609330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.060 [2024-07-12 17:14:03.609359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.060 qpair failed and we were unable to recover it. 00:25:04.060 [2024-07-12 17:14:03.609587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.060 [2024-07-12 17:14:03.609618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.060 qpair failed and we were unable to recover it. 00:25:04.060 [2024-07-12 17:14:03.609848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.060 [2024-07-12 17:14:03.609879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.060 qpair failed and we were unable to recover it. 00:25:04.061 [2024-07-12 17:14:03.610061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.061 [2024-07-12 17:14:03.610089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.061 qpair failed and we were unable to recover it. 00:25:04.061 [2024-07-12 17:14:03.610285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.061 [2024-07-12 17:14:03.610317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.061 qpair failed and we were unable to recover it. 00:25:04.061 [2024-07-12 17:14:03.610496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.061 [2024-07-12 17:14:03.610528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.061 qpair failed and we were unable to recover it. 00:25:04.061 [2024-07-12 17:14:03.610714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.061 [2024-07-12 17:14:03.610755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.061 qpair failed and we were unable to recover it. 00:25:04.061 [2024-07-12 17:14:03.610959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.061 [2024-07-12 17:14:03.610989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.061 qpair failed and we were unable to recover it. 
00:25:04.061 [2024-07-12 17:14:03.611194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.061 [2024-07-12 17:14:03.611226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.061 qpair failed and we were unable to recover it. 00:25:04.061 [2024-07-12 17:14:03.611454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.061 [2024-07-12 17:14:03.611485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.061 qpair failed and we were unable to recover it. 00:25:04.061 [2024-07-12 17:14:03.611660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.061 [2024-07-12 17:14:03.611691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.061 qpair failed and we were unable to recover it. 00:25:04.061 [2024-07-12 17:14:03.611888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.061 [2024-07-12 17:14:03.611920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.061 qpair failed and we were unable to recover it. 00:25:04.061 [2024-07-12 17:14:03.612120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.061 [2024-07-12 17:14:03.612150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.061 qpair failed and we were unable to recover it. 00:25:04.061 [2024-07-12 17:14:03.612313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.061 [2024-07-12 17:14:03.612343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.061 qpair failed and we were unable to recover it. 00:25:04.061 [2024-07-12 17:14:03.612572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.061 [2024-07-12 17:14:03.612603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.061 qpair failed and we were unable to recover it. 00:25:04.061 [2024-07-12 17:14:03.612841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.061 [2024-07-12 17:14:03.612874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.061 qpair failed and we were unable to recover it. 00:25:04.061 [2024-07-12 17:14:03.613103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.061 [2024-07-12 17:14:03.613134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.061 qpair failed and we were unable to recover it. 00:25:04.061 [2024-07-12 17:14:03.613356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.061 [2024-07-12 17:14:03.613388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.061 qpair failed and we were unable to recover it. 
00:25:04.061 [2024-07-12 17:14:03.613612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.061 [2024-07-12 17:14:03.613642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.061 qpair failed and we were unable to recover it. 00:25:04.061 [2024-07-12 17:14:03.613817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.061 [2024-07-12 17:14:03.613850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.061 qpair failed and we were unable to recover it. 00:25:04.061 [2024-07-12 17:14:03.614084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.061 [2024-07-12 17:14:03.614116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.061 qpair failed and we were unable to recover it. 00:25:04.061 [2024-07-12 17:14:03.614313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.061 [2024-07-12 17:14:03.614344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.061 qpair failed and we were unable to recover it. 00:25:04.061 [2024-07-12 17:14:03.614537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.061 [2024-07-12 17:14:03.614568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.061 qpair failed and we were unable to recover it. 00:25:04.061 [2024-07-12 17:14:03.614766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.061 [2024-07-12 17:14:03.614797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.061 qpair failed and we were unable to recover it. 00:25:04.061 [2024-07-12 17:14:03.614992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.061 [2024-07-12 17:14:03.615024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.061 qpair failed and we were unable to recover it. 00:25:04.061 [2024-07-12 17:14:03.615219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.061 [2024-07-12 17:14:03.615249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.061 qpair failed and we were unable to recover it. 00:25:04.061 [2024-07-12 17:14:03.615482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.061 [2024-07-12 17:14:03.615514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.061 qpair failed and we were unable to recover it. 00:25:04.061 [2024-07-12 17:14:03.615703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.061 [2024-07-12 17:14:03.615735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.061 qpair failed and we were unable to recover it. 
00:25:04.061 [2024-07-12 17:14:03.615945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.061 [2024-07-12 17:14:03.615976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.061 qpair failed and we were unable to recover it. 00:25:04.061 [2024-07-12 17:14:03.616197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.061 [2024-07-12 17:14:03.616228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.061 qpair failed and we were unable to recover it. 00:25:04.061 [2024-07-12 17:14:03.616419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.061 [2024-07-12 17:14:03.616451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.061 qpair failed and we were unable to recover it. 00:25:04.061 [2024-07-12 17:14:03.616610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.061 [2024-07-12 17:14:03.616639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.061 qpair failed and we were unable to recover it. 00:25:04.061 [2024-07-12 17:14:03.616877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.061 [2024-07-12 17:14:03.616909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.061 qpair failed and we were unable to recover it. 00:25:04.061 [2024-07-12 17:14:03.617155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.061 [2024-07-12 17:14:03.617186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.061 qpair failed and we were unable to recover it. 00:25:04.061 [2024-07-12 17:14:03.617380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.061 [2024-07-12 17:14:03.617412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.061 qpair failed and we were unable to recover it. 00:25:04.061 [2024-07-12 17:14:03.617536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.061 [2024-07-12 17:14:03.617564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.061 qpair failed and we were unable to recover it. 00:25:04.061 [2024-07-12 17:14:03.617788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.061 [2024-07-12 17:14:03.617820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.061 qpair failed and we were unable to recover it. 00:25:04.061 [2024-07-12 17:14:03.617996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.061 [2024-07-12 17:14:03.618027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.061 qpair failed and we were unable to recover it. 
00:25:04.061 [2024-07-12 17:14:03.618210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.061 [2024-07-12 17:14:03.618241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.061 qpair failed and we were unable to recover it. 00:25:04.061 [2024-07-12 17:14:03.618416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.061 [2024-07-12 17:14:03.618447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.061 qpair failed and we were unable to recover it. 00:25:04.061 [2024-07-12 17:14:03.618621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.061 [2024-07-12 17:14:03.618651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.061 qpair failed and we were unable to recover it. 00:25:04.061 [2024-07-12 17:14:03.618840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.061 [2024-07-12 17:14:03.618871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.061 qpair failed and we were unable to recover it. 00:25:04.061 [2024-07-12 17:14:03.619111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.061 [2024-07-12 17:14:03.619142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.061 qpair failed and we were unable to recover it. 00:25:04.062 [2024-07-12 17:14:03.619320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.062 [2024-07-12 17:14:03.619351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.062 qpair failed and we were unable to recover it. 00:25:04.062 [2024-07-12 17:14:03.619532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.062 [2024-07-12 17:14:03.619563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.062 qpair failed and we were unable to recover it. 00:25:04.062 [2024-07-12 17:14:03.619770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.062 [2024-07-12 17:14:03.619802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.062 qpair failed and we were unable to recover it. 00:25:04.062 [2024-07-12 17:14:03.619982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.062 [2024-07-12 17:14:03.620016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.062 qpair failed and we were unable to recover it. 00:25:04.062 [2024-07-12 17:14:03.620208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.062 [2024-07-12 17:14:03.620240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.062 qpair failed and we were unable to recover it. 
00:25:04.062 [2024-07-12 17:14:03.620473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.062 [2024-07-12 17:14:03.620504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.062 qpair failed and we were unable to recover it. 00:25:04.062 [2024-07-12 17:14:03.620649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.062 [2024-07-12 17:14:03.620678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.062 qpair failed and we were unable to recover it. 00:25:04.062 [2024-07-12 17:14:03.620907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.062 [2024-07-12 17:14:03.620938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.062 qpair failed and we were unable to recover it. 00:25:04.062 [2024-07-12 17:14:03.621081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.062 [2024-07-12 17:14:03.621109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.062 qpair failed and we were unable to recover it. 00:25:04.062 [2024-07-12 17:14:03.621251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.062 [2024-07-12 17:14:03.621280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.062 qpair failed and we were unable to recover it. 00:25:04.062 [2024-07-12 17:14:03.621459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.062 [2024-07-12 17:14:03.621489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.062 qpair failed and we were unable to recover it. 00:25:04.062 [2024-07-12 17:14:03.621665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.062 [2024-07-12 17:14:03.621693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.062 qpair failed and we were unable to recover it. 00:25:04.062 [2024-07-12 17:14:03.621881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.062 [2024-07-12 17:14:03.621913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.062 qpair failed and we were unable to recover it. 00:25:04.062 [2024-07-12 17:14:03.622139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.062 [2024-07-12 17:14:03.622170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.062 qpair failed and we were unable to recover it. 00:25:04.062 [2024-07-12 17:14:03.622337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.062 [2024-07-12 17:14:03.622368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.062 qpair failed and we were unable to recover it. 
00:25:04.062 [2024-07-12 17:14:03.622606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.062 [2024-07-12 17:14:03.622638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.062 qpair failed and we were unable to recover it. 00:25:04.062 [2024-07-12 17:14:03.622826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.062 [2024-07-12 17:14:03.622858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.062 qpair failed and we were unable to recover it. 00:25:04.062 [2024-07-12 17:14:03.623015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.062 [2024-07-12 17:14:03.623044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.062 qpair failed and we were unable to recover it. 00:25:04.062 [2024-07-12 17:14:03.623177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.062 [2024-07-12 17:14:03.623206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.062 qpair failed and we were unable to recover it. 00:25:04.062 [2024-07-12 17:14:03.623397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.062 [2024-07-12 17:14:03.623429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.062 qpair failed and we were unable to recover it. 00:25:04.062 [2024-07-12 17:14:03.623620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.062 [2024-07-12 17:14:03.623651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.062 qpair failed and we were unable to recover it. 00:25:04.062 [2024-07-12 17:14:03.623771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.062 [2024-07-12 17:14:03.623800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.062 qpair failed and we were unable to recover it. 00:25:04.062 [2024-07-12 17:14:03.624024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.062 [2024-07-12 17:14:03.624055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.062 qpair failed and we were unable to recover it. 00:25:04.062 [2024-07-12 17:14:03.624214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.062 [2024-07-12 17:14:03.624243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.062 qpair failed and we were unable to recover it. 00:25:04.062 [2024-07-12 17:14:03.624437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.062 [2024-07-12 17:14:03.624469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.062 qpair failed and we were unable to recover it. 
00:25:04.067 [2024-07-12 17:14:03.668999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.067 [2024-07-12 17:14:03.669029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.067 qpair failed and we were unable to recover it. 00:25:04.067 [2024-07-12 17:14:03.669192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.067 [2024-07-12 17:14:03.669223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.067 qpair failed and we were unable to recover it. 00:25:04.067 [2024-07-12 17:14:03.669451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.067 [2024-07-12 17:14:03.669482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.067 qpair failed and we were unable to recover it. 00:25:04.067 [2024-07-12 17:14:03.669706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.067 [2024-07-12 17:14:03.669745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.067 qpair failed and we were unable to recover it. 00:25:04.067 [2024-07-12 17:14:03.669939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.067 [2024-07-12 17:14:03.669970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.067 qpair failed and we were unable to recover it. 00:25:04.068 [2024-07-12 17:14:03.670187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.068 [2024-07-12 17:14:03.670218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.068 qpair failed and we were unable to recover it. 00:25:04.068 [2024-07-12 17:14:03.670346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.068 [2024-07-12 17:14:03.670375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.068 qpair failed and we were unable to recover it. 00:25:04.068 [2024-07-12 17:14:03.670540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.068 [2024-07-12 17:14:03.670571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.068 qpair failed and we were unable to recover it. 00:25:04.068 [2024-07-12 17:14:03.670712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.068 [2024-07-12 17:14:03.670751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.068 qpair failed and we were unable to recover it. 00:25:04.068 [2024-07-12 17:14:03.670935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.068 [2024-07-12 17:14:03.670966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.068 qpair failed and we were unable to recover it. 
00:25:04.068 [2024-07-12 17:14:03.671171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.068 [2024-07-12 17:14:03.671202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.068 qpair failed and we were unable to recover it. 00:25:04.068 [2024-07-12 17:14:03.671415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.068 [2024-07-12 17:14:03.671446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.068 qpair failed and we were unable to recover it. 00:25:04.068 [2024-07-12 17:14:03.671563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.068 [2024-07-12 17:14:03.671594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.068 qpair failed and we were unable to recover it. 00:25:04.068 [2024-07-12 17:14:03.671775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.068 [2024-07-12 17:14:03.671807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.068 qpair failed and we were unable to recover it. 00:25:04.068 [2024-07-12 17:14:03.671993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.068 [2024-07-12 17:14:03.672023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.068 qpair failed and we were unable to recover it. 00:25:04.068 [2024-07-12 17:14:03.672232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.068 [2024-07-12 17:14:03.672262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.068 qpair failed and we were unable to recover it. 00:25:04.068 [2024-07-12 17:14:03.672437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.068 [2024-07-12 17:14:03.672468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.068 qpair failed and we were unable to recover it. 00:25:04.068 [2024-07-12 17:14:03.672633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.068 [2024-07-12 17:14:03.672663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.068 qpair failed and we were unable to recover it. 00:25:04.068 [2024-07-12 17:14:03.672791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.068 [2024-07-12 17:14:03.672826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.068 qpair failed and we were unable to recover it. 00:25:04.068 [2024-07-12 17:14:03.672987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.068 [2024-07-12 17:14:03.673019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.068 qpair failed and we were unable to recover it. 
00:25:04.068 [2024-07-12 17:14:03.673186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.068 [2024-07-12 17:14:03.673218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.068 qpair failed and we were unable to recover it. 00:25:04.068 [2024-07-12 17:14:03.673342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.068 [2024-07-12 17:14:03.673371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.068 qpair failed and we were unable to recover it. 00:25:04.068 [2024-07-12 17:14:03.673533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.068 [2024-07-12 17:14:03.673564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.068 qpair failed and we were unable to recover it. 00:25:04.068 [2024-07-12 17:14:03.673757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.068 [2024-07-12 17:14:03.673789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.068 qpair failed and we were unable to recover it. 00:25:04.068 [2024-07-12 17:14:03.674015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.068 [2024-07-12 17:14:03.674047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.068 qpair failed and we were unable to recover it. 00:25:04.068 [2024-07-12 17:14:03.674229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.068 [2024-07-12 17:14:03.674260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.068 qpair failed and we were unable to recover it. 00:25:04.068 [2024-07-12 17:14:03.674468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.068 [2024-07-12 17:14:03.674499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.068 qpair failed and we were unable to recover it. 00:25:04.068 [2024-07-12 17:14:03.674673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.068 [2024-07-12 17:14:03.674703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.068 qpair failed and we were unable to recover it. 00:25:04.068 [2024-07-12 17:14:03.674889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.068 [2024-07-12 17:14:03.674919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.068 qpair failed and we were unable to recover it. 00:25:04.068 [2024-07-12 17:14:03.675086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.068 [2024-07-12 17:14:03.675117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.068 qpair failed and we were unable to recover it. 
00:25:04.068 [2024-07-12 17:14:03.675282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.068 [2024-07-12 17:14:03.675312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.068 qpair failed and we were unable to recover it. 00:25:04.068 [2024-07-12 17:14:03.675490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.068 [2024-07-12 17:14:03.675521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.068 qpair failed and we were unable to recover it. 00:25:04.068 [2024-07-12 17:14:03.675748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.068 [2024-07-12 17:14:03.675780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.068 qpair failed and we were unable to recover it. 00:25:04.068 [2024-07-12 17:14:03.675962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.068 [2024-07-12 17:14:03.675992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.068 qpair failed and we were unable to recover it. 00:25:04.068 [2024-07-12 17:14:03.676204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.068 [2024-07-12 17:14:03.676235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.068 qpair failed and we were unable to recover it. 00:25:04.068 [2024-07-12 17:14:03.676407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.068 [2024-07-12 17:14:03.676437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.068 qpair failed and we were unable to recover it. 00:25:04.068 [2024-07-12 17:14:03.676649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.068 [2024-07-12 17:14:03.676680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.068 qpair failed and we were unable to recover it. 00:25:04.068 [2024-07-12 17:14:03.676853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.068 [2024-07-12 17:14:03.676885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.068 qpair failed and we were unable to recover it. 00:25:04.068 [2024-07-12 17:14:03.677100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.068 [2024-07-12 17:14:03.677131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.068 qpair failed and we were unable to recover it. 00:25:04.068 [2024-07-12 17:14:03.677283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.068 [2024-07-12 17:14:03.677313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.068 qpair failed and we were unable to recover it. 
00:25:04.068 [2024-07-12 17:14:03.677460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.068 [2024-07-12 17:14:03.677489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.068 qpair failed and we were unable to recover it. 00:25:04.068 [2024-07-12 17:14:03.677630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.068 [2024-07-12 17:14:03.677661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.068 qpair failed and we were unable to recover it. 00:25:04.068 [2024-07-12 17:14:03.677803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.068 [2024-07-12 17:14:03.677833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.068 qpair failed and we were unable to recover it. 00:25:04.068 [2024-07-12 17:14:03.678010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.068 [2024-07-12 17:14:03.678040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.068 qpair failed and we were unable to recover it. 00:25:04.068 [2024-07-12 17:14:03.678235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.068 [2024-07-12 17:14:03.678265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.068 qpair failed and we were unable to recover it. 00:25:04.069 [2024-07-12 17:14:03.678478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.069 [2024-07-12 17:14:03.678510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.069 qpair failed and we were unable to recover it. 00:25:04.069 [2024-07-12 17:14:03.678646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.069 [2024-07-12 17:14:03.678675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.069 qpair failed and we were unable to recover it. 00:25:04.069 [2024-07-12 17:14:03.678881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.069 [2024-07-12 17:14:03.678913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.069 qpair failed and we were unable to recover it. 00:25:04.069 [2024-07-12 17:14:03.679092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.069 [2024-07-12 17:14:03.679122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.069 qpair failed and we were unable to recover it. 00:25:04.069 [2024-07-12 17:14:03.679278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.069 [2024-07-12 17:14:03.679308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.069 qpair failed and we were unable to recover it. 
00:25:04.069 [2024-07-12 17:14:03.679486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.069 [2024-07-12 17:14:03.679517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.069 qpair failed and we were unable to recover it. 00:25:04.069 [2024-07-12 17:14:03.679697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.069 [2024-07-12 17:14:03.679727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.069 qpair failed and we were unable to recover it. 00:25:04.069 [2024-07-12 17:14:03.679871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.069 [2024-07-12 17:14:03.679900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.069 qpair failed and we were unable to recover it. 00:25:04.069 [2024-07-12 17:14:03.680072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.069 [2024-07-12 17:14:03.680103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.069 qpair failed and we were unable to recover it. 00:25:04.069 [2024-07-12 17:14:03.680226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.069 [2024-07-12 17:14:03.680253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.069 qpair failed and we were unable to recover it. 00:25:04.069 [2024-07-12 17:14:03.680386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.069 [2024-07-12 17:14:03.680415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.069 qpair failed and we were unable to recover it. 00:25:04.069 [2024-07-12 17:14:03.680589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.069 [2024-07-12 17:14:03.680621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.069 qpair failed and we were unable to recover it. 00:25:04.069 [2024-07-12 17:14:03.680764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.069 [2024-07-12 17:14:03.680795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.069 qpair failed and we were unable to recover it. 00:25:04.069 [2024-07-12 17:14:03.680960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.069 [2024-07-12 17:14:03.680996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.069 qpair failed and we were unable to recover it. 00:25:04.069 [2024-07-12 17:14:03.681171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.069 [2024-07-12 17:14:03.681202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.069 qpair failed and we were unable to recover it. 
00:25:04.069 [2024-07-12 17:14:03.681371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.069 [2024-07-12 17:14:03.681402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.069 qpair failed and we were unable to recover it. 00:25:04.069 [2024-07-12 17:14:03.681526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.069 [2024-07-12 17:14:03.681554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.069 qpair failed and we were unable to recover it. 00:25:04.069 [2024-07-12 17:14:03.681673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.069 [2024-07-12 17:14:03.681703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.069 qpair failed and we were unable to recover it. 00:25:04.069 [2024-07-12 17:14:03.681930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.069 [2024-07-12 17:14:03.681961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.069 qpair failed and we were unable to recover it. 00:25:04.069 [2024-07-12 17:14:03.682136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.069 [2024-07-12 17:14:03.682166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.069 qpair failed and we were unable to recover it. 00:25:04.069 [2024-07-12 17:14:03.682342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.069 [2024-07-12 17:14:03.682372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.069 qpair failed and we were unable to recover it. 00:25:04.069 [2024-07-12 17:14:03.682541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.069 [2024-07-12 17:14:03.682571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.069 qpair failed and we were unable to recover it. 00:25:04.069 [2024-07-12 17:14:03.682711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.069 [2024-07-12 17:14:03.682749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.069 qpair failed and we were unable to recover it. 00:25:04.069 [2024-07-12 17:14:03.682927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.069 [2024-07-12 17:14:03.682958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.069 qpair failed and we were unable to recover it. 00:25:04.069 [2024-07-12 17:14:03.683138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.069 [2024-07-12 17:14:03.683168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.069 qpair failed and we were unable to recover it. 
00:25:04.069 [2024-07-12 17:14:03.683376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.069 [2024-07-12 17:14:03.683406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.069 qpair failed and we were unable to recover it. 00:25:04.069 [2024-07-12 17:14:03.683543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.069 [2024-07-12 17:14:03.683571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.069 qpair failed and we were unable to recover it. 00:25:04.069 [2024-07-12 17:14:03.683752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.069 [2024-07-12 17:14:03.683783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.069 qpair failed and we were unable to recover it. 00:25:04.069 [2024-07-12 17:14:03.683920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.069 [2024-07-12 17:14:03.683950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.069 qpair failed and we were unable to recover it. 00:25:04.069 [2024-07-12 17:14:03.684163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.069 [2024-07-12 17:14:03.684193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.069 qpair failed and we were unable to recover it. 00:25:04.069 [2024-07-12 17:14:03.684373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.069 [2024-07-12 17:14:03.684404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.069 qpair failed and we were unable to recover it. 00:25:04.069 [2024-07-12 17:14:03.684573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.069 [2024-07-12 17:14:03.684604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.069 qpair failed and we were unable to recover it. 00:25:04.069 [2024-07-12 17:14:03.684756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.069 [2024-07-12 17:14:03.684786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.069 qpair failed and we were unable to recover it. 00:25:04.069 [2024-07-12 17:14:03.684918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.069 [2024-07-12 17:14:03.684949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.069 qpair failed and we were unable to recover it. 00:25:04.069 [2024-07-12 17:14:03.685164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.069 [2024-07-12 17:14:03.685195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.069 qpair failed and we were unable to recover it. 
00:25:04.069 [2024-07-12 17:14:03.685373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.069 [2024-07-12 17:14:03.685404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.069 qpair failed and we were unable to recover it. 00:25:04.069 [2024-07-12 17:14:03.685573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.069 [2024-07-12 17:14:03.685602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.069 qpair failed and we were unable to recover it. 00:25:04.069 [2024-07-12 17:14:03.685766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.069 [2024-07-12 17:14:03.685797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.069 qpair failed and we were unable to recover it. 00:25:04.069 [2024-07-12 17:14:03.685935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.069 [2024-07-12 17:14:03.685965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.069 qpair failed and we were unable to recover it. 00:25:04.069 [2024-07-12 17:14:03.686110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.069 [2024-07-12 17:14:03.686142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.069 qpair failed and we were unable to recover it. 00:25:04.069 [2024-07-12 17:14:03.686284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.070 [2024-07-12 17:14:03.686315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.070 qpair failed and we were unable to recover it. 00:25:04.070 [2024-07-12 17:14:03.686527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.070 [2024-07-12 17:14:03.686558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.070 qpair failed and we were unable to recover it. 00:25:04.070 [2024-07-12 17:14:03.686686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.070 [2024-07-12 17:14:03.686714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.070 qpair failed and we were unable to recover it. 00:25:04.070 [2024-07-12 17:14:03.686854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.070 [2024-07-12 17:14:03.686885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.070 qpair failed and we were unable to recover it. 00:25:04.070 [2024-07-12 17:14:03.687097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.070 [2024-07-12 17:14:03.687127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.070 qpair failed and we were unable to recover it. 
00:25:04.070 [2024-07-12 17:14:03.687269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.070 [2024-07-12 17:14:03.687300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.070 qpair failed and we were unable to recover it. 00:25:04.070 [2024-07-12 17:14:03.687513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.070 [2024-07-12 17:14:03.687545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.070 qpair failed and we were unable to recover it. 00:25:04.070 [2024-07-12 17:14:03.687697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.070 [2024-07-12 17:14:03.687727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.070 qpair failed and we were unable to recover it. 00:25:04.070 [2024-07-12 17:14:03.687967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.070 [2024-07-12 17:14:03.687998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.070 qpair failed and we were unable to recover it. 00:25:04.070 [2024-07-12 17:14:03.688139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.070 [2024-07-12 17:14:03.688167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.070 qpair failed and we were unable to recover it. 00:25:04.070 [2024-07-12 17:14:03.688317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.070 [2024-07-12 17:14:03.688347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.070 qpair failed and we were unable to recover it. 00:25:04.070 [2024-07-12 17:14:03.688578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.070 [2024-07-12 17:14:03.688609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.070 qpair failed and we were unable to recover it. 00:25:04.070 [2024-07-12 17:14:03.688816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.070 [2024-07-12 17:14:03.688858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.070 qpair failed and we were unable to recover it. 00:25:04.070 [2024-07-12 17:14:03.689077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.070 [2024-07-12 17:14:03.689113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.070 qpair failed and we were unable to recover it. 00:25:04.070 [2024-07-12 17:14:03.689295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.070 [2024-07-12 17:14:03.689327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.070 qpair failed and we were unable to recover it. 
00:25:04.070 [2024-07-12 17:14:03.689557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.070 [2024-07-12 17:14:03.689589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.070 qpair failed and we were unable to recover it. 00:25:04.070 [2024-07-12 17:14:03.689766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.070 [2024-07-12 17:14:03.689795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.070 qpair failed and we were unable to recover it. 00:25:04.070 [2024-07-12 17:14:03.689957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.070 [2024-07-12 17:14:03.689987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.070 qpair failed and we were unable to recover it. 00:25:04.070 [2024-07-12 17:14:03.690193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.070 [2024-07-12 17:14:03.690224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.070 qpair failed and we were unable to recover it. 00:25:04.070 [2024-07-12 17:14:03.690381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.070 [2024-07-12 17:14:03.690412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.070 qpair failed and we were unable to recover it. 00:25:04.070 [2024-07-12 17:14:03.690630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.070 [2024-07-12 17:14:03.690661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.070 qpair failed and we were unable to recover it. 00:25:04.070 [2024-07-12 17:14:03.690826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.070 [2024-07-12 17:14:03.690858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.070 qpair failed and we were unable to recover it. 00:25:04.070 [2024-07-12 17:14:03.690978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.070 [2024-07-12 17:14:03.691008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.070 qpair failed and we were unable to recover it. 00:25:04.070 [2024-07-12 17:14:03.691168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.070 [2024-07-12 17:14:03.691200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.070 qpair failed and we were unable to recover it. 00:25:04.070 [2024-07-12 17:14:03.691368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.070 [2024-07-12 17:14:03.691400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.070 qpair failed and we were unable to recover it. 
00:25:04.070 [2024-07-12 17:14:03.691531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.070 [2024-07-12 17:14:03.691561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.070 qpair failed and we were unable to recover it. 00:25:04.070 [2024-07-12 17:14:03.691776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.070 [2024-07-12 17:14:03.691808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.070 qpair failed and we were unable to recover it. 00:25:04.070 [2024-07-12 17:14:03.691952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.070 [2024-07-12 17:14:03.691984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.070 qpair failed and we were unable to recover it. 00:25:04.070 [2024-07-12 17:14:03.692161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.070 [2024-07-12 17:14:03.692192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.070 qpair failed and we were unable to recover it. 00:25:04.070 [2024-07-12 17:14:03.692329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.070 [2024-07-12 17:14:03.692359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.070 qpair failed and we were unable to recover it. 00:25:04.070 [2024-07-12 17:14:03.692537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.070 [2024-07-12 17:14:03.692569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.070 qpair failed and we were unable to recover it. 00:25:04.070 [2024-07-12 17:14:03.692764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.070 [2024-07-12 17:14:03.692796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.070 qpair failed and we were unable to recover it. 00:25:04.070 [2024-07-12 17:14:03.693011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.070 [2024-07-12 17:14:03.693042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.070 qpair failed and we were unable to recover it. 00:25:04.070 [2024-07-12 17:14:03.693229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.070 [2024-07-12 17:14:03.693260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.070 qpair failed and we were unable to recover it. 00:25:04.070 [2024-07-12 17:14:03.693470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.070 [2024-07-12 17:14:03.693501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.070 qpair failed and we were unable to recover it. 
00:25:04.070 [2024-07-12 17:14:03.693685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.070 [2024-07-12 17:14:03.693716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.070 qpair failed and we were unable to recover it. 00:25:04.070 [2024-07-12 17:14:03.693867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.070 [2024-07-12 17:14:03.693898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.070 qpair failed and we were unable to recover it. 00:25:04.070 [2024-07-12 17:14:03.694057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.070 [2024-07-12 17:14:03.694088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.070 qpair failed and we were unable to recover it. 00:25:04.070 [2024-07-12 17:14:03.694306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.070 [2024-07-12 17:14:03.694338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.070 qpair failed and we were unable to recover it. 00:25:04.070 [2024-07-12 17:14:03.694489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.070 [2024-07-12 17:14:03.694519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.070 qpair failed and we were unable to recover it. 00:25:04.070 [2024-07-12 17:14:03.694735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.071 [2024-07-12 17:14:03.694774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.071 qpair failed and we were unable to recover it. 00:25:04.071 [2024-07-12 17:14:03.694949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.071 [2024-07-12 17:14:03.694980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.071 qpair failed and we were unable to recover it. 00:25:04.071 [2024-07-12 17:14:03.695160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.071 [2024-07-12 17:14:03.695190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.071 qpair failed and we were unable to recover it. 00:25:04.071 [2024-07-12 17:14:03.695362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.071 [2024-07-12 17:14:03.695393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.071 qpair failed and we were unable to recover it. 00:25:04.071 [2024-07-12 17:14:03.695565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.071 [2024-07-12 17:14:03.695595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.071 qpair failed and we were unable to recover it. 
00:25:04.071 [2024-07-12 17:14:03.695735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.071 [2024-07-12 17:14:03.695773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420
00:25:04.071 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every reconnect attempt logged between 2024-07-12 17:14:03.695 and 17:14:03.738 (console time 00:25:04.071 - 00:25:04.362) ...]
00:25:04.362 [2024-07-12 17:14:03.738573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.363 [2024-07-12 17:14:03.738603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.363 qpair failed and we were unable to recover it. 00:25:04.363 [2024-07-12 17:14:03.738771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.363 [2024-07-12 17:14:03.738803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.363 qpair failed and we were unable to recover it. 00:25:04.363 [2024-07-12 17:14:03.738968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.363 [2024-07-12 17:14:03.739000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.363 qpair failed and we were unable to recover it. 00:25:04.363 [2024-07-12 17:14:03.739207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.363 [2024-07-12 17:14:03.739239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.363 qpair failed and we were unable to recover it. 00:25:04.363 [2024-07-12 17:14:03.739409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.363 [2024-07-12 17:14:03.739441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.363 qpair failed and we were unable to recover it. 00:25:04.363 [2024-07-12 17:14:03.739608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.363 [2024-07-12 17:14:03.739639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.363 qpair failed and we were unable to recover it. 00:25:04.363 [2024-07-12 17:14:03.739801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.363 [2024-07-12 17:14:03.739833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.363 qpair failed and we were unable to recover it. 00:25:04.363 [2024-07-12 17:14:03.740027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.363 [2024-07-12 17:14:03.740057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.363 qpair failed and we were unable to recover it. 00:25:04.363 [2024-07-12 17:14:03.740255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.363 [2024-07-12 17:14:03.740285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.363 qpair failed and we were unable to recover it. 00:25:04.363 [2024-07-12 17:14:03.740450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.363 [2024-07-12 17:14:03.740481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.363 qpair failed and we were unable to recover it. 
00:25:04.363 [2024-07-12 17:14:03.740612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.363 [2024-07-12 17:14:03.740643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.363 qpair failed and we were unable to recover it. 00:25:04.363 [2024-07-12 17:14:03.740859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.363 [2024-07-12 17:14:03.740891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.363 qpair failed and we were unable to recover it. 00:25:04.363 [2024-07-12 17:14:03.741073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.363 [2024-07-12 17:14:03.741103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.363 qpair failed and we were unable to recover it. 00:25:04.363 [2024-07-12 17:14:03.741282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.363 [2024-07-12 17:14:03.741312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.363 qpair failed and we were unable to recover it. 00:25:04.363 [2024-07-12 17:14:03.741446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.363 [2024-07-12 17:14:03.741475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.363 qpair failed and we were unable to recover it. 00:25:04.363 [2024-07-12 17:14:03.741649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.363 [2024-07-12 17:14:03.741679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.363 qpair failed and we were unable to recover it. 00:25:04.363 [2024-07-12 17:14:03.741839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.363 [2024-07-12 17:14:03.741870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.363 qpair failed and we were unable to recover it. 00:25:04.363 [2024-07-12 17:14:03.742038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.363 [2024-07-12 17:14:03.742069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.363 qpair failed and we were unable to recover it. 00:25:04.363 [2024-07-12 17:14:03.742229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.363 [2024-07-12 17:14:03.742260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.363 qpair failed and we were unable to recover it. 00:25:04.363 [2024-07-12 17:14:03.742428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.363 [2024-07-12 17:14:03.742458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.363 qpair failed and we were unable to recover it. 
00:25:04.363 [2024-07-12 17:14:03.742664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.363 [2024-07-12 17:14:03.742696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.363 qpair failed and we were unable to recover it. 00:25:04.363 [2024-07-12 17:14:03.742862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.363 [2024-07-12 17:14:03.742894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.363 qpair failed and we were unable to recover it. 00:25:04.363 [2024-07-12 17:14:03.743045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.363 [2024-07-12 17:14:03.743076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.363 qpair failed and we were unable to recover it. 00:25:04.363 [2024-07-12 17:14:03.743278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.363 [2024-07-12 17:14:03.743308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.363 qpair failed and we were unable to recover it. 00:25:04.363 [2024-07-12 17:14:03.743483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.363 [2024-07-12 17:14:03.743513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.363 qpair failed and we were unable to recover it. 00:25:04.363 [2024-07-12 17:14:03.743645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.363 [2024-07-12 17:14:03.743679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.363 qpair failed and we were unable to recover it. 00:25:04.363 [2024-07-12 17:14:03.743851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.363 [2024-07-12 17:14:03.743883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.363 qpair failed and we were unable to recover it. 00:25:04.363 [2024-07-12 17:14:03.744055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.363 [2024-07-12 17:14:03.744086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.363 qpair failed and we were unable to recover it. 00:25:04.363 [2024-07-12 17:14:03.744287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.363 [2024-07-12 17:14:03.744317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.363 qpair failed and we were unable to recover it. 00:25:04.363 [2024-07-12 17:14:03.744477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.363 [2024-07-12 17:14:03.744507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.363 qpair failed and we were unable to recover it. 
00:25:04.363 [2024-07-12 17:14:03.744671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.363 [2024-07-12 17:14:03.744702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.363 qpair failed and we were unable to recover it. 00:25:04.363 [2024-07-12 17:14:03.744853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.363 [2024-07-12 17:14:03.744884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.363 qpair failed and we were unable to recover it. 00:25:04.363 [2024-07-12 17:14:03.745086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.363 [2024-07-12 17:14:03.745117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.363 qpair failed and we were unable to recover it. 00:25:04.363 [2024-07-12 17:14:03.745241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.363 [2024-07-12 17:14:03.745271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.363 qpair failed and we were unable to recover it. 00:25:04.363 [2024-07-12 17:14:03.745418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.363 [2024-07-12 17:14:03.745449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.363 qpair failed and we were unable to recover it. 00:25:04.363 [2024-07-12 17:14:03.745616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.363 [2024-07-12 17:14:03.745647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.363 qpair failed and we were unable to recover it. 00:25:04.363 [2024-07-12 17:14:03.745811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.363 [2024-07-12 17:14:03.745841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.363 qpair failed and we were unable to recover it. 00:25:04.363 [2024-07-12 17:14:03.745983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.363 [2024-07-12 17:14:03.746013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.363 qpair failed and we were unable to recover it. 00:25:04.363 [2024-07-12 17:14:03.746219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.363 [2024-07-12 17:14:03.746250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.363 qpair failed and we were unable to recover it. 00:25:04.363 [2024-07-12 17:14:03.746426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.363 [2024-07-12 17:14:03.746457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.363 qpair failed and we were unable to recover it. 
00:25:04.363 [2024-07-12 17:14:03.746582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.363 [2024-07-12 17:14:03.746613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-07-12 17:14:03.746784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-07-12 17:14:03.746815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-07-12 17:14:03.746978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-07-12 17:14:03.747010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-07-12 17:14:03.747184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-07-12 17:14:03.747215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-07-12 17:14:03.747406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-07-12 17:14:03.747437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-07-12 17:14:03.747601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-07-12 17:14:03.747643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-07-12 17:14:03.747803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-07-12 17:14:03.747834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-07-12 17:14:03.748010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-07-12 17:14:03.748042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-07-12 17:14:03.748169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-07-12 17:14:03.748198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-07-12 17:14:03.748321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-07-12 17:14:03.748350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 
00:25:04.364 [2024-07-12 17:14:03.748514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-07-12 17:14:03.748545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-07-12 17:14:03.748701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-07-12 17:14:03.748732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-07-12 17:14:03.748951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-07-12 17:14:03.748982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-07-12 17:14:03.749178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-07-12 17:14:03.749208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-07-12 17:14:03.749408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-07-12 17:14:03.749439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-07-12 17:14:03.749673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-07-12 17:14:03.749703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-07-12 17:14:03.749880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-07-12 17:14:03.749911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-07-12 17:14:03.750056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-07-12 17:14:03.750090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-07-12 17:14:03.750242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-07-12 17:14:03.750273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-07-12 17:14:03.750467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-07-12 17:14:03.750498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 
00:25:04.364 [2024-07-12 17:14:03.750754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-07-12 17:14:03.750794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-07-12 17:14:03.750924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-07-12 17:14:03.750954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-07-12 17:14:03.751161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-07-12 17:14:03.751192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-07-12 17:14:03.751402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-07-12 17:14:03.751432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-07-12 17:14:03.751612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-07-12 17:14:03.751642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-07-12 17:14:03.751769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-07-12 17:14:03.751805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-07-12 17:14:03.751919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-07-12 17:14:03.751947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-07-12 17:14:03.752121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-07-12 17:14:03.752152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-07-12 17:14:03.752318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-07-12 17:14:03.752349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-07-12 17:14:03.752543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-07-12 17:14:03.752574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 
00:25:04.364 [2024-07-12 17:14:03.752757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-07-12 17:14:03.752788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-07-12 17:14:03.752940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-07-12 17:14:03.752970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-07-12 17:14:03.753094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-07-12 17:14:03.753125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-07-12 17:14:03.753260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.364 [2024-07-12 17:14:03.753288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.364 qpair failed and we were unable to recover it. 00:25:04.364 [2024-07-12 17:14:03.753414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-07-12 17:14:03.753443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-07-12 17:14:03.753642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-07-12 17:14:03.753672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-07-12 17:14:03.753819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-07-12 17:14:03.753851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-07-12 17:14:03.754022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-07-12 17:14:03.754052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-07-12 17:14:03.754209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-07-12 17:14:03.754239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-07-12 17:14:03.754448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-07-12 17:14:03.754480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 
00:25:04.365 [2024-07-12 17:14:03.754597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-07-12 17:14:03.754627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-07-12 17:14:03.754790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-07-12 17:14:03.754820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-07-12 17:14:03.754978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-07-12 17:14:03.755008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-07-12 17:14:03.755183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-07-12 17:14:03.755214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-07-12 17:14:03.755345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-07-12 17:14:03.755376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-07-12 17:14:03.755550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-07-12 17:14:03.755580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-07-12 17:14:03.755703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-07-12 17:14:03.755734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-07-12 17:14:03.755918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-07-12 17:14:03.755950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-07-12 17:14:03.756069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-07-12 17:14:03.756099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-07-12 17:14:03.756258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-07-12 17:14:03.756289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 
00:25:04.365 [2024-07-12 17:14:03.756502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-07-12 17:14:03.756532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-07-12 17:14:03.756681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-07-12 17:14:03.756713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-07-12 17:14:03.756861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-07-12 17:14:03.756892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-07-12 17:14:03.757053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-07-12 17:14:03.757083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-07-12 17:14:03.757227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-07-12 17:14:03.757259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-07-12 17:14:03.757425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-07-12 17:14:03.757456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-07-12 17:14:03.757616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-07-12 17:14:03.757645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-07-12 17:14:03.757818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-07-12 17:14:03.757849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-07-12 17:14:03.758026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-07-12 17:14:03.758058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-07-12 17:14:03.758235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-07-12 17:14:03.758266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 
00:25:04.365 [2024-07-12 17:14:03.758439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-07-12 17:14:03.758469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-07-12 17:14:03.758674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-07-12 17:14:03.758704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-07-12 17:14:03.758865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-07-12 17:14:03.758896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-07-12 17:14:03.759069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-07-12 17:14:03.759101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-07-12 17:14:03.759277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-07-12 17:14:03.759307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-07-12 17:14:03.759518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-07-12 17:14:03.759553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-07-12 17:14:03.759724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-07-12 17:14:03.759761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-07-12 17:14:03.759923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-07-12 17:14:03.759953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-07-12 17:14:03.760105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-07-12 17:14:03.760136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-07-12 17:14:03.760336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-07-12 17:14:03.760367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 
00:25:04.365 [2024-07-12 17:14:03.760516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-07-12 17:14:03.760546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-07-12 17:14:03.760718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-07-12 17:14:03.760755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-07-12 17:14:03.760923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-07-12 17:14:03.760953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-07-12 17:14:03.761078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.365 [2024-07-12 17:14:03.761109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.365 qpair failed and we were unable to recover it. 00:25:04.365 [2024-07-12 17:14:03.761254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-07-12 17:14:03.761284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-07-12 17:14:03.761446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-07-12 17:14:03.761476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-07-12 17:14:03.761632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-07-12 17:14:03.761663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-07-12 17:14:03.761826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-07-12 17:14:03.761858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-07-12 17:14:03.762037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-07-12 17:14:03.762067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-07-12 17:14:03.762274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-07-12 17:14:03.762304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 
00:25:04.366 [2024-07-12 17:14:03.762465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-07-12 17:14:03.762496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-07-12 17:14:03.762656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-07-12 17:14:03.762687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-07-12 17:14:03.762863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-07-12 17:14:03.762894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-07-12 17:14:03.763079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-07-12 17:14:03.763109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-07-12 17:14:03.763277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-07-12 17:14:03.763307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-07-12 17:14:03.763543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-07-12 17:14:03.763574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-07-12 17:14:03.763747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-07-12 17:14:03.763777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-07-12 17:14:03.763955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-07-12 17:14:03.763987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-07-12 17:14:03.764164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-07-12 17:14:03.764195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-07-12 17:14:03.764391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-07-12 17:14:03.764421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 
00:25:04.366 [2024-07-12 17:14:03.764586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-07-12 17:14:03.764616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-07-12 17:14:03.764749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-07-12 17:14:03.764778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-07-12 17:14:03.764917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-07-12 17:14:03.764946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-07-12 17:14:03.765112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-07-12 17:14:03.765142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-07-12 17:14:03.765265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-07-12 17:14:03.765293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-07-12 17:14:03.765453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-07-12 17:14:03.765484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-07-12 17:14:03.765644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-07-12 17:14:03.765676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-07-12 17:14:03.765804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-07-12 17:14:03.765834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-07-12 17:14:03.765992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-07-12 17:14:03.766022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-07-12 17:14:03.766147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-07-12 17:14:03.766175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 
00:25:04.366 [2024-07-12 17:14:03.766383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-07-12 17:14:03.766414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-07-12 17:14:03.766582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-07-12 17:14:03.766612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-07-12 17:14:03.766785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-07-12 17:14:03.766816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-07-12 17:14:03.766979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-07-12 17:14:03.767009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-07-12 17:14:03.767163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-07-12 17:14:03.767194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-07-12 17:14:03.767396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-07-12 17:14:03.767432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-07-12 17:14:03.767569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-07-12 17:14:03.767599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-07-12 17:14:03.767808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-07-12 17:14:03.767839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-07-12 17:14:03.767981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-07-12 17:14:03.768011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-07-12 17:14:03.768177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-07-12 17:14:03.768209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 
00:25:04.366 [2024-07-12 17:14:03.768372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-07-12 17:14:03.768403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-07-12 17:14:03.768578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-07-12 17:14:03.768608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-07-12 17:14:03.768783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-07-12 17:14:03.768814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-07-12 17:14:03.768978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.366 [2024-07-12 17:14:03.769008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.366 qpair failed and we were unable to recover it. 00:25:04.366 [2024-07-12 17:14:03.769175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-07-12 17:14:03.769206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it. 00:25:04.367 [2024-07-12 17:14:03.769365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-07-12 17:14:03.769396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it. 00:25:04.367 [2024-07-12 17:14:03.769530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-07-12 17:14:03.769565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it. 00:25:04.367 [2024-07-12 17:14:03.769761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-07-12 17:14:03.769794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it. 00:25:04.367 [2024-07-12 17:14:03.769925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-07-12 17:14:03.769956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it. 00:25:04.367 [2024-07-12 17:14:03.770158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-07-12 17:14:03.770189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it. 
00:25:04.367 [2024-07-12 17:14:03.770350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-07-12 17:14:03.770381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it. 00:25:04.367 [2024-07-12 17:14:03.770547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-07-12 17:14:03.770579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it. 00:25:04.367 [2024-07-12 17:14:03.770755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-07-12 17:14:03.770787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it. 00:25:04.367 [2024-07-12 17:14:03.770902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-07-12 17:14:03.770933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it. 00:25:04.367 [2024-07-12 17:14:03.771106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-07-12 17:14:03.771136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it. 00:25:04.367 [2024-07-12 17:14:03.771297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-07-12 17:14:03.771328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it. 00:25:04.367 [2024-07-12 17:14:03.771484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-07-12 17:14:03.771514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it. 00:25:04.367 [2024-07-12 17:14:03.771685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-07-12 17:14:03.771716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it. 00:25:04.367 [2024-07-12 17:14:03.771890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-07-12 17:14:03.771920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it. 00:25:04.367 [2024-07-12 17:14:03.772096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-07-12 17:14:03.772126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it. 
00:25:04.367 [2024-07-12 17:14:03.772322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-07-12 17:14:03.772353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it. 00:25:04.367 [2024-07-12 17:14:03.772478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-07-12 17:14:03.772507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it. 00:25:04.367 [2024-07-12 17:14:03.772661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-07-12 17:14:03.772691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it. 00:25:04.367 [2024-07-12 17:14:03.772863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-07-12 17:14:03.772895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it. 00:25:04.367 [2024-07-12 17:14:03.773051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-07-12 17:14:03.773080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it. 00:25:04.367 [2024-07-12 17:14:03.773206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-07-12 17:14:03.773235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it. 00:25:04.367 [2024-07-12 17:14:03.773405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-07-12 17:14:03.773436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it. 00:25:04.367 [2024-07-12 17:14:03.773556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-07-12 17:14:03.773586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it. 00:25:04.367 [2024-07-12 17:14:03.773787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-07-12 17:14:03.773818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it. 00:25:04.367 [2024-07-12 17:14:03.773946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-07-12 17:14:03.773975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it. 
00:25:04.367 [2024-07-12 17:14:03.774172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-07-12 17:14:03.774203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it. 00:25:04.367 [2024-07-12 17:14:03.774403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-07-12 17:14:03.774434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it. 00:25:04.367 [2024-07-12 17:14:03.774569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-07-12 17:14:03.774599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it. 00:25:04.367 [2024-07-12 17:14:03.774759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-07-12 17:14:03.774790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it. 00:25:04.367 [2024-07-12 17:14:03.774979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-07-12 17:14:03.775010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it. 00:25:04.367 [2024-07-12 17:14:03.775174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-07-12 17:14:03.775212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it. 00:25:04.367 [2024-07-12 17:14:03.775343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-07-12 17:14:03.775373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it. 00:25:04.367 [2024-07-12 17:14:03.775523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-07-12 17:14:03.775553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it. 00:25:04.367 [2024-07-12 17:14:03.775684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-07-12 17:14:03.775713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it. 00:25:04.367 [2024-07-12 17:14:03.775884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-07-12 17:14:03.775915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it. 
00:25:04.367 [2024-07-12 17:14:03.776105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-07-12 17:14:03.776134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it. 00:25:04.367 [2024-07-12 17:14:03.776342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-07-12 17:14:03.776372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it. 00:25:04.367 [2024-07-12 17:14:03.776527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-07-12 17:14:03.776557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it. 00:25:04.367 [2024-07-12 17:14:03.776771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.367 [2024-07-12 17:14:03.776803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.367 qpair failed and we were unable to recover it. 00:25:04.367 [2024-07-12 17:14:03.776929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.368 [2024-07-12 17:14:03.776960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.368 qpair failed and we were unable to recover it. 00:25:04.368 [2024-07-12 17:14:03.777121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.368 [2024-07-12 17:14:03.777151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.368 qpair failed and we were unable to recover it. 00:25:04.368 [2024-07-12 17:14:03.777298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.368 [2024-07-12 17:14:03.777329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.368 qpair failed and we were unable to recover it. 00:25:04.368 [2024-07-12 17:14:03.777528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.368 [2024-07-12 17:14:03.777559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.368 qpair failed and we were unable to recover it. 00:25:04.368 [2024-07-12 17:14:03.777697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.368 [2024-07-12 17:14:03.777727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.368 qpair failed and we were unable to recover it. 00:25:04.368 [2024-07-12 17:14:03.777894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.368 [2024-07-12 17:14:03.777925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.368 qpair failed and we were unable to recover it. 
00:25:04.368 [2024-07-12 17:14:03.778132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.368 [2024-07-12 17:14:03.778163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.368 qpair failed and we were unable to recover it. 00:25:04.368 [2024-07-12 17:14:03.778326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.368 [2024-07-12 17:14:03.778357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.368 qpair failed and we were unable to recover it. 00:25:04.368 [2024-07-12 17:14:03.778521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.368 [2024-07-12 17:14:03.778552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.368 qpair failed and we were unable to recover it. 00:25:04.368 [2024-07-12 17:14:03.778718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.368 [2024-07-12 17:14:03.778758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.368 qpair failed and we were unable to recover it. 00:25:04.368 [2024-07-12 17:14:03.778902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.368 [2024-07-12 17:14:03.778932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.368 qpair failed and we were unable to recover it. 00:25:04.368 [2024-07-12 17:14:03.779100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.368 [2024-07-12 17:14:03.779131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.368 qpair failed and we were unable to recover it. 00:25:04.368 [2024-07-12 17:14:03.779295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.368 [2024-07-12 17:14:03.779325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.368 qpair failed and we were unable to recover it. 00:25:04.368 [2024-07-12 17:14:03.779495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.368 [2024-07-12 17:14:03.779526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.368 qpair failed and we were unable to recover it. 00:25:04.368 [2024-07-12 17:14:03.779675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.368 [2024-07-12 17:14:03.779705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.368 qpair failed and we were unable to recover it. 00:25:04.368 [2024-07-12 17:14:03.779877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.368 [2024-07-12 17:14:03.779909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.368 qpair failed and we were unable to recover it. 
00:25:04.368 [2024-07-12 17:14:03.780067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.368 [2024-07-12 17:14:03.780098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.368 qpair failed and we were unable to recover it. 00:25:04.368 [2024-07-12 17:14:03.780250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.368 [2024-07-12 17:14:03.780281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.368 qpair failed and we were unable to recover it. 00:25:04.368 [2024-07-12 17:14:03.780457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.368 [2024-07-12 17:14:03.780488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.368 qpair failed and we were unable to recover it. 00:25:04.368 [2024-07-12 17:14:03.780658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.368 [2024-07-12 17:14:03.780689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.368 qpair failed and we were unable to recover it. 00:25:04.368 [2024-07-12 17:14:03.780862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.368 [2024-07-12 17:14:03.780894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.368 qpair failed and we were unable to recover it. 00:25:04.368 [2024-07-12 17:14:03.781067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.368 [2024-07-12 17:14:03.781098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.368 qpair failed and we were unable to recover it. 00:25:04.368 [2024-07-12 17:14:03.781226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.368 [2024-07-12 17:14:03.781257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.368 qpair failed and we were unable to recover it. 00:25:04.368 [2024-07-12 17:14:03.781422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.368 [2024-07-12 17:14:03.781452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.368 qpair failed and we were unable to recover it. 00:25:04.368 [2024-07-12 17:14:03.781649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.368 [2024-07-12 17:14:03.781680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.368 qpair failed and we were unable to recover it. 00:25:04.368 [2024-07-12 17:14:03.781851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.368 [2024-07-12 17:14:03.781883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.368 qpair failed and we were unable to recover it. 
00:25:04.368 [2024-07-12 17:14:03.782054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.368 [2024-07-12 17:14:03.782085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.368 qpair failed and we were unable to recover it. 00:25:04.368 [2024-07-12 17:14:03.782254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.368 [2024-07-12 17:14:03.782284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.368 qpair failed and we were unable to recover it. 00:25:04.368 [2024-07-12 17:14:03.782427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.368 [2024-07-12 17:14:03.782457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.368 qpair failed and we were unable to recover it. 00:25:04.368 [2024-07-12 17:14:03.782591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.368 [2024-07-12 17:14:03.782621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.368 qpair failed and we were unable to recover it. 00:25:04.368 [2024-07-12 17:14:03.782779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.368 [2024-07-12 17:14:03.782810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.368 qpair failed and we were unable to recover it. 00:25:04.368 [2024-07-12 17:14:03.783012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.368 [2024-07-12 17:14:03.783048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.368 qpair failed and we were unable to recover it. 00:25:04.368 [2024-07-12 17:14:03.783207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.368 [2024-07-12 17:14:03.783237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.368 qpair failed and we were unable to recover it. 00:25:04.368 [2024-07-12 17:14:03.783404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.368 [2024-07-12 17:14:03.783434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.368 qpair failed and we were unable to recover it. 00:25:04.368 [2024-07-12 17:14:03.783573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.368 [2024-07-12 17:14:03.783603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.368 qpair failed and we were unable to recover it. 00:25:04.368 [2024-07-12 17:14:03.783807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.368 [2024-07-12 17:14:03.783839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.368 qpair failed and we were unable to recover it. 
00:25:04.369 [2024-07-12 17:14:03.784036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.369 [2024-07-12 17:14:03.784067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.369 qpair failed and we were unable to recover it. 00:25:04.369 [2024-07-12 17:14:03.784268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.369 [2024-07-12 17:14:03.784300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.369 qpair failed and we were unable to recover it. 00:25:04.369 [2024-07-12 17:14:03.784427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.369 [2024-07-12 17:14:03.784458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.369 qpair failed and we were unable to recover it. 00:25:04.369 [2024-07-12 17:14:03.784642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.369 [2024-07-12 17:14:03.784673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.369 qpair failed and we were unable to recover it. 00:25:04.369 [2024-07-12 17:14:03.784846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.369 [2024-07-12 17:14:03.784878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.369 qpair failed and we were unable to recover it. 00:25:04.369 [2024-07-12 17:14:03.785060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.369 [2024-07-12 17:14:03.785091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.369 qpair failed and we were unable to recover it. 00:25:04.369 [2024-07-12 17:14:03.785319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.369 [2024-07-12 17:14:03.785351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.369 qpair failed and we were unable to recover it. 00:25:04.369 [2024-07-12 17:14:03.785500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.369 [2024-07-12 17:14:03.785531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.369 qpair failed and we were unable to recover it. 00:25:04.369 [2024-07-12 17:14:03.785698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.369 [2024-07-12 17:14:03.785729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.369 qpair failed and we were unable to recover it. 00:25:04.369 [2024-07-12 17:14:03.785987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.369 [2024-07-12 17:14:03.786019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.369 qpair failed and we were unable to recover it. 
00:25:04.369 [2024-07-12 17:14:03.786188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.369 [2024-07-12 17:14:03.786221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.369 qpair failed and we were unable to recover it. 00:25:04.369 [2024-07-12 17:14:03.786372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.369 [2024-07-12 17:14:03.786403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.369 qpair failed and we were unable to recover it. 00:25:04.369 [2024-07-12 17:14:03.786525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.369 [2024-07-12 17:14:03.786557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.369 qpair failed and we were unable to recover it. 00:25:04.369 [2024-07-12 17:14:03.786717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.369 [2024-07-12 17:14:03.786758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.369 qpair failed and we were unable to recover it. 00:25:04.369 [2024-07-12 17:14:03.786941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.369 [2024-07-12 17:14:03.786972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.369 qpair failed and we were unable to recover it. 00:25:04.369 [2024-07-12 17:14:03.787109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.369 [2024-07-12 17:14:03.787141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.369 qpair failed and we were unable to recover it. 00:25:04.369 [2024-07-12 17:14:03.787339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.369 [2024-07-12 17:14:03.787369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.369 qpair failed and we were unable to recover it. 00:25:04.369 [2024-07-12 17:14:03.787500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.369 [2024-07-12 17:14:03.787532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.369 qpair failed and we were unable to recover it. 00:25:04.369 [2024-07-12 17:14:03.787638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.369 [2024-07-12 17:14:03.787669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.369 qpair failed and we were unable to recover it. 00:25:04.369 [2024-07-12 17:14:03.787864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.369 [2024-07-12 17:14:03.787896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.369 qpair failed and we were unable to recover it. 
00:25:04.369 [2024-07-12 17:14:03.788077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.369 [2024-07-12 17:14:03.788108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.369 qpair failed and we were unable to recover it. 00:25:04.369 [2024-07-12 17:14:03.788277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.369 [2024-07-12 17:14:03.788307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.369 qpair failed and we were unable to recover it. 00:25:04.369 [2024-07-12 17:14:03.788440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.369 [2024-07-12 17:14:03.788479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.369 qpair failed and we were unable to recover it. 00:25:04.369 [2024-07-12 17:14:03.788607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.369 [2024-07-12 17:14:03.788636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.369 qpair failed and we were unable to recover it. 00:25:04.369 [2024-07-12 17:14:03.788764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.369 [2024-07-12 17:14:03.788793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.369 qpair failed and we were unable to recover it. 00:25:04.369 [2024-07-12 17:14:03.788970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.369 [2024-07-12 17:14:03.789000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.369 qpair failed and we were unable to recover it. 00:25:04.369 [2024-07-12 17:14:03.789175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.369 [2024-07-12 17:14:03.789206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.369 qpair failed and we were unable to recover it. 00:25:04.369 [2024-07-12 17:14:03.789377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.369 [2024-07-12 17:14:03.789408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.369 qpair failed and we were unable to recover it. 00:25:04.369 [2024-07-12 17:14:03.789566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.369 [2024-07-12 17:14:03.789596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.369 qpair failed and we were unable to recover it. 00:25:04.369 [2024-07-12 17:14:03.789754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.369 [2024-07-12 17:14:03.789785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.369 qpair failed and we were unable to recover it. 
00:25:04.369 [2024-07-12 17:14:03.789983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.369 [2024-07-12 17:14:03.790014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.369 qpair failed and we were unable to recover it. 00:25:04.369 [2024-07-12 17:14:03.790214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.369 [2024-07-12 17:14:03.790245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.369 qpair failed and we were unable to recover it. 00:25:04.369 [2024-07-12 17:14:03.790438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.369 [2024-07-12 17:14:03.790468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.369 qpair failed and we were unable to recover it. 00:25:04.369 [2024-07-12 17:14:03.790637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.369 [2024-07-12 17:14:03.790667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.369 qpair failed and we were unable to recover it. 00:25:04.369 [2024-07-12 17:14:03.790839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.369 [2024-07-12 17:14:03.790871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.369 qpair failed and we were unable to recover it. 00:25:04.369 [2024-07-12 17:14:03.791040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.369 [2024-07-12 17:14:03.791075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.369 qpair failed and we were unable to recover it. 00:25:04.369 [2024-07-12 17:14:03.791205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.369 [2024-07-12 17:14:03.791236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.369 qpair failed and we were unable to recover it. 00:25:04.369 [2024-07-12 17:14:03.791394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.369 [2024-07-12 17:14:03.791425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.369 qpair failed and we were unable to recover it. 00:25:04.369 [2024-07-12 17:14:03.791595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.369 [2024-07-12 17:14:03.791625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.369 qpair failed and we were unable to recover it. 00:25:04.369 [2024-07-12 17:14:03.791817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.369 [2024-07-12 17:14:03.791849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.369 qpair failed and we were unable to recover it. 
00:25:04.369 [2024-07-12 17:14:03.792008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.370 [2024-07-12 17:14:03.792039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.370 qpair failed and we were unable to recover it. 00:25:04.370 [2024-07-12 17:14:03.792236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.370 [2024-07-12 17:14:03.792267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.370 qpair failed and we were unable to recover it. 00:25:04.370 [2024-07-12 17:14:03.792389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.370 [2024-07-12 17:14:03.792420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.370 qpair failed and we were unable to recover it. 00:25:04.370 [2024-07-12 17:14:03.792580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.370 [2024-07-12 17:14:03.792611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.370 qpair failed and we were unable to recover it. 00:25:04.370 [2024-07-12 17:14:03.792808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.370 [2024-07-12 17:14:03.792840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.370 qpair failed and we were unable to recover it. 00:25:04.370 [2024-07-12 17:14:03.793005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.370 [2024-07-12 17:14:03.793036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.370 qpair failed and we were unable to recover it. 00:25:04.370 [2024-07-12 17:14:03.793205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.370 [2024-07-12 17:14:03.793236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.370 qpair failed and we were unable to recover it. 00:25:04.370 [2024-07-12 17:14:03.793437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.370 [2024-07-12 17:14:03.793468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.370 qpair failed and we were unable to recover it. 00:25:04.370 [2024-07-12 17:14:03.793662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.370 [2024-07-12 17:14:03.793694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.370 qpair failed and we were unable to recover it. 00:25:04.370 [2024-07-12 17:14:03.793880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.370 [2024-07-12 17:14:03.793912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.370 qpair failed and we were unable to recover it. 
00:25:04.370 [2024-07-12 17:14:03.794034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.370 [2024-07-12 17:14:03.794064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.370 qpair failed and we were unable to recover it. 00:25:04.370 [2024-07-12 17:14:03.794237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.370 [2024-07-12 17:14:03.794267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.370 qpair failed and we were unable to recover it. 00:25:04.370 [2024-07-12 17:14:03.794430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.370 [2024-07-12 17:14:03.794459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.370 qpair failed and we were unable to recover it. 00:25:04.370 [2024-07-12 17:14:03.794623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.370 [2024-07-12 17:14:03.794655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.370 qpair failed and we were unable to recover it. 00:25:04.370 [2024-07-12 17:14:03.794787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.370 [2024-07-12 17:14:03.794819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.370 qpair failed and we were unable to recover it. 00:25:04.370 [2024-07-12 17:14:03.794979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.370 [2024-07-12 17:14:03.795010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.370 qpair failed and we were unable to recover it. 00:25:04.370 [2024-07-12 17:14:03.795169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.370 [2024-07-12 17:14:03.795200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.370 qpair failed and we were unable to recover it. 00:25:04.370 [2024-07-12 17:14:03.795398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.370 [2024-07-12 17:14:03.795428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.370 qpair failed and we were unable to recover it. 00:25:04.370 [2024-07-12 17:14:03.795547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.370 [2024-07-12 17:14:03.795576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.370 qpair failed and we were unable to recover it. 00:25:04.370 [2024-07-12 17:14:03.795750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.370 [2024-07-12 17:14:03.795781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.370 qpair failed and we were unable to recover it. 
00:25:04.370 [2024-07-12 17:14:03.795916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.370 [2024-07-12 17:14:03.795947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.370 qpair failed and we were unable to recover it. 00:25:04.370 [2024-07-12 17:14:03.796083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.370 [2024-07-12 17:14:03.796113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.370 qpair failed and we were unable to recover it. 00:25:04.370 [2024-07-12 17:14:03.796279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.370 [2024-07-12 17:14:03.796310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.370 qpair failed and we were unable to recover it. 00:25:04.370 [2024-07-12 17:14:03.796469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.370 [2024-07-12 17:14:03.796500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.370 qpair failed and we were unable to recover it. 00:25:04.370 [2024-07-12 17:14:03.796620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.370 [2024-07-12 17:14:03.796650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.370 qpair failed and we were unable to recover it. 00:25:04.370 [2024-07-12 17:14:03.796800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.370 [2024-07-12 17:14:03.796831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.370 qpair failed and we were unable to recover it. 00:25:04.370 [2024-07-12 17:14:03.796987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.370 [2024-07-12 17:14:03.797018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.370 qpair failed and we were unable to recover it. 00:25:04.370 [2024-07-12 17:14:03.797148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.370 [2024-07-12 17:14:03.797178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.370 qpair failed and we were unable to recover it. 00:25:04.370 [2024-07-12 17:14:03.797347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.370 [2024-07-12 17:14:03.797376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.370 qpair failed and we were unable to recover it. 00:25:04.370 [2024-07-12 17:14:03.797518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.370 [2024-07-12 17:14:03.797547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.370 qpair failed and we were unable to recover it. 
00:25:04.370 [2024-07-12 17:14:03.797703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.370 [2024-07-12 17:14:03.797734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420
00:25:04.370 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt logged between 17:14:03.797703 and 17:14:03.838629 ...]
00:25:04.376 [2024-07-12 17:14:03.838599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.376 [2024-07-12 17:14:03.838629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420
00:25:04.376 qpair failed and we were unable to recover it.
00:25:04.376 [2024-07-12 17:14:03.838798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.376 [2024-07-12 17:14:03.838830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.376 qpair failed and we were unable to recover it. 00:25:04.376 [2024-07-12 17:14:03.838949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.376 [2024-07-12 17:14:03.838979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.376 qpair failed and we were unable to recover it. 00:25:04.376 [2024-07-12 17:14:03.839119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.376 [2024-07-12 17:14:03.839149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.376 qpair failed and we were unable to recover it. 00:25:04.376 [2024-07-12 17:14:03.839344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.376 [2024-07-12 17:14:03.839375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.376 qpair failed and we were unable to recover it. 00:25:04.376 [2024-07-12 17:14:03.839517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.376 [2024-07-12 17:14:03.839547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.376 qpair failed and we were unable to recover it. 00:25:04.376 [2024-07-12 17:14:03.839704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.376 [2024-07-12 17:14:03.839735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.376 qpair failed and we were unable to recover it. 00:25:04.376 [2024-07-12 17:14:03.839911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.376 [2024-07-12 17:14:03.839942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.376 qpair failed and we were unable to recover it. 00:25:04.376 [2024-07-12 17:14:03.840115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.376 [2024-07-12 17:14:03.840145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.376 qpair failed and we were unable to recover it. 00:25:04.376 [2024-07-12 17:14:03.840304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.376 [2024-07-12 17:14:03.840336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.376 qpair failed and we were unable to recover it. 00:25:04.376 [2024-07-12 17:14:03.840461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.376 [2024-07-12 17:14:03.840492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.376 qpair failed and we were unable to recover it. 
00:25:04.376 [2024-07-12 17:14:03.840688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.376 [2024-07-12 17:14:03.840718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.376 qpair failed and we were unable to recover it. 00:25:04.376 [2024-07-12 17:14:03.840923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.376 [2024-07-12 17:14:03.840954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.376 qpair failed and we were unable to recover it. 00:25:04.376 [2024-07-12 17:14:03.841128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.376 [2024-07-12 17:14:03.841157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.376 qpair failed and we were unable to recover it. 00:25:04.376 [2024-07-12 17:14:03.841292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.376 [2024-07-12 17:14:03.841323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.376 qpair failed and we were unable to recover it. 00:25:04.376 [2024-07-12 17:14:03.841442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.376 [2024-07-12 17:14:03.841472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.376 qpair failed and we were unable to recover it. 00:25:04.376 [2024-07-12 17:14:03.841669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.376 [2024-07-12 17:14:03.841698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.376 qpair failed and we were unable to recover it. 00:25:04.376 [2024-07-12 17:14:03.841870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.376 [2024-07-12 17:14:03.841902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.376 qpair failed and we were unable to recover it. 00:25:04.376 [2024-07-12 17:14:03.842036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.376 [2024-07-12 17:14:03.842067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.376 qpair failed and we were unable to recover it. 00:25:04.376 [2024-07-12 17:14:03.842218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.376 [2024-07-12 17:14:03.842248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.376 qpair failed and we were unable to recover it. 00:25:04.376 [2024-07-12 17:14:03.842455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.376 [2024-07-12 17:14:03.842485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.376 qpair failed and we were unable to recover it. 
00:25:04.376 [2024-07-12 17:14:03.842633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.376 [2024-07-12 17:14:03.842663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.376 qpair failed and we were unable to recover it. 00:25:04.376 [2024-07-12 17:14:03.842863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.376 [2024-07-12 17:14:03.842894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.376 qpair failed and we were unable to recover it. 00:25:04.376 [2024-07-12 17:14:03.843015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.376 [2024-07-12 17:14:03.843050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.376 qpair failed and we were unable to recover it. 00:25:04.376 [2024-07-12 17:14:03.843185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.376 [2024-07-12 17:14:03.843216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.376 qpair failed and we were unable to recover it. 00:25:04.376 [2024-07-12 17:14:03.843426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.376 [2024-07-12 17:14:03.843457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.376 qpair failed and we were unable to recover it. 00:25:04.376 [2024-07-12 17:14:03.843606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.376 [2024-07-12 17:14:03.843636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.376 qpair failed and we were unable to recover it. 00:25:04.376 [2024-07-12 17:14:03.843801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.376 [2024-07-12 17:14:03.843831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.376 qpair failed and we were unable to recover it. 00:25:04.376 [2024-07-12 17:14:03.843971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.376 [2024-07-12 17:14:03.844002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.376 qpair failed and we were unable to recover it. 00:25:04.376 [2024-07-12 17:14:03.844164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.376 [2024-07-12 17:14:03.844194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.376 qpair failed and we were unable to recover it. 00:25:04.376 [2024-07-12 17:14:03.844388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.376 [2024-07-12 17:14:03.844418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.376 qpair failed and we were unable to recover it. 
00:25:04.376 [2024-07-12 17:14:03.844619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.376 [2024-07-12 17:14:03.844648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.376 qpair failed and we were unable to recover it. 00:25:04.376 [2024-07-12 17:14:03.844777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.376 [2024-07-12 17:14:03.844807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.376 qpair failed and we were unable to recover it. 00:25:04.376 [2024-07-12 17:14:03.844932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.376 [2024-07-12 17:14:03.844960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.376 qpair failed and we were unable to recover it. 00:25:04.376 [2024-07-12 17:14:03.845088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.376 [2024-07-12 17:14:03.845119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.376 qpair failed and we were unable to recover it. 00:25:04.376 [2024-07-12 17:14:03.845330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.376 [2024-07-12 17:14:03.845361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.376 qpair failed and we were unable to recover it. 00:25:04.376 [2024-07-12 17:14:03.845517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.376 [2024-07-12 17:14:03.845546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.376 qpair failed and we were unable to recover it. 00:25:04.376 [2024-07-12 17:14:03.845721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.376 [2024-07-12 17:14:03.845759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.376 qpair failed and we were unable to recover it. 00:25:04.376 [2024-07-12 17:14:03.845928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.376 [2024-07-12 17:14:03.845958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.376 qpair failed and we were unable to recover it. 00:25:04.376 [2024-07-12 17:14:03.846182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.376 [2024-07-12 17:14:03.846212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.377 qpair failed and we were unable to recover it. 00:25:04.377 [2024-07-12 17:14:03.846383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.377 [2024-07-12 17:14:03.846425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.377 qpair failed and we were unable to recover it. 
00:25:04.377 [2024-07-12 17:14:03.846628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.377 [2024-07-12 17:14:03.846659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.377 qpair failed and we were unable to recover it. 00:25:04.377 [2024-07-12 17:14:03.846828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.377 [2024-07-12 17:14:03.846859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.377 qpair failed and we were unable to recover it. 00:25:04.377 [2024-07-12 17:14:03.846990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.377 [2024-07-12 17:14:03.847020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.377 qpair failed and we were unable to recover it. 00:25:04.377 [2024-07-12 17:14:03.847184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.377 [2024-07-12 17:14:03.847215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.377 qpair failed and we were unable to recover it. 00:25:04.377 [2024-07-12 17:14:03.847413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.377 [2024-07-12 17:14:03.847443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.377 qpair failed and we were unable to recover it. 00:25:04.377 [2024-07-12 17:14:03.847585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.377 [2024-07-12 17:14:03.847614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.377 qpair failed and we were unable to recover it. 00:25:04.377 [2024-07-12 17:14:03.847806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.377 [2024-07-12 17:14:03.847836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.377 qpair failed and we were unable to recover it. 00:25:04.377 [2024-07-12 17:14:03.848001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.377 [2024-07-12 17:14:03.848032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.377 qpair failed and we were unable to recover it. 00:25:04.377 [2024-07-12 17:14:03.848182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.377 [2024-07-12 17:14:03.848222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.377 qpair failed and we were unable to recover it. 00:25:04.377 [2024-07-12 17:14:03.848364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.377 [2024-07-12 17:14:03.848394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.377 qpair failed and we were unable to recover it. 
00:25:04.377 [2024-07-12 17:14:03.848548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.377 [2024-07-12 17:14:03.848577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.377 qpair failed and we were unable to recover it. 00:25:04.377 [2024-07-12 17:14:03.848776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.377 [2024-07-12 17:14:03.848808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.377 qpair failed and we were unable to recover it. 00:25:04.377 [2024-07-12 17:14:03.848981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.377 [2024-07-12 17:14:03.849011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.377 qpair failed and we were unable to recover it. 00:25:04.377 [2024-07-12 17:14:03.849135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.377 [2024-07-12 17:14:03.849163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.377 qpair failed and we were unable to recover it. 00:25:04.377 [2024-07-12 17:14:03.849288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.377 [2024-07-12 17:14:03.849318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.377 qpair failed and we were unable to recover it. 00:25:04.377 [2024-07-12 17:14:03.849538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.377 [2024-07-12 17:14:03.849569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.377 qpair failed and we were unable to recover it. 00:25:04.377 [2024-07-12 17:14:03.849768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.377 [2024-07-12 17:14:03.849799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.377 qpair failed and we were unable to recover it. 00:25:04.377 [2024-07-12 17:14:03.850002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.377 [2024-07-12 17:14:03.850032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.377 qpair failed and we were unable to recover it. 00:25:04.377 [2024-07-12 17:14:03.850171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.377 [2024-07-12 17:14:03.850199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.377 qpair failed and we were unable to recover it. 00:25:04.377 [2024-07-12 17:14:03.850361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.377 [2024-07-12 17:14:03.850391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.377 qpair failed and we were unable to recover it. 
00:25:04.377 [2024-07-12 17:14:03.850569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.377 [2024-07-12 17:14:03.850600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.377 qpair failed and we were unable to recover it. 00:25:04.377 [2024-07-12 17:14:03.850756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.377 [2024-07-12 17:14:03.850786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.377 qpair failed and we were unable to recover it. 00:25:04.377 [2024-07-12 17:14:03.850905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.377 [2024-07-12 17:14:03.850939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.377 qpair failed and we were unable to recover it. 00:25:04.377 [2024-07-12 17:14:03.851120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.377 [2024-07-12 17:14:03.851151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.377 qpair failed and we were unable to recover it. 00:25:04.377 [2024-07-12 17:14:03.851344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.377 [2024-07-12 17:14:03.851375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.377 qpair failed and we were unable to recover it. 00:25:04.377 [2024-07-12 17:14:03.851532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.377 [2024-07-12 17:14:03.851562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.377 qpair failed and we were unable to recover it. 00:25:04.377 [2024-07-12 17:14:03.851670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.377 [2024-07-12 17:14:03.851700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.377 qpair failed and we were unable to recover it. 00:25:04.377 [2024-07-12 17:14:03.851860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.377 [2024-07-12 17:14:03.851891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.377 qpair failed and we were unable to recover it. 00:25:04.377 [2024-07-12 17:14:03.852088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.377 [2024-07-12 17:14:03.852119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.377 qpair failed and we were unable to recover it. 00:25:04.377 [2024-07-12 17:14:03.852284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.377 [2024-07-12 17:14:03.852315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.377 qpair failed and we were unable to recover it. 
00:25:04.377 [2024-07-12 17:14:03.852479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.377 [2024-07-12 17:14:03.852508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.377 qpair failed and we were unable to recover it. 00:25:04.377 [2024-07-12 17:14:03.852643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.377 [2024-07-12 17:14:03.852671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.377 qpair failed and we were unable to recover it. 00:25:04.377 [2024-07-12 17:14:03.852825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.377 [2024-07-12 17:14:03.852858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.377 qpair failed and we were unable to recover it. 00:25:04.377 [2024-07-12 17:14:03.852994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.377 [2024-07-12 17:14:03.853024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.377 qpair failed and we were unable to recover it. 00:25:04.378 [2024-07-12 17:14:03.853187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.378 [2024-07-12 17:14:03.853217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.378 qpair failed and we were unable to recover it. 00:25:04.378 [2024-07-12 17:14:03.853368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.378 [2024-07-12 17:14:03.853399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.378 qpair failed and we were unable to recover it. 00:25:04.378 [2024-07-12 17:14:03.853568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.378 [2024-07-12 17:14:03.853598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.378 qpair failed and we were unable to recover it. 00:25:04.378 [2024-07-12 17:14:03.853749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.378 [2024-07-12 17:14:03.853779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.378 qpair failed and we were unable to recover it. 00:25:04.378 [2024-07-12 17:14:03.853975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.378 [2024-07-12 17:14:03.854005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.378 qpair failed and we were unable to recover it. 00:25:04.378 [2024-07-12 17:14:03.854135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.378 [2024-07-12 17:14:03.854164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.378 qpair failed and we were unable to recover it. 
00:25:04.378 [2024-07-12 17:14:03.854361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.378 [2024-07-12 17:14:03.854392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.378 qpair failed and we were unable to recover it. 00:25:04.378 [2024-07-12 17:14:03.854551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.378 [2024-07-12 17:14:03.854581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.378 qpair failed and we were unable to recover it. 00:25:04.378 [2024-07-12 17:14:03.854748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.378 [2024-07-12 17:14:03.854778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.378 qpair failed and we were unable to recover it. 00:25:04.378 [2024-07-12 17:14:03.854936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.378 [2024-07-12 17:14:03.854968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.378 qpair failed and we were unable to recover it. 00:25:04.378 [2024-07-12 17:14:03.855110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.378 [2024-07-12 17:14:03.855140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.378 qpair failed and we were unable to recover it. 00:25:04.378 [2024-07-12 17:14:03.855294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.378 [2024-07-12 17:14:03.855322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.378 qpair failed and we were unable to recover it. 00:25:04.378 [2024-07-12 17:14:03.855488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.378 [2024-07-12 17:14:03.855520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.378 qpair failed and we were unable to recover it. 00:25:04.378 [2024-07-12 17:14:03.855689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.378 [2024-07-12 17:14:03.855719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.378 qpair failed and we were unable to recover it. 00:25:04.378 [2024-07-12 17:14:03.855914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.378 [2024-07-12 17:14:03.855944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.378 qpair failed and we were unable to recover it. 00:25:04.378 [2024-07-12 17:14:03.856117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.378 [2024-07-12 17:14:03.856147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.378 qpair failed and we were unable to recover it. 
00:25:04.378 [2024-07-12 17:14:03.856257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.378 [2024-07-12 17:14:03.856288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.378 qpair failed and we were unable to recover it. 00:25:04.378 [2024-07-12 17:14:03.856456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.378 [2024-07-12 17:14:03.856485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.378 qpair failed and we were unable to recover it. 00:25:04.378 [2024-07-12 17:14:03.856682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.378 [2024-07-12 17:14:03.856712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.378 qpair failed and we were unable to recover it. 00:25:04.378 [2024-07-12 17:14:03.856885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.378 [2024-07-12 17:14:03.856916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.378 qpair failed and we were unable to recover it. 00:25:04.378 [2024-07-12 17:14:03.857116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.378 [2024-07-12 17:14:03.857147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.378 qpair failed and we were unable to recover it. 00:25:04.378 [2024-07-12 17:14:03.857351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.378 [2024-07-12 17:14:03.857383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.378 qpair failed and we were unable to recover it. 00:25:04.378 [2024-07-12 17:14:03.857523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.378 [2024-07-12 17:14:03.857554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.378 qpair failed and we were unable to recover it. 00:25:04.378 [2024-07-12 17:14:03.857719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.378 [2024-07-12 17:14:03.857758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.378 qpair failed and we were unable to recover it. 00:25:04.378 [2024-07-12 17:14:03.857929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.378 [2024-07-12 17:14:03.857959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.378 qpair failed and we were unable to recover it. 00:25:04.378 [2024-07-12 17:14:03.858124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.378 [2024-07-12 17:14:03.858155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.378 qpair failed and we were unable to recover it. 
00:25:04.378 [2024-07-12 17:14:03.858356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.378 [2024-07-12 17:14:03.858387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.378 qpair failed and we were unable to recover it. 00:25:04.378 [2024-07-12 17:14:03.858565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.378 [2024-07-12 17:14:03.858596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.378 qpair failed and we were unable to recover it. 00:25:04.378 [2024-07-12 17:14:03.858728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.378 [2024-07-12 17:14:03.858770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.378 qpair failed and we were unable to recover it. 00:25:04.378 [2024-07-12 17:14:03.858975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.378 [2024-07-12 17:14:03.859005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.378 qpair failed and we were unable to recover it. 00:25:04.378 [2024-07-12 17:14:03.859216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.378 [2024-07-12 17:14:03.859247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.378 qpair failed and we were unable to recover it. 00:25:04.378 [2024-07-12 17:14:03.859418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.378 [2024-07-12 17:14:03.859448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.378 qpair failed and we were unable to recover it. 00:25:04.378 [2024-07-12 17:14:03.859627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.378 [2024-07-12 17:14:03.859658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.378 qpair failed and we were unable to recover it. 00:25:04.378 [2024-07-12 17:14:03.859831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.378 [2024-07-12 17:14:03.859862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.378 qpair failed and we were unable to recover it. 00:25:04.378 [2024-07-12 17:14:03.860036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.378 [2024-07-12 17:14:03.860067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.378 qpair failed and we were unable to recover it. 00:25:04.378 [2024-07-12 17:14:03.860236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.378 [2024-07-12 17:14:03.860267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.378 qpair failed and we were unable to recover it. 
00:25:04.378 [2024-07-12 17:14:03.860378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.378 [2024-07-12 17:14:03.860408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.378 qpair failed and we were unable to recover it. 00:25:04.378 [2024-07-12 17:14:03.860605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.378 [2024-07-12 17:14:03.860636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.378 qpair failed and we were unable to recover it. 00:25:04.378 [2024-07-12 17:14:03.860786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.378 [2024-07-12 17:14:03.860817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.378 qpair failed and we were unable to recover it. 00:25:04.378 [2024-07-12 17:14:03.860954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.378 [2024-07-12 17:14:03.860993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.378 qpair failed and we were unable to recover it. 00:25:04.379 [2024-07-12 17:14:03.861135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.379 [2024-07-12 17:14:03.861165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.379 qpair failed and we were unable to recover it. 00:25:04.379 [2024-07-12 17:14:03.861301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.379 [2024-07-12 17:14:03.861331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.379 qpair failed and we were unable to recover it. 00:25:04.379 [2024-07-12 17:14:03.861499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.379 [2024-07-12 17:14:03.861530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.379 qpair failed and we were unable to recover it. 00:25:04.379 [2024-07-12 17:14:03.861619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.379 [2024-07-12 17:14:03.861649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.379 qpair failed and we were unable to recover it. 00:25:04.379 [2024-07-12 17:14:03.861787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.379 [2024-07-12 17:14:03.861818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.379 qpair failed and we were unable to recover it. 00:25:04.379 [2024-07-12 17:14:03.861907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.379 [2024-07-12 17:14:03.861937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.379 qpair failed and we were unable to recover it. 
00:25:04.379 [2024-07-12 17:14:03.862096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.379 [2024-07-12 17:14:03.862126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.379 qpair failed and we were unable to recover it. 00:25:04.379 [2024-07-12 17:14:03.862259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.379 [2024-07-12 17:14:03.862289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.379 qpair failed and we were unable to recover it. 00:25:04.379 [2024-07-12 17:14:03.862428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.379 [2024-07-12 17:14:03.862459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.379 qpair failed and we were unable to recover it. 00:25:04.379 [2024-07-12 17:14:03.862621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.379 [2024-07-12 17:14:03.862651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.379 qpair failed and we were unable to recover it. 00:25:04.379 [2024-07-12 17:14:03.862782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.379 [2024-07-12 17:14:03.862813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.379 qpair failed and we were unable to recover it. 00:25:04.379 [2024-07-12 17:14:03.862924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.379 [2024-07-12 17:14:03.862954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.379 qpair failed and we were unable to recover it. 00:25:04.379 [2024-07-12 17:14:03.863115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.379 [2024-07-12 17:14:03.863145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.379 qpair failed and we were unable to recover it. 00:25:04.379 [2024-07-12 17:14:03.863247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.379 [2024-07-12 17:14:03.863278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.379 qpair failed and we were unable to recover it. 00:25:04.379 [2024-07-12 17:14:03.863384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.379 [2024-07-12 17:14:03.863414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.379 qpair failed and we were unable to recover it. 00:25:04.379 [2024-07-12 17:14:03.863565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.379 [2024-07-12 17:14:03.863596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.379 qpair failed and we were unable to recover it. 
00:25:04.379 [2024-07-12 17:14:03.863725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.379 [2024-07-12 17:14:03.863777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.379 qpair failed and we were unable to recover it. 00:25:04.379 [2024-07-12 17:14:03.863879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.379 [2024-07-12 17:14:03.863909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.379 qpair failed and we were unable to recover it. 00:25:04.379 [2024-07-12 17:14:03.864072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.379 [2024-07-12 17:14:03.864102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.379 qpair failed and we were unable to recover it. 00:25:04.379 [2024-07-12 17:14:03.864239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.379 [2024-07-12 17:14:03.864270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.379 qpair failed and we were unable to recover it. 00:25:04.379 [2024-07-12 17:14:03.864444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.379 [2024-07-12 17:14:03.864474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.379 qpair failed and we were unable to recover it. 00:25:04.379 [2024-07-12 17:14:03.864643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.379 [2024-07-12 17:14:03.864673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.379 qpair failed and we were unable to recover it. 00:25:04.379 [2024-07-12 17:14:03.864810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.379 [2024-07-12 17:14:03.864841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.379 qpair failed and we were unable to recover it. 00:25:04.379 [2024-07-12 17:14:03.864983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.379 [2024-07-12 17:14:03.865014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.379 qpair failed and we were unable to recover it. 00:25:04.379 [2024-07-12 17:14:03.865175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.379 [2024-07-12 17:14:03.865206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.379 qpair failed and we were unable to recover it. 00:25:04.379 [2024-07-12 17:14:03.865350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.379 [2024-07-12 17:14:03.865380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.379 qpair failed and we were unable to recover it. 
00:25:04.379 [2024-07-12 17:14:03.865526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.379 [2024-07-12 17:14:03.865556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.379 qpair failed and we were unable to recover it. 00:25:04.379 [2024-07-12 17:14:03.865681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.379 [2024-07-12 17:14:03.865712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.379 qpair failed and we were unable to recover it. 00:25:04.379 [2024-07-12 17:14:03.865848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.379 [2024-07-12 17:14:03.865884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.379 qpair failed and we were unable to recover it. 00:25:04.379 [2024-07-12 17:14:03.866047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.379 [2024-07-12 17:14:03.866077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.379 qpair failed and we were unable to recover it. 00:25:04.379 [2024-07-12 17:14:03.866241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.379 [2024-07-12 17:14:03.866271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.379 qpair failed and we were unable to recover it. 00:25:04.379 [2024-07-12 17:14:03.866398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.379 [2024-07-12 17:14:03.866428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.379 qpair failed and we were unable to recover it. 00:25:04.379 [2024-07-12 17:14:03.866600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.379 [2024-07-12 17:14:03.866630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.379 qpair failed and we were unable to recover it. 00:25:04.379 [2024-07-12 17:14:03.866762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.379 [2024-07-12 17:14:03.866793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.379 qpair failed and we were unable to recover it. 00:25:04.379 [2024-07-12 17:14:03.866930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.379 [2024-07-12 17:14:03.866961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.379 qpair failed and we were unable to recover it. 00:25:04.379 [2024-07-12 17:14:03.867085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.379 [2024-07-12 17:14:03.867115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.379 qpair failed and we were unable to recover it. 
00:25:04.379 [2024-07-12 17:14:03.867278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.379 [2024-07-12 17:14:03.867309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.379 qpair failed and we were unable to recover it. 00:25:04.379 [2024-07-12 17:14:03.867434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.379 [2024-07-12 17:14:03.867464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.379 qpair failed and we were unable to recover it. 00:25:04.379 [2024-07-12 17:14:03.867590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.379 [2024-07-12 17:14:03.867620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.379 qpair failed and we were unable to recover it. 00:25:04.379 [2024-07-12 17:14:03.867759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.379 [2024-07-12 17:14:03.867791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.379 qpair failed and we were unable to recover it. 00:25:04.379 [2024-07-12 17:14:03.867933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.380 [2024-07-12 17:14:03.867964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.380 qpair failed and we were unable to recover it. 00:25:04.380 [2024-07-12 17:14:03.868099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.380 [2024-07-12 17:14:03.868129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.380 qpair failed and we were unable to recover it. 00:25:04.380 [2024-07-12 17:14:03.868265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.380 [2024-07-12 17:14:03.868295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.380 qpair failed and we were unable to recover it. 00:25:04.380 [2024-07-12 17:14:03.868457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.380 [2024-07-12 17:14:03.868487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.380 qpair failed and we were unable to recover it. 00:25:04.380 [2024-07-12 17:14:03.868650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.380 [2024-07-12 17:14:03.868681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.380 qpair failed and we were unable to recover it. 00:25:04.380 [2024-07-12 17:14:03.868812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.380 [2024-07-12 17:14:03.868843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.380 qpair failed and we were unable to recover it. 
00:25:04.380 [2024-07-12 17:14:03.868949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.380 [2024-07-12 17:14:03.868979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.380 qpair failed and we were unable to recover it. 00:25:04.380 [2024-07-12 17:14:03.869110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.380 [2024-07-12 17:14:03.869141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.380 qpair failed and we were unable to recover it. 00:25:04.380 [2024-07-12 17:14:03.869267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.380 [2024-07-12 17:14:03.869296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.380 qpair failed and we were unable to recover it. 00:25:04.380 [2024-07-12 17:14:03.869465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.380 [2024-07-12 17:14:03.869495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.380 qpair failed and we were unable to recover it. 00:25:04.380 [2024-07-12 17:14:03.869626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.380 [2024-07-12 17:14:03.869656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.380 qpair failed and we were unable to recover it. 00:25:04.380 [2024-07-12 17:14:03.869827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.380 [2024-07-12 17:14:03.869859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.380 qpair failed and we were unable to recover it. 00:25:04.380 [2024-07-12 17:14:03.869968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.380 [2024-07-12 17:14:03.869997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.380 qpair failed and we were unable to recover it. 00:25:04.380 [2024-07-12 17:14:03.870100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.380 [2024-07-12 17:14:03.870131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.380 qpair failed and we were unable to recover it. 00:25:04.380 [2024-07-12 17:14:03.870270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.380 [2024-07-12 17:14:03.870299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.380 qpair failed and we were unable to recover it. 00:25:04.380 [2024-07-12 17:14:03.870468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.380 [2024-07-12 17:14:03.870497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.380 qpair failed and we were unable to recover it. 
00:25:04.380 [2024-07-12 17:14:03.870660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.380 [2024-07-12 17:14:03.870690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.380 qpair failed and we were unable to recover it. 00:25:04.380 [2024-07-12 17:14:03.870863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.380 [2024-07-12 17:14:03.870895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.380 qpair failed and we were unable to recover it. 00:25:04.380 [2024-07-12 17:14:03.871031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.380 [2024-07-12 17:14:03.871060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.380 qpair failed and we were unable to recover it. 00:25:04.380 [2024-07-12 17:14:03.871229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.380 [2024-07-12 17:14:03.871259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.380 qpair failed and we were unable to recover it. 00:25:04.380 [2024-07-12 17:14:03.871431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.380 [2024-07-12 17:14:03.871461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.380 qpair failed and we were unable to recover it. 00:25:04.380 [2024-07-12 17:14:03.871610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.380 [2024-07-12 17:14:03.871641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.380 qpair failed and we were unable to recover it. 00:25:04.380 [2024-07-12 17:14:03.871834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.380 [2024-07-12 17:14:03.871865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.380 qpair failed and we were unable to recover it. 00:25:04.380 [2024-07-12 17:14:03.871962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.380 [2024-07-12 17:14:03.871991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.380 qpair failed and we were unable to recover it. 00:25:04.380 [2024-07-12 17:14:03.872164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.380 [2024-07-12 17:14:03.872195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.380 qpair failed and we were unable to recover it. 00:25:04.380 [2024-07-12 17:14:03.872335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.380 [2024-07-12 17:14:03.872364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.380 qpair failed and we were unable to recover it. 
00:25:04.380 [2024-07-12 17:14:03.872531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.380 [2024-07-12 17:14:03.872560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.380 qpair failed and we were unable to recover it. 00:25:04.380 [2024-07-12 17:14:03.872661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.380 [2024-07-12 17:14:03.872692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.380 qpair failed and we were unable to recover it. 00:25:04.380 [2024-07-12 17:14:03.872858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.380 [2024-07-12 17:14:03.872893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.380 qpair failed and we were unable to recover it. 00:25:04.380 [2024-07-12 17:14:03.873072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.380 [2024-07-12 17:14:03.873103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.380 qpair failed and we were unable to recover it. 00:25:04.380 [2024-07-12 17:14:03.873244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.380 [2024-07-12 17:14:03.873275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.380 qpair failed and we were unable to recover it. 00:25:04.380 [2024-07-12 17:14:03.873401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.380 [2024-07-12 17:14:03.873430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.380 qpair failed and we were unable to recover it. 00:25:04.380 [2024-07-12 17:14:03.873574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.380 [2024-07-12 17:14:03.873603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.380 qpair failed and we were unable to recover it. 00:25:04.380 [2024-07-12 17:14:03.873744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.380 [2024-07-12 17:14:03.873776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.380 qpair failed and we were unable to recover it. 00:25:04.380 [2024-07-12 17:14:03.873912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.380 [2024-07-12 17:14:03.873942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.380 qpair failed and we were unable to recover it. 00:25:04.380 [2024-07-12 17:14:03.874080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.380 [2024-07-12 17:14:03.874110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.380 qpair failed and we were unable to recover it. 
00:25:04.380 [2024-07-12 17:14:03.874242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.380 [2024-07-12 17:14:03.874272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.380 qpair failed and we were unable to recover it. 00:25:04.380 [2024-07-12 17:14:03.874435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.380 [2024-07-12 17:14:03.874465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.380 qpair failed and we were unable to recover it. 00:25:04.380 [2024-07-12 17:14:03.874595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.380 [2024-07-12 17:14:03.874625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.380 qpair failed and we were unable to recover it. 00:25:04.380 [2024-07-12 17:14:03.874799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.380 [2024-07-12 17:14:03.874830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.380 qpair failed and we were unable to recover it. 00:25:04.380 [2024-07-12 17:14:03.874966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.380 [2024-07-12 17:14:03.874995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.380 qpair failed and we were unable to recover it. 00:25:04.381 [2024-07-12 17:14:03.875130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.381 [2024-07-12 17:14:03.875159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.381 qpair failed and we were unable to recover it. 00:25:04.381 [2024-07-12 17:14:03.875289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.381 [2024-07-12 17:14:03.875319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.381 qpair failed and we were unable to recover it. 00:25:04.381 [2024-07-12 17:14:03.875455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.381 [2024-07-12 17:14:03.875484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.381 qpair failed and we were unable to recover it. 00:25:04.381 [2024-07-12 17:14:03.875661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.381 [2024-07-12 17:14:03.875691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.381 qpair failed and we were unable to recover it. 00:25:04.381 [2024-07-12 17:14:03.875837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.381 [2024-07-12 17:14:03.875868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.381 qpair failed and we were unable to recover it. 
00:25:04.381 [2024-07-12 17:14:03.876006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.381 [2024-07-12 17:14:03.876036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.381 qpair failed and we were unable to recover it. 00:25:04.381 [2024-07-12 17:14:03.876199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.381 [2024-07-12 17:14:03.876228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.381 qpair failed and we were unable to recover it. 00:25:04.381 [2024-07-12 17:14:03.876364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.381 [2024-07-12 17:14:03.876394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.381 qpair failed and we were unable to recover it. 00:25:04.381 [2024-07-12 17:14:03.876528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.381 [2024-07-12 17:14:03.876559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.381 qpair failed and we were unable to recover it. 00:25:04.381 [2024-07-12 17:14:03.876696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.381 [2024-07-12 17:14:03.876726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.381 qpair failed and we were unable to recover it. 00:25:04.381 [2024-07-12 17:14:03.876898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.381 [2024-07-12 17:14:03.876929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.381 qpair failed and we were unable to recover it. 00:25:04.381 [2024-07-12 17:14:03.877056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.381 [2024-07-12 17:14:03.877086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.381 qpair failed and we were unable to recover it. 00:25:04.381 [2024-07-12 17:14:03.877262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.381 [2024-07-12 17:14:03.877291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.381 qpair failed and we were unable to recover it. 00:25:04.381 [2024-07-12 17:14:03.877455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.381 [2024-07-12 17:14:03.877485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.381 qpair failed and we were unable to recover it. 00:25:04.381 [2024-07-12 17:14:03.877659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.381 [2024-07-12 17:14:03.877701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.381 qpair failed and we were unable to recover it. 
00:25:04.381 [2024-07-12 17:14:03.877883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.381 [2024-07-12 17:14:03.877914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.381 qpair failed and we were unable to recover it. 00:25:04.381 [2024-07-12 17:14:03.878053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.381 [2024-07-12 17:14:03.878082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.381 qpair failed and we were unable to recover it. 00:25:04.381 [2024-07-12 17:14:03.878248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.381 [2024-07-12 17:14:03.878278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.381 qpair failed and we were unable to recover it. 00:25:04.381 [2024-07-12 17:14:03.878412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.381 [2024-07-12 17:14:03.878442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.381 qpair failed and we were unable to recover it. 00:25:04.381 [2024-07-12 17:14:03.878574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.381 [2024-07-12 17:14:03.878612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.381 qpair failed and we were unable to recover it. 00:25:04.381 [2024-07-12 17:14:03.878754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.381 [2024-07-12 17:14:03.878784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.381 qpair failed and we were unable to recover it. 00:25:04.381 [2024-07-12 17:14:03.878952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.381 [2024-07-12 17:14:03.878983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.381 qpair failed and we were unable to recover it. 00:25:04.381 [2024-07-12 17:14:03.879118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.381 [2024-07-12 17:14:03.879147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.381 qpair failed and we were unable to recover it. 00:25:04.381 [2024-07-12 17:14:03.879308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.381 [2024-07-12 17:14:03.879338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.381 qpair failed and we were unable to recover it. 00:25:04.381 [2024-07-12 17:14:03.879441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.381 [2024-07-12 17:14:03.879471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.381 qpair failed and we were unable to recover it. 
00:25:04.381 [2024-07-12 17:14:03.879612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.381 [2024-07-12 17:14:03.879642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.381 qpair failed and we were unable to recover it. 00:25:04.381 [2024-07-12 17:14:03.879807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.381 [2024-07-12 17:14:03.879837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.381 qpair failed and we were unable to recover it. 00:25:04.381 [2024-07-12 17:14:03.880003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.381 [2024-07-12 17:14:03.880038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.381 qpair failed and we were unable to recover it. 00:25:04.381 [2024-07-12 17:14:03.880179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.381 [2024-07-12 17:14:03.880209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.381 qpair failed and we were unable to recover it. 00:25:04.381 [2024-07-12 17:14:03.880376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.381 [2024-07-12 17:14:03.880406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.381 qpair failed and we were unable to recover it. 00:25:04.381 [2024-07-12 17:14:03.880539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.381 [2024-07-12 17:14:03.880568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.381 qpair failed and we were unable to recover it. 00:25:04.381 [2024-07-12 17:14:03.880700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.381 [2024-07-12 17:14:03.880730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.381 qpair failed and we were unable to recover it. 00:25:04.381 [2024-07-12 17:14:03.880852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.381 [2024-07-12 17:14:03.880882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.381 qpair failed and we were unable to recover it. 00:25:04.381 [2024-07-12 17:14:03.881024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.381 [2024-07-12 17:14:03.881053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.381 qpair failed and we were unable to recover it. 00:25:04.381 [2024-07-12 17:14:03.881150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.381 [2024-07-12 17:14:03.881185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.381 qpair failed and we were unable to recover it. 
00:25:04.381 [2024-07-12 17:14:03.881327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.381 [2024-07-12 17:14:03.881356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.382 qpair failed and we were unable to recover it. 00:25:04.382 [2024-07-12 17:14:03.881469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.382 [2024-07-12 17:14:03.881499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.382 qpair failed and we were unable to recover it. 00:25:04.382 [2024-07-12 17:14:03.881663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.382 [2024-07-12 17:14:03.881692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.382 qpair failed and we were unable to recover it. 00:25:04.382 [2024-07-12 17:14:03.881877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.382 [2024-07-12 17:14:03.881907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.382 qpair failed and we were unable to recover it. 00:25:04.382 [2024-07-12 17:14:03.882056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.382 [2024-07-12 17:14:03.882087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.382 qpair failed and we were unable to recover it. 00:25:04.382 [2024-07-12 17:14:03.882228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.382 [2024-07-12 17:14:03.882258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.382 qpair failed and we were unable to recover it. 00:25:04.382 [2024-07-12 17:14:03.882405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.382 [2024-07-12 17:14:03.882436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.382 qpair failed and we were unable to recover it. 00:25:04.382 [2024-07-12 17:14:03.882655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.382 [2024-07-12 17:14:03.882686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.382 qpair failed and we were unable to recover it. 00:25:04.382 [2024-07-12 17:14:03.882824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.382 [2024-07-12 17:14:03.882854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.382 qpair failed and we were unable to recover it. 00:25:04.382 [2024-07-12 17:14:03.883004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.382 [2024-07-12 17:14:03.883033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.382 qpair failed and we were unable to recover it. 
00:25:04.382 [2024-07-12 17:14:03.883201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.382 [2024-07-12 17:14:03.883231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.382 qpair failed and we were unable to recover it. 00:25:04.382 [2024-07-12 17:14:03.883361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.382 [2024-07-12 17:14:03.883391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.382 qpair failed and we were unable to recover it. 00:25:04.382 [2024-07-12 17:14:03.883524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.382 [2024-07-12 17:14:03.883554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.382 qpair failed and we were unable to recover it. 00:25:04.382 [2024-07-12 17:14:03.883716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.382 [2024-07-12 17:14:03.883755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.382 qpair failed and we were unable to recover it. 00:25:04.382 [2024-07-12 17:14:03.883873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.382 [2024-07-12 17:14:03.883902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.382 qpair failed and we were unable to recover it. 00:25:04.382 [2024-07-12 17:14:03.884048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.382 [2024-07-12 17:14:03.884078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.382 qpair failed and we were unable to recover it. 00:25:04.382 [2024-07-12 17:14:03.884221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.382 [2024-07-12 17:14:03.884251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.382 qpair failed and we were unable to recover it. 00:25:04.382 [2024-07-12 17:14:03.884371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.382 [2024-07-12 17:14:03.884400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.382 qpair failed and we were unable to recover it. 00:25:04.382 [2024-07-12 17:14:03.884547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.382 [2024-07-12 17:14:03.884578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.382 qpair failed and we were unable to recover it. 00:25:04.382 [2024-07-12 17:14:03.884750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.382 [2024-07-12 17:14:03.884781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.382 qpair failed and we were unable to recover it. 
00:25:04.382 [2024-07-12 17:14:03.884899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.382 [2024-07-12 17:14:03.884928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.382 qpair failed and we were unable to recover it. 00:25:04.382 [2024-07-12 17:14:03.885118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.382 [2024-07-12 17:14:03.885148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.382 qpair failed and we were unable to recover it. 00:25:04.382 [2024-07-12 17:14:03.885296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.382 [2024-07-12 17:14:03.885327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.382 qpair failed and we were unable to recover it. 00:25:04.382 [2024-07-12 17:14:03.885519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.382 [2024-07-12 17:14:03.885548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.382 qpair failed and we were unable to recover it. 00:25:04.382 [2024-07-12 17:14:03.885709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.382 [2024-07-12 17:14:03.885755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.382 qpair failed and we were unable to recover it. 00:25:04.382 [2024-07-12 17:14:03.885928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.382 [2024-07-12 17:14:03.885960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.382 qpair failed and we were unable to recover it. 00:25:04.382 [2024-07-12 17:14:03.886126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.382 [2024-07-12 17:14:03.886157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.382 qpair failed and we were unable to recover it. 00:25:04.382 [2024-07-12 17:14:03.886316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.382 [2024-07-12 17:14:03.886346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.382 qpair failed and we were unable to recover it. 00:25:04.382 [2024-07-12 17:14:03.886509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.382 [2024-07-12 17:14:03.886539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.382 qpair failed and we were unable to recover it. 00:25:04.382 [2024-07-12 17:14:03.886708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.382 [2024-07-12 17:14:03.886748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.382 qpair failed and we were unable to recover it. 
00:25:04.382 [2024-07-12 17:14:03.886924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.382 [2024-07-12 17:14:03.886955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.382 qpair failed and we were unable to recover it. 00:25:04.382 [2024-07-12 17:14:03.887102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.382 [2024-07-12 17:14:03.887132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.382 qpair failed and we were unable to recover it. 00:25:04.382 [2024-07-12 17:14:03.887286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.382 [2024-07-12 17:14:03.887321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.382 qpair failed and we were unable to recover it. 00:25:04.382 [2024-07-12 17:14:03.887453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.382 [2024-07-12 17:14:03.887483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.382 qpair failed and we were unable to recover it. 00:25:04.382 [2024-07-12 17:14:03.887637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.382 [2024-07-12 17:14:03.887667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.382 qpair failed and we were unable to recover it. 00:25:04.382 [2024-07-12 17:14:03.887859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.382 [2024-07-12 17:14:03.887889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.382 qpair failed and we were unable to recover it. 00:25:04.382 [2024-07-12 17:14:03.888017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.382 [2024-07-12 17:14:03.888048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.382 qpair failed and we were unable to recover it. 00:25:04.382 [2024-07-12 17:14:03.888238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.382 [2024-07-12 17:14:03.888269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.382 qpair failed and we were unable to recover it. 00:25:04.382 [2024-07-12 17:14:03.888432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.382 [2024-07-12 17:14:03.888461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.382 qpair failed and we were unable to recover it. 00:25:04.382 [2024-07-12 17:14:03.888590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.382 [2024-07-12 17:14:03.888620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.382 qpair failed and we were unable to recover it. 
00:25:04.382 [2024-07-12 17:14:03.888810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.382 [2024-07-12 17:14:03.888841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.383 qpair failed and we were unable to recover it. 00:25:04.383 [2024-07-12 17:14:03.888958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.383 [2024-07-12 17:14:03.888988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.383 qpair failed and we were unable to recover it. 00:25:04.383 [2024-07-12 17:14:03.889140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.383 [2024-07-12 17:14:03.889170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.383 qpair failed and we were unable to recover it. 00:25:04.383 [2024-07-12 17:14:03.889358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.383 [2024-07-12 17:14:03.889389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.383 qpair failed and we were unable to recover it. 00:25:04.383 [2024-07-12 17:14:03.889497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.383 [2024-07-12 17:14:03.889528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.383 qpair failed and we were unable to recover it. 00:25:04.383 [2024-07-12 17:14:03.889660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.383 [2024-07-12 17:14:03.889690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.383 qpair failed and we were unable to recover it. 00:25:04.383 [2024-07-12 17:14:03.889831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.383 [2024-07-12 17:14:03.889862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.383 qpair failed and we were unable to recover it. 00:25:04.383 [2024-07-12 17:14:03.890010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.383 [2024-07-12 17:14:03.890041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.383 qpair failed and we were unable to recover it. 00:25:04.383 [2024-07-12 17:14:03.890171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.383 [2024-07-12 17:14:03.890201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.383 qpair failed and we were unable to recover it. 00:25:04.383 [2024-07-12 17:14:03.890364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.383 [2024-07-12 17:14:03.890394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.383 qpair failed and we were unable to recover it. 
00:25:04.383 [2024-07-12 17:14:03.890555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.383 [2024-07-12 17:14:03.890585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.383 qpair failed and we were unable to recover it. 00:25:04.383 [2024-07-12 17:14:03.890710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.383 [2024-07-12 17:14:03.890746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.383 qpair failed and we were unable to recover it. 00:25:04.383 [2024-07-12 17:14:03.890942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.383 [2024-07-12 17:14:03.890972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.383 qpair failed and we were unable to recover it. 00:25:04.383 [2024-07-12 17:14:03.891094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.383 [2024-07-12 17:14:03.891123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.383 qpair failed and we were unable to recover it. 00:25:04.383 [2024-07-12 17:14:03.891252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.383 [2024-07-12 17:14:03.891281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.383 qpair failed and we were unable to recover it. 00:25:04.383 [2024-07-12 17:14:03.891408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.383 [2024-07-12 17:14:03.891438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.383 qpair failed and we were unable to recover it. 00:25:04.383 [2024-07-12 17:14:03.891550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.383 [2024-07-12 17:14:03.891580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.383 qpair failed and we were unable to recover it. 00:25:04.383 [2024-07-12 17:14:03.891698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.383 [2024-07-12 17:14:03.891728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.383 qpair failed and we were unable to recover it. 00:25:04.383 [2024-07-12 17:14:03.891898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.383 [2024-07-12 17:14:03.891929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.383 qpair failed and we were unable to recover it. 00:25:04.383 [2024-07-12 17:14:03.892090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.383 [2024-07-12 17:14:03.892121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.383 qpair failed and we were unable to recover it. 
00:25:04.383 [2024-07-12 17:14:03.892278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.383 [2024-07-12 17:14:03.892308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.383 qpair failed and we were unable to recover it. 00:25:04.383 [2024-07-12 17:14:03.892496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.383 [2024-07-12 17:14:03.892526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.383 qpair failed and we were unable to recover it. 00:25:04.383 [2024-07-12 17:14:03.892646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.383 [2024-07-12 17:14:03.892676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.383 qpair failed and we were unable to recover it. 00:25:04.383 [2024-07-12 17:14:03.892831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.383 [2024-07-12 17:14:03.892863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.383 qpair failed and we were unable to recover it. 00:25:04.383 [2024-07-12 17:14:03.893016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.383 [2024-07-12 17:14:03.893046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.383 qpair failed and we were unable to recover it. 00:25:04.383 [2024-07-12 17:14:03.893208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.383 [2024-07-12 17:14:03.893239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.383 qpair failed and we were unable to recover it. 00:25:04.383 [2024-07-12 17:14:03.893401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.383 [2024-07-12 17:14:03.893432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.383 qpair failed and we were unable to recover it. 00:25:04.383 [2024-07-12 17:14:03.893555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.383 [2024-07-12 17:14:03.893584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.383 qpair failed and we were unable to recover it. 00:25:04.383 [2024-07-12 17:14:03.893752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.383 [2024-07-12 17:14:03.893782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.383 qpair failed and we were unable to recover it. 00:25:04.383 [2024-07-12 17:14:03.893972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.383 [2024-07-12 17:14:03.894003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.383 qpair failed and we were unable to recover it. 
00:25:04.383 [2024-07-12 17:14:03.894165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.383 [2024-07-12 17:14:03.894195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.383 qpair failed and we were unable to recover it. 00:25:04.383 [2024-07-12 17:14:03.894384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.383 [2024-07-12 17:14:03.894415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.383 qpair failed and we were unable to recover it. 00:25:04.383 [2024-07-12 17:14:03.894555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.383 [2024-07-12 17:14:03.894590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.383 qpair failed and we were unable to recover it. 00:25:04.383 [2024-07-12 17:14:03.894713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.383 [2024-07-12 17:14:03.894762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.383 qpair failed and we were unable to recover it. 00:25:04.383 [2024-07-12 17:14:03.894894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.383 [2024-07-12 17:14:03.894924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.383 qpair failed and we were unable to recover it. 00:25:04.383 [2024-07-12 17:14:03.895080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.383 [2024-07-12 17:14:03.895111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.383 qpair failed and we were unable to recover it. 00:25:04.383 [2024-07-12 17:14:03.895266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.383 [2024-07-12 17:14:03.895295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.383 qpair failed and we were unable to recover it. 00:25:04.383 [2024-07-12 17:14:03.895441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.383 [2024-07-12 17:14:03.895471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.383 qpair failed and we were unable to recover it. 00:25:04.383 [2024-07-12 17:14:03.895604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.383 [2024-07-12 17:14:03.895634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.383 qpair failed and we were unable to recover it. 00:25:04.383 [2024-07-12 17:14:03.895794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.383 [2024-07-12 17:14:03.895824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.383 qpair failed and we were unable to recover it. 
00:25:04.383 [2024-07-12 17:14:03.895962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.383 [2024-07-12 17:14:03.895992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.383 qpair failed and we were unable to recover it. 00:25:04.383 [2024-07-12 17:14:03.896149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.384 [2024-07-12 17:14:03.896178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.384 qpair failed and we were unable to recover it. 00:25:04.384 [2024-07-12 17:14:03.896304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.384 [2024-07-12 17:14:03.896334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.384 qpair failed and we were unable to recover it. 00:25:04.384 [2024-07-12 17:14:03.896488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.384 [2024-07-12 17:14:03.896519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.384 qpair failed and we were unable to recover it. 00:25:04.384 [2024-07-12 17:14:03.896648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.384 [2024-07-12 17:14:03.896677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.384 qpair failed and we were unable to recover it. 00:25:04.384 [2024-07-12 17:14:03.896842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.384 [2024-07-12 17:14:03.896873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.384 qpair failed and we were unable to recover it. 00:25:04.384 [2024-07-12 17:14:03.897041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.384 [2024-07-12 17:14:03.897072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.384 qpair failed and we were unable to recover it. 00:25:04.384 [2024-07-12 17:14:03.897205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.384 [2024-07-12 17:14:03.897236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.384 qpair failed and we were unable to recover it. 00:25:04.384 [2024-07-12 17:14:03.897399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.384 [2024-07-12 17:14:03.897429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.384 qpair failed and we were unable to recover it. 00:25:04.384 [2024-07-12 17:14:03.897574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.384 [2024-07-12 17:14:03.897605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.384 qpair failed and we were unable to recover it. 
00:25:04.384 [2024-07-12 17:14:03.897767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.384 [2024-07-12 17:14:03.897798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.384 qpair failed and we were unable to recover it. 00:25:04.384 [2024-07-12 17:14:03.897964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.384 [2024-07-12 17:14:03.897995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.384 qpair failed and we were unable to recover it. 00:25:04.384 [2024-07-12 17:14:03.898163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.384 [2024-07-12 17:14:03.898193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.384 qpair failed and we were unable to recover it. 00:25:04.384 [2024-07-12 17:14:03.898351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.384 [2024-07-12 17:14:03.898381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.384 qpair failed and we were unable to recover it. 00:25:04.384 [2024-07-12 17:14:03.898507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.384 [2024-07-12 17:14:03.898537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.384 qpair failed and we were unable to recover it. 00:25:04.384 [2024-07-12 17:14:03.898685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.384 [2024-07-12 17:14:03.898715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.384 qpair failed and we were unable to recover it. 00:25:04.384 [2024-07-12 17:14:03.898914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.384 [2024-07-12 17:14:03.898945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.384 qpair failed and we were unable to recover it. 00:25:04.384 [2024-07-12 17:14:03.899049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.384 [2024-07-12 17:14:03.899079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.384 qpair failed and we were unable to recover it. 00:25:04.384 [2024-07-12 17:14:03.899227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.384 [2024-07-12 17:14:03.899257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.384 qpair failed and we were unable to recover it. 00:25:04.384 [2024-07-12 17:14:03.899422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.384 [2024-07-12 17:14:03.899452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.384 qpair failed and we were unable to recover it. 
00:25:04.384 [2024-07-12 17:14:03.899602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.384 [2024-07-12 17:14:03.899632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.384 qpair failed and we were unable to recover it. 00:25:04.384 [2024-07-12 17:14:03.899824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.384 [2024-07-12 17:14:03.899856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.384 qpair failed and we were unable to recover it. 00:25:04.384 [2024-07-12 17:14:03.900007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.384 [2024-07-12 17:14:03.900037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.384 qpair failed and we were unable to recover it. 00:25:04.384 [2024-07-12 17:14:03.900198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.384 [2024-07-12 17:14:03.900228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.384 qpair failed and we were unable to recover it. 00:25:04.384 [2024-07-12 17:14:03.900390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.384 [2024-07-12 17:14:03.900421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.384 qpair failed and we were unable to recover it. 00:25:04.384 [2024-07-12 17:14:03.900566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.384 [2024-07-12 17:14:03.900597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.384 qpair failed and we were unable to recover it. 00:25:04.384 [2024-07-12 17:14:03.900756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.384 [2024-07-12 17:14:03.900787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.384 qpair failed and we were unable to recover it. 00:25:04.384 [2024-07-12 17:14:03.900977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.384 [2024-07-12 17:14:03.901007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.384 qpair failed and we were unable to recover it. 00:25:04.384 [2024-07-12 17:14:03.901175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.384 [2024-07-12 17:14:03.901205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.384 qpair failed and we were unable to recover it. 00:25:04.384 [2024-07-12 17:14:03.901397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.384 [2024-07-12 17:14:03.901427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.384 qpair failed and we were unable to recover it. 
00:25:04.384 [2024-07-12 17:14:03.901549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.384 [2024-07-12 17:14:03.901578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.384 qpair failed and we were unable to recover it. 00:25:04.384 [2024-07-12 17:14:03.901735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.384 [2024-07-12 17:14:03.901783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.384 qpair failed and we were unable to recover it. 00:25:04.384 [2024-07-12 17:14:03.901938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.384 [2024-07-12 17:14:03.901973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.384 qpair failed and we were unable to recover it. 00:25:04.384 [2024-07-12 17:14:03.902182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.384 [2024-07-12 17:14:03.902213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.384 qpair failed and we were unable to recover it. 00:25:04.384 [2024-07-12 17:14:03.902383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.384 [2024-07-12 17:14:03.902413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.384 qpair failed and we were unable to recover it. 00:25:04.384 [2024-07-12 17:14:03.902542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.384 [2024-07-12 17:14:03.902572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.384 qpair failed and we were unable to recover it. 00:25:04.384 [2024-07-12 17:14:03.902692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.384 [2024-07-12 17:14:03.902723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.384 qpair failed and we were unable to recover it. 00:25:04.384 [2024-07-12 17:14:03.902858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.384 [2024-07-12 17:14:03.902887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.384 qpair failed and we were unable to recover it. 00:25:04.384 [2024-07-12 17:14:03.903078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.384 [2024-07-12 17:14:03.903108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.384 qpair failed and we were unable to recover it. 00:25:04.384 [2024-07-12 17:14:03.903267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.384 [2024-07-12 17:14:03.903298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.384 qpair failed and we were unable to recover it. 
00:25:04.384 [2024-07-12 17:14:03.903491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.384 [2024-07-12 17:14:03.903521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.384 qpair failed and we were unable to recover it. 00:25:04.384 [2024-07-12 17:14:03.903667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.384 [2024-07-12 17:14:03.903697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.384 qpair failed and we were unable to recover it. 00:25:04.385 [2024-07-12 17:14:03.903857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.385 [2024-07-12 17:14:03.903887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.385 qpair failed and we were unable to recover it. 00:25:04.385 [2024-07-12 17:14:03.904033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.385 [2024-07-12 17:14:03.904064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.385 qpair failed and we were unable to recover it. 00:25:04.385 [2024-07-12 17:14:03.904259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.385 [2024-07-12 17:14:03.904290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.385 qpair failed and we were unable to recover it. 00:25:04.385 [2024-07-12 17:14:03.904444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.385 [2024-07-12 17:14:03.904474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.385 qpair failed and we were unable to recover it. 00:25:04.385 [2024-07-12 17:14:03.904581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.385 [2024-07-12 17:14:03.904611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.385 qpair failed and we were unable to recover it. 00:25:04.385 [2024-07-12 17:14:03.904803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.385 [2024-07-12 17:14:03.904834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.385 qpair failed and we were unable to recover it. 00:25:04.385 [2024-07-12 17:14:03.904958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.385 [2024-07-12 17:14:03.904988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.385 qpair failed and we were unable to recover it. 00:25:04.385 [2024-07-12 17:14:03.905183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.385 [2024-07-12 17:14:03.905214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.385 qpair failed and we were unable to recover it. 
00:25:04.385 [2024-07-12 17:14:03.905346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.385 [2024-07-12 17:14:03.905377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.385 qpair failed and we were unable to recover it. 00:25:04.385 [2024-07-12 17:14:03.905568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.385 [2024-07-12 17:14:03.905598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.385 qpair failed and we were unable to recover it. 00:25:04.385 [2024-07-12 17:14:03.905768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.385 [2024-07-12 17:14:03.905799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.385 qpair failed and we were unable to recover it. 00:25:04.385 [2024-07-12 17:14:03.905959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.385 [2024-07-12 17:14:03.905989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.385 qpair failed and we were unable to recover it. 00:25:04.385 [2024-07-12 17:14:03.906106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.385 [2024-07-12 17:14:03.906137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.385 qpair failed and we were unable to recover it. 00:25:04.385 [2024-07-12 17:14:03.906340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.385 [2024-07-12 17:14:03.906370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.385 qpair failed and we were unable to recover it. 00:25:04.385 [2024-07-12 17:14:03.906565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.385 [2024-07-12 17:14:03.906595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.385 qpair failed and we were unable to recover it. 00:25:04.385 [2024-07-12 17:14:03.906788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.385 [2024-07-12 17:14:03.906819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.385 qpair failed and we were unable to recover it. 00:25:04.385 [2024-07-12 17:14:03.907010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.385 [2024-07-12 17:14:03.907040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.385 qpair failed and we were unable to recover it. 00:25:04.385 [2024-07-12 17:14:03.907237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.385 [2024-07-12 17:14:03.907267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.385 qpair failed and we were unable to recover it. 
00:25:04.385 [2024-07-12 17:14:03.907458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.385 [2024-07-12 17:14:03.907489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.385 qpair failed and we were unable to recover it. 00:25:04.385 [2024-07-12 17:14:03.907649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.385 [2024-07-12 17:14:03.907680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.385 qpair failed and we were unable to recover it. 00:25:04.385 [2024-07-12 17:14:03.907824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.385 [2024-07-12 17:14:03.907855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.385 qpair failed and we were unable to recover it. 00:25:04.385 [2024-07-12 17:14:03.908015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.385 [2024-07-12 17:14:03.908045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.385 qpair failed and we were unable to recover it. 00:25:04.385 [2024-07-12 17:14:03.908193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.385 [2024-07-12 17:14:03.908224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.385 qpair failed and we were unable to recover it. 00:25:04.385 [2024-07-12 17:14:03.908417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.385 [2024-07-12 17:14:03.908448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.385 qpair failed and we were unable to recover it. 00:25:04.385 [2024-07-12 17:14:03.908604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.385 [2024-07-12 17:14:03.908634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.385 qpair failed and we were unable to recover it. 00:25:04.385 [2024-07-12 17:14:03.908788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.385 [2024-07-12 17:14:03.908818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.385 qpair failed and we were unable to recover it. 00:25:04.385 [2024-07-12 17:14:03.908940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.385 [2024-07-12 17:14:03.908971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.385 qpair failed and we were unable to recover it. 00:25:04.385 [2024-07-12 17:14:03.909114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.385 [2024-07-12 17:14:03.909144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.385 qpair failed and we were unable to recover it. 
00:25:04.385 [2024-07-12 17:14:03.909318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.385 [2024-07-12 17:14:03.909349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.385 qpair failed and we were unable to recover it. 00:25:04.385 [2024-07-12 17:14:03.909521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.385 [2024-07-12 17:14:03.909551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.385 qpair failed and we were unable to recover it. 00:25:04.385 [2024-07-12 17:14:03.909688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.385 [2024-07-12 17:14:03.909722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.385 qpair failed and we were unable to recover it. 00:25:04.385 [2024-07-12 17:14:03.909897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.385 [2024-07-12 17:14:03.909928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.385 qpair failed and we were unable to recover it. 00:25:04.385 [2024-07-12 17:14:03.910125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.385 [2024-07-12 17:14:03.910154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.385 qpair failed and we were unable to recover it. 00:25:04.385 [2024-07-12 17:14:03.910311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.385 [2024-07-12 17:14:03.910341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.385 qpair failed and we were unable to recover it. 00:25:04.385 [2024-07-12 17:14:03.910506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.385 [2024-07-12 17:14:03.910537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.385 qpair failed and we were unable to recover it. 00:25:04.385 [2024-07-12 17:14:03.910645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.385 [2024-07-12 17:14:03.910674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.385 qpair failed and we were unable to recover it. 00:25:04.385 [2024-07-12 17:14:03.910890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.385 [2024-07-12 17:14:03.910921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.385 qpair failed and we were unable to recover it. 00:25:04.385 [2024-07-12 17:14:03.911122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.386 [2024-07-12 17:14:03.911153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.386 qpair failed and we were unable to recover it. 
00:25:04.386 [2024-07-12 17:14:03.911388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.386 [2024-07-12 17:14:03.911419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.386 qpair failed and we were unable to recover it. 00:25:04.386 [2024-07-12 17:14:03.911564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.386 [2024-07-12 17:14:03.911594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.386 qpair failed and we were unable to recover it. 00:25:04.386 [2024-07-12 17:14:03.911768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.386 [2024-07-12 17:14:03.911810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.386 qpair failed and we were unable to recover it. 00:25:04.386 [2024-07-12 17:14:03.911968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.386 [2024-07-12 17:14:03.911999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.386 qpair failed and we were unable to recover it. 00:25:04.386 [2024-07-12 17:14:03.912147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.386 [2024-07-12 17:14:03.912177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.386 qpair failed and we were unable to recover it. 00:25:04.386 [2024-07-12 17:14:03.912327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.386 [2024-07-12 17:14:03.912357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.386 qpair failed and we were unable to recover it. 00:25:04.386 [2024-07-12 17:14:03.912486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.386 [2024-07-12 17:14:03.912530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.386 qpair failed and we were unable to recover it. 00:25:04.386 [2024-07-12 17:14:03.912745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.386 [2024-07-12 17:14:03.912776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.386 qpair failed and we were unable to recover it. 00:25:04.386 [2024-07-12 17:14:03.912903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.386 [2024-07-12 17:14:03.912933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.386 qpair failed and we were unable to recover it. 00:25:04.386 [2024-07-12 17:14:03.913048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.386 [2024-07-12 17:14:03.913078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.386 qpair failed and we were unable to recover it. 
00:25:04.386 [2024-07-12 17:14:03.913274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.386 [2024-07-12 17:14:03.913305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.386 qpair failed and we were unable to recover it. 00:25:04.386 [2024-07-12 17:14:03.913451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.386 [2024-07-12 17:14:03.913480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.386 qpair failed and we were unable to recover it. 00:25:04.386 [2024-07-12 17:14:03.913631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.386 [2024-07-12 17:14:03.913662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.386 qpair failed and we were unable to recover it. 00:25:04.386 [2024-07-12 17:14:03.913779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.386 [2024-07-12 17:14:03.913812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.386 qpair failed and we were unable to recover it. 00:25:04.386 [2024-07-12 17:14:03.913975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.386 [2024-07-12 17:14:03.914005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.386 qpair failed and we were unable to recover it. 00:25:04.386 [2024-07-12 17:14:03.914113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.386 [2024-07-12 17:14:03.914143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.386 qpair failed and we were unable to recover it. 00:25:04.386 [2024-07-12 17:14:03.914301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.386 [2024-07-12 17:14:03.914332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.386 qpair failed and we were unable to recover it. 00:25:04.386 [2024-07-12 17:14:03.914523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.386 [2024-07-12 17:14:03.914555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.386 qpair failed and we were unable to recover it. 00:25:04.386 [2024-07-12 17:14:03.914707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.386 [2024-07-12 17:14:03.914745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.386 qpair failed and we were unable to recover it. 00:25:04.386 [2024-07-12 17:14:03.914914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.386 [2024-07-12 17:14:03.914945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.386 qpair failed and we were unable to recover it. 
00:25:04.386 [2024-07-12 17:14:03.915143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.386 [2024-07-12 17:14:03.915174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.386 qpair failed and we were unable to recover it. 00:25:04.386 [2024-07-12 17:14:03.915368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.386 [2024-07-12 17:14:03.915398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.386 qpair failed and we were unable to recover it. 00:25:04.386 [2024-07-12 17:14:03.915552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.386 [2024-07-12 17:14:03.915594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.386 qpair failed and we were unable to recover it. 00:25:04.386 [2024-07-12 17:14:03.915792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.386 [2024-07-12 17:14:03.915823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.386 qpair failed and we were unable to recover it. 00:25:04.386 [2024-07-12 17:14:03.915979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.386 [2024-07-12 17:14:03.916009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.386 qpair failed and we were unable to recover it. 00:25:04.386 [2024-07-12 17:14:03.916157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.386 [2024-07-12 17:14:03.916188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.386 qpair failed and we were unable to recover it. 00:25:04.386 [2024-07-12 17:14:03.916340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.386 [2024-07-12 17:14:03.916370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.386 qpair failed and we were unable to recover it. 00:25:04.386 [2024-07-12 17:14:03.916560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.386 [2024-07-12 17:14:03.916589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.386 qpair failed and we were unable to recover it. 00:25:04.386 [2024-07-12 17:14:03.916752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.386 [2024-07-12 17:14:03.916782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.386 qpair failed and we were unable to recover it. 00:25:04.386 [2024-07-12 17:14:03.916935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.386 [2024-07-12 17:14:03.916966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.386 qpair failed and we were unable to recover it. 
00:25:04.386 [2024-07-12 17:14:03.917116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.386 [2024-07-12 17:14:03.917146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.386 qpair failed and we were unable to recover it. 00:25:04.386 [2024-07-12 17:14:03.917309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.386 [2024-07-12 17:14:03.917340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.386 qpair failed and we were unable to recover it. 00:25:04.386 [2024-07-12 17:14:03.917497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.386 [2024-07-12 17:14:03.917532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.386 qpair failed and we were unable to recover it. 00:25:04.386 [2024-07-12 17:14:03.917687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.386 [2024-07-12 17:14:03.917717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.386 qpair failed and we were unable to recover it. 00:25:04.386 [2024-07-12 17:14:03.917896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.386 [2024-07-12 17:14:03.917926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.386 qpair failed and we were unable to recover it. 00:25:04.386 [2024-07-12 17:14:03.918095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.386 [2024-07-12 17:14:03.918125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.386 qpair failed and we were unable to recover it. 00:25:04.387 [2024-07-12 17:14:03.918315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.387 [2024-07-12 17:14:03.918346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.387 qpair failed and we were unable to recover it. 00:25:04.387 [2024-07-12 17:14:03.918458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.387 [2024-07-12 17:14:03.918489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.387 qpair failed and we were unable to recover it. 00:25:04.387 [2024-07-12 17:14:03.918638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.387 [2024-07-12 17:14:03.918668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.387 qpair failed and we were unable to recover it. 00:25:04.387 [2024-07-12 17:14:03.918836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.387 [2024-07-12 17:14:03.918868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.387 qpair failed and we were unable to recover it. 
00:25:04.387 [2024-07-12 17:14:03.919030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.387 [2024-07-12 17:14:03.919061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.387 qpair failed and we were unable to recover it. 00:25:04.387 [2024-07-12 17:14:03.919218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.387 [2024-07-12 17:14:03.919248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.387 qpair failed and we were unable to recover it. 00:25:04.387 [2024-07-12 17:14:03.919398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.387 [2024-07-12 17:14:03.919427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.387 qpair failed and we were unable to recover it. 00:25:04.387 [2024-07-12 17:14:03.919625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.387 [2024-07-12 17:14:03.919654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.387 qpair failed and we were unable to recover it. 00:25:04.387 [2024-07-12 17:14:03.919781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.387 [2024-07-12 17:14:03.919811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.387 qpair failed and we were unable to recover it. 00:25:04.387 [2024-07-12 17:14:03.919935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.387 [2024-07-12 17:14:03.919965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.387 qpair failed and we were unable to recover it. 00:25:04.387 [2024-07-12 17:14:03.920134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.387 [2024-07-12 17:14:03.920164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.387 qpair failed and we were unable to recover it. 00:25:04.387 [2024-07-12 17:14:03.920353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.387 [2024-07-12 17:14:03.920384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.387 qpair failed and we were unable to recover it. 00:25:04.387 [2024-07-12 17:14:03.920539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.387 [2024-07-12 17:14:03.920569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.387 qpair failed and we were unable to recover it. 00:25:04.387 [2024-07-12 17:14:03.920715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.387 [2024-07-12 17:14:03.920751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.387 qpair failed and we were unable to recover it. 
00:25:04.387 [2024-07-12 17:14:03.920906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.387 [2024-07-12 17:14:03.920936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.387 qpair failed and we were unable to recover it. 00:25:04.387 [2024-07-12 17:14:03.921128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.387 [2024-07-12 17:14:03.921159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.387 qpair failed and we were unable to recover it. 00:25:04.387 [2024-07-12 17:14:03.921363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.387 [2024-07-12 17:14:03.921394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.387 qpair failed and we were unable to recover it. 00:25:04.387 [2024-07-12 17:14:03.921582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.387 [2024-07-12 17:14:03.921613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.387 qpair failed and we were unable to recover it. 00:25:04.387 [2024-07-12 17:14:03.921774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.387 [2024-07-12 17:14:03.921806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.387 qpair failed and we were unable to recover it. 00:25:04.387 [2024-07-12 17:14:03.921967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.387 [2024-07-12 17:14:03.921996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.387 qpair failed and we were unable to recover it. 00:25:04.387 [2024-07-12 17:14:03.922160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.387 [2024-07-12 17:14:03.922191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.387 qpair failed and we were unable to recover it. 00:25:04.387 [2024-07-12 17:14:03.922383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.387 [2024-07-12 17:14:03.922414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.387 qpair failed and we were unable to recover it. 00:25:04.387 [2024-07-12 17:14:03.922561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.387 [2024-07-12 17:14:03.922590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.387 qpair failed and we were unable to recover it. 00:25:04.387 [2024-07-12 17:14:03.922714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.387 [2024-07-12 17:14:03.922756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.387 qpair failed and we were unable to recover it. 
00:25:04.387 [2024-07-12 17:14:03.922877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.387 [2024-07-12 17:14:03.922907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.387 qpair failed and we were unable to recover it. 00:25:04.387 [2024-07-12 17:14:03.923091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.387 [2024-07-12 17:14:03.923122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.387 qpair failed and we were unable to recover it. 00:25:04.387 [2024-07-12 17:14:03.923251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.387 [2024-07-12 17:14:03.923281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.387 qpair failed and we were unable to recover it. 00:25:04.387 [2024-07-12 17:14:03.923468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.387 [2024-07-12 17:14:03.923498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.387 qpair failed and we were unable to recover it. 00:25:04.387 [2024-07-12 17:14:03.923696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.387 [2024-07-12 17:14:03.923727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.387 qpair failed and we were unable to recover it. 00:25:04.387 [2024-07-12 17:14:03.923933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.387 [2024-07-12 17:14:03.923964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.387 qpair failed and we were unable to recover it. 00:25:04.387 [2024-07-12 17:14:03.924157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.387 [2024-07-12 17:14:03.924188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.387 qpair failed and we were unable to recover it. 00:25:04.387 [2024-07-12 17:14:03.924382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.387 [2024-07-12 17:14:03.924413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.387 qpair failed and we were unable to recover it. 00:25:04.387 [2024-07-12 17:14:03.924602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.387 [2024-07-12 17:14:03.924631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.387 qpair failed and we were unable to recover it. 00:25:04.387 [2024-07-12 17:14:03.924793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.387 [2024-07-12 17:14:03.924824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.387 qpair failed and we were unable to recover it. 
00:25:04.387 [2024-07-12 17:14:03.925014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.387 [2024-07-12 17:14:03.925043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.387 qpair failed and we were unable to recover it. 00:25:04.387 [2024-07-12 17:14:03.925199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.387 [2024-07-12 17:14:03.925229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.387 qpair failed and we were unable to recover it. 00:25:04.387 [2024-07-12 17:14:03.925376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.387 [2024-07-12 17:14:03.925405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.387 qpair failed and we were unable to recover it. 00:25:04.387 [2024-07-12 17:14:03.925546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.387 [2024-07-12 17:14:03.925577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.388 qpair failed and we were unable to recover it. 00:25:04.388 [2024-07-12 17:14:03.925711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.388 [2024-07-12 17:14:03.925747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.388 qpair failed and we were unable to recover it. 00:25:04.388 [2024-07-12 17:14:03.925938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.388 [2024-07-12 17:14:03.925968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.388 qpair failed and we were unable to recover it. 00:25:04.388 [2024-07-12 17:14:03.926157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.388 [2024-07-12 17:14:03.926188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.388 qpair failed and we were unable to recover it. 00:25:04.388 [2024-07-12 17:14:03.926381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.388 [2024-07-12 17:14:03.926412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.388 qpair failed and we were unable to recover it. 00:25:04.388 [2024-07-12 17:14:03.926574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.388 [2024-07-12 17:14:03.926604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.388 qpair failed and we were unable to recover it. 00:25:04.388 [2024-07-12 17:14:03.926762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.388 [2024-07-12 17:14:03.926792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.388 qpair failed and we were unable to recover it. 
00:25:04.388 [2024-07-12 17:14:03.926949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.388 [2024-07-12 17:14:03.926979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.388 qpair failed and we were unable to recover it. 00:25:04.388 [2024-07-12 17:14:03.927141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.388 [2024-07-12 17:14:03.927172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.388 qpair failed and we were unable to recover it. 00:25:04.388 [2024-07-12 17:14:03.927332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.388 [2024-07-12 17:14:03.927362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.388 qpair failed and we were unable to recover it. 00:25:04.388 [2024-07-12 17:14:03.927492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.388 [2024-07-12 17:14:03.927533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.388 qpair failed and we were unable to recover it. 00:25:04.388 [2024-07-12 17:14:03.927743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.388 [2024-07-12 17:14:03.927775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.388 qpair failed and we were unable to recover it. 00:25:04.388 [2024-07-12 17:14:03.927936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.388 [2024-07-12 17:14:03.927967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.388 qpair failed and we were unable to recover it. 00:25:04.388 [2024-07-12 17:14:03.928138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.388 [2024-07-12 17:14:03.928169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.388 qpair failed and we were unable to recover it. 00:25:04.388 [2024-07-12 17:14:03.928365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.388 [2024-07-12 17:14:03.928395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.388 qpair failed and we were unable to recover it. 00:25:04.388 [2024-07-12 17:14:03.928525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.388 [2024-07-12 17:14:03.928554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.388 qpair failed and we were unable to recover it. 00:25:04.388 [2024-07-12 17:14:03.928751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.388 [2024-07-12 17:14:03.928783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.388 qpair failed and we were unable to recover it. 
00:25:04.388 [2024-07-12 17:14:03.928931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.388 [2024-07-12 17:14:03.928962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.388 qpair failed and we were unable to recover it. 00:25:04.388 [2024-07-12 17:14:03.929113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.388 [2024-07-12 17:14:03.929143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.388 qpair failed and we were unable to recover it. 00:25:04.388 [2024-07-12 17:14:03.929273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.388 [2024-07-12 17:14:03.929303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.388 qpair failed and we were unable to recover it. 00:25:04.388 [2024-07-12 17:14:03.929493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.388 [2024-07-12 17:14:03.929524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.388 qpair failed and we were unable to recover it. 00:25:04.388 [2024-07-12 17:14:03.929722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.388 [2024-07-12 17:14:03.929769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.388 qpair failed and we were unable to recover it. 00:25:04.388 [2024-07-12 17:14:03.929899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.388 [2024-07-12 17:14:03.929930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.388 qpair failed and we were unable to recover it. 00:25:04.388 [2024-07-12 17:14:03.930120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.388 [2024-07-12 17:14:03.930150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.388 qpair failed and we were unable to recover it. 00:25:04.388 [2024-07-12 17:14:03.930346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.388 [2024-07-12 17:14:03.930376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.388 qpair failed and we were unable to recover it. 00:25:04.388 [2024-07-12 17:14:03.930536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.388 [2024-07-12 17:14:03.930566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.388 qpair failed and we were unable to recover it. 00:25:04.388 [2024-07-12 17:14:03.930760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.388 [2024-07-12 17:14:03.930796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.388 qpair failed and we were unable to recover it. 
00:25:04.388 [2024-07-12 17:14:03.930953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.388 [2024-07-12 17:14:03.930984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.388 qpair failed and we were unable to recover it. 00:25:04.388 [2024-07-12 17:14:03.931146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.388 [2024-07-12 17:14:03.931178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.388 qpair failed and we were unable to recover it. 00:25:04.388 [2024-07-12 17:14:03.931338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.388 [2024-07-12 17:14:03.931368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.388 qpair failed and we were unable to recover it. 00:25:04.388 [2024-07-12 17:14:03.931522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.388 [2024-07-12 17:14:03.931553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.388 qpair failed and we were unable to recover it. 00:25:04.388 [2024-07-12 17:14:03.931715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.388 [2024-07-12 17:14:03.931752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.388 qpair failed and we were unable to recover it. 00:25:04.388 [2024-07-12 17:14:03.931915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.388 [2024-07-12 17:14:03.931945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.388 qpair failed and we were unable to recover it. 00:25:04.388 [2024-07-12 17:14:03.932143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.388 [2024-07-12 17:14:03.932172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.388 qpair failed and we were unable to recover it. 00:25:04.388 [2024-07-12 17:14:03.932335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.388 [2024-07-12 17:14:03.932364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.388 qpair failed and we were unable to recover it. 00:25:04.388 [2024-07-12 17:14:03.932532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.388 [2024-07-12 17:14:03.932563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.388 qpair failed and we were unable to recover it. 00:25:04.388 [2024-07-12 17:14:03.932721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.388 [2024-07-12 17:14:03.932757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.388 qpair failed and we were unable to recover it. 
00:25:04.388 [2024-07-12 17:14:03.932914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.388 [2024-07-12 17:14:03.932944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.388 qpair failed and we were unable to recover it. 00:25:04.388 [2024-07-12 17:14:03.933137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.388 [2024-07-12 17:14:03.933167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.388 qpair failed and we were unable to recover it. 00:25:04.388 [2024-07-12 17:14:03.933327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.388 [2024-07-12 17:14:03.933356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.388 qpair failed and we were unable to recover it. 00:25:04.388 [2024-07-12 17:14:03.933553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.388 [2024-07-12 17:14:03.933584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.388 qpair failed and we were unable to recover it. 00:25:04.388 [2024-07-12 17:14:03.933774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.389 [2024-07-12 17:14:03.933805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.389 qpair failed and we were unable to recover it. 00:25:04.389 [2024-07-12 17:14:03.933968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.389 [2024-07-12 17:14:03.933998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.389 qpair failed and we were unable to recover it. 00:25:04.389 [2024-07-12 17:14:03.934127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.389 [2024-07-12 17:14:03.934157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.389 qpair failed and we were unable to recover it. 00:25:04.389 [2024-07-12 17:14:03.934326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.389 [2024-07-12 17:14:03.934356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.389 qpair failed and we were unable to recover it. 00:25:04.389 [2024-07-12 17:14:03.934554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.389 [2024-07-12 17:14:03.934585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.389 qpair failed and we were unable to recover it. 00:25:04.389 [2024-07-12 17:14:03.934786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.389 [2024-07-12 17:14:03.934818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.389 qpair failed and we were unable to recover it. 
00:25:04.389 [2024-07-12 17:14:03.935011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.389 [2024-07-12 17:14:03.935041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.389 qpair failed and we were unable to recover it. 00:25:04.389 [2024-07-12 17:14:03.935167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.389 [2024-07-12 17:14:03.935198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.389 qpair failed and we were unable to recover it. 00:25:04.389 [2024-07-12 17:14:03.935354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.389 [2024-07-12 17:14:03.935383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.389 qpair failed and we were unable to recover it. 00:25:04.389 [2024-07-12 17:14:03.935556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.389 [2024-07-12 17:14:03.935585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.389 qpair failed and we were unable to recover it. 00:25:04.389 [2024-07-12 17:14:03.935779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.389 [2024-07-12 17:14:03.935811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.389 qpair failed and we were unable to recover it. 00:25:04.389 [2024-07-12 17:14:03.935959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.389 [2024-07-12 17:14:03.935990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.389 qpair failed and we were unable to recover it. 00:25:04.389 [2024-07-12 17:14:03.936156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.389 [2024-07-12 17:14:03.936185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.389 qpair failed and we were unable to recover it. 00:25:04.389 [2024-07-12 17:14:03.936355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.389 [2024-07-12 17:14:03.936385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.389 qpair failed and we were unable to recover it. 00:25:04.389 [2024-07-12 17:14:03.936546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.389 [2024-07-12 17:14:03.936576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.389 qpair failed and we were unable to recover it. 00:25:04.389 [2024-07-12 17:14:03.936751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.389 [2024-07-12 17:14:03.936783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.389 qpair failed and we were unable to recover it. 
00:25:04.389 [2024-07-12 17:14:03.936936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.389 [2024-07-12 17:14:03.936967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.389 qpair failed and we were unable to recover it. 00:25:04.389 [2024-07-12 17:14:03.937113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.389 [2024-07-12 17:14:03.937142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.389 qpair failed and we were unable to recover it. 00:25:04.389 [2024-07-12 17:14:03.937326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.389 [2024-07-12 17:14:03.937356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.389 qpair failed and we were unable to recover it. 00:25:04.389 [2024-07-12 17:14:03.937554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.389 [2024-07-12 17:14:03.937584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.389 qpair failed and we were unable to recover it. 00:25:04.389 [2024-07-12 17:14:03.937690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.389 [2024-07-12 17:14:03.937719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.389 qpair failed and we were unable to recover it. 00:25:04.389 [2024-07-12 17:14:03.937900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.389 [2024-07-12 17:14:03.937930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.389 qpair failed and we were unable to recover it. 00:25:04.389 [2024-07-12 17:14:03.938091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.389 [2024-07-12 17:14:03.938121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.389 qpair failed and we were unable to recover it. 00:25:04.389 [2024-07-12 17:14:03.938312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.389 [2024-07-12 17:14:03.938342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.389 qpair failed and we were unable to recover it. 00:25:04.389 [2024-07-12 17:14:03.938507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.389 [2024-07-12 17:14:03.938539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.389 qpair failed and we were unable to recover it. 00:25:04.389 [2024-07-12 17:14:03.938697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.389 [2024-07-12 17:14:03.938732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.389 qpair failed and we were unable to recover it. 
00:25:04.389 [2024-07-12 17:14:03.938935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.389 [2024-07-12 17:14:03.938965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.389 qpair failed and we were unable to recover it. 00:25:04.389 [2024-07-12 17:14:03.939125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.389 [2024-07-12 17:14:03.939155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.389 qpair failed and we were unable to recover it. 00:25:04.389 [2024-07-12 17:14:03.939317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.389 [2024-07-12 17:14:03.939348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.389 qpair failed and we were unable to recover it. 00:25:04.389 [2024-07-12 17:14:03.939543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.389 [2024-07-12 17:14:03.939573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.389 qpair failed and we were unable to recover it. 00:25:04.389 [2024-07-12 17:14:03.939747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.389 [2024-07-12 17:14:03.939779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.389 qpair failed and we were unable to recover it. 00:25:04.389 [2024-07-12 17:14:03.939971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.389 [2024-07-12 17:14:03.940001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.389 qpair failed and we were unable to recover it. 00:25:04.389 [2024-07-12 17:14:03.940196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.389 [2024-07-12 17:14:03.940225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.389 qpair failed and we were unable to recover it. 00:25:04.389 [2024-07-12 17:14:03.940418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.389 [2024-07-12 17:14:03.940448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.389 qpair failed and we were unable to recover it. 00:25:04.389 [2024-07-12 17:14:03.940641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.389 [2024-07-12 17:14:03.940672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.389 qpair failed and we were unable to recover it. 00:25:04.389 [2024-07-12 17:14:03.940841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.389 [2024-07-12 17:14:03.940872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.389 qpair failed and we were unable to recover it. 
00:25:04.389 [2024-07-12 17:14:03.941065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.389 [2024-07-12 17:14:03.941096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.389 qpair failed and we were unable to recover it. 00:25:04.389 [2024-07-12 17:14:03.941295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.389 [2024-07-12 17:14:03.941326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.389 qpair failed and we were unable to recover it. 00:25:04.389 [2024-07-12 17:14:03.941487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.389 [2024-07-12 17:14:03.941516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.389 qpair failed and we were unable to recover it. 00:25:04.389 [2024-07-12 17:14:03.941633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.389 [2024-07-12 17:14:03.941663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.389 qpair failed and we were unable to recover it. 00:25:04.389 [2024-07-12 17:14:03.941852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.389 [2024-07-12 17:14:03.941882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.389 qpair failed and we were unable to recover it. 00:25:04.390 [2024-07-12 17:14:03.942077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.390 [2024-07-12 17:14:03.942107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.390 qpair failed and we were unable to recover it. 00:25:04.390 [2024-07-12 17:14:03.942269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.390 [2024-07-12 17:14:03.942299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.390 qpair failed and we were unable to recover it. 00:25:04.390 [2024-07-12 17:14:03.942457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.390 [2024-07-12 17:14:03.942487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.390 qpair failed and we were unable to recover it. 00:25:04.390 [2024-07-12 17:14:03.942592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.390 [2024-07-12 17:14:03.942622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.390 qpair failed and we were unable to recover it. 00:25:04.390 [2024-07-12 17:14:03.942842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.390 [2024-07-12 17:14:03.942874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.390 qpair failed and we were unable to recover it. 
00:25:04.390 [2024-07-12 17:14:03.943083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.390 [2024-07-12 17:14:03.943113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.390 qpair failed and we were unable to recover it. 00:25:04.390 [2024-07-12 17:14:03.943283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.390 [2024-07-12 17:14:03.943313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.390 qpair failed and we were unable to recover it. 00:25:04.390 [2024-07-12 17:14:03.943477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.390 [2024-07-12 17:14:03.943508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.390 qpair failed and we were unable to recover it. 00:25:04.390 [2024-07-12 17:14:03.943665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.390 [2024-07-12 17:14:03.943695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.390 qpair failed and we were unable to recover it. 00:25:04.390 [2024-07-12 17:14:03.943870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.390 [2024-07-12 17:14:03.943900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.390 qpair failed and we were unable to recover it. 00:25:04.390 [2024-07-12 17:14:03.944074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.390 [2024-07-12 17:14:03.944104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.390 qpair failed and we were unable to recover it. 00:25:04.390 [2024-07-12 17:14:03.944305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.390 [2024-07-12 17:14:03.944335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.390 qpair failed and we were unable to recover it. 00:25:04.390 [2024-07-12 17:14:03.944531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.390 [2024-07-12 17:14:03.944561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.390 qpair failed and we were unable to recover it. 00:25:04.390 [2024-07-12 17:14:03.944728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.390 [2024-07-12 17:14:03.944768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.390 qpair failed and we were unable to recover it. 00:25:04.390 [2024-07-12 17:14:03.944976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.390 [2024-07-12 17:14:03.945007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.390 qpair failed and we were unable to recover it. 
00:25:04.390 [2024-07-12 17:14:03.945202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.390 [2024-07-12 17:14:03.945233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.390 qpair failed and we were unable to recover it. 00:25:04.390 [2024-07-12 17:14:03.945393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.390 [2024-07-12 17:14:03.945423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.390 qpair failed and we were unable to recover it. 00:25:04.390 [2024-07-12 17:14:03.945573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.390 [2024-07-12 17:14:03.945603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.390 qpair failed and we were unable to recover it. 00:25:04.390 [2024-07-12 17:14:03.945732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.390 [2024-07-12 17:14:03.945776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.390 qpair failed and we were unable to recover it. 00:25:04.390 [2024-07-12 17:14:03.945970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.390 [2024-07-12 17:14:03.946001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.390 qpair failed and we were unable to recover it. 00:25:04.390 [2024-07-12 17:14:03.946191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.390 [2024-07-12 17:14:03.946222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.390 qpair failed and we were unable to recover it. 00:25:04.390 [2024-07-12 17:14:03.946418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.390 [2024-07-12 17:14:03.946448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.390 qpair failed and we were unable to recover it. 00:25:04.390 [2024-07-12 17:14:03.946646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.390 [2024-07-12 17:14:03.946676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.390 qpair failed and we were unable to recover it. 00:25:04.390 [2024-07-12 17:14:03.946839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.390 [2024-07-12 17:14:03.946870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.390 qpair failed and we were unable to recover it. 00:25:04.390 [2024-07-12 17:14:03.947040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.390 [2024-07-12 17:14:03.947075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.390 qpair failed and we were unable to recover it. 
00:25:04.390 [2024-07-12 17:14:03.947183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.390 [2024-07-12 17:14:03.947212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.390 qpair failed and we were unable to recover it. 00:25:04.390 [2024-07-12 17:14:03.947381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.390 [2024-07-12 17:14:03.947411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.390 qpair failed and we were unable to recover it. 00:25:04.390 [2024-07-12 17:14:03.947569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.390 [2024-07-12 17:14:03.947600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.390 qpair failed and we were unable to recover it. 00:25:04.390 [2024-07-12 17:14:03.947728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.390 [2024-07-12 17:14:03.947765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.390 qpair failed and we were unable to recover it. 00:25:04.390 [2024-07-12 17:14:03.947920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.390 [2024-07-12 17:14:03.947950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.390 qpair failed and we were unable to recover it. 00:25:04.390 [2024-07-12 17:14:03.948098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.390 [2024-07-12 17:14:03.948129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.390 qpair failed and we were unable to recover it. 00:25:04.390 [2024-07-12 17:14:03.948285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.390 [2024-07-12 17:14:03.948316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.390 qpair failed and we were unable to recover it. 00:25:04.390 [2024-07-12 17:14:03.948479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.390 [2024-07-12 17:14:03.948509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.390 qpair failed and we were unable to recover it. 00:25:04.390 [2024-07-12 17:14:03.948703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.390 [2024-07-12 17:14:03.948733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.390 qpair failed and we were unable to recover it. 00:25:04.390 [2024-07-12 17:14:03.948904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.390 [2024-07-12 17:14:03.948935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.391 qpair failed and we were unable to recover it. 
00:25:04.391 [2024-07-12 17:14:03.949133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.391 [2024-07-12 17:14:03.949163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.391 qpair failed and we were unable to recover it. 00:25:04.391 [2024-07-12 17:14:03.949332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.391 [2024-07-12 17:14:03.949361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.391 qpair failed and we were unable to recover it. 00:25:04.391 [2024-07-12 17:14:03.949527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.391 [2024-07-12 17:14:03.949557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.391 qpair failed and we were unable to recover it. 00:25:04.391 [2024-07-12 17:14:03.949730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.391 [2024-07-12 17:14:03.949776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.391 qpair failed and we were unable to recover it. 00:25:04.391 [2024-07-12 17:14:03.949978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.391 [2024-07-12 17:14:03.950009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.391 qpair failed and we were unable to recover it. 00:25:04.391 [2024-07-12 17:14:03.950173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.391 [2024-07-12 17:14:03.950204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.391 qpair failed and we were unable to recover it. 00:25:04.391 [2024-07-12 17:14:03.950339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.391 [2024-07-12 17:14:03.950370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.391 qpair failed and we were unable to recover it. 00:25:04.391 [2024-07-12 17:14:03.950571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.391 [2024-07-12 17:14:03.950602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.391 qpair failed and we were unable to recover it. 00:25:04.391 [2024-07-12 17:14:03.950802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.391 [2024-07-12 17:14:03.950834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.391 qpair failed and we were unable to recover it. 00:25:04.391 [2024-07-12 17:14:03.950988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.391 [2024-07-12 17:14:03.951019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.391 qpair failed and we were unable to recover it. 
00:25:04.391 [2024-07-12 17:14:03.951218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.391 [2024-07-12 17:14:03.951249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.391 qpair failed and we were unable to recover it. 00:25:04.391 [2024-07-12 17:14:03.951373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.391 [2024-07-12 17:14:03.951402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.391 qpair failed and we were unable to recover it. 00:25:04.391 [2024-07-12 17:14:03.951601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.391 [2024-07-12 17:14:03.951632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.391 qpair failed and we were unable to recover it. 00:25:04.391 [2024-07-12 17:14:03.951788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.391 [2024-07-12 17:14:03.951820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.391 qpair failed and we were unable to recover it. 00:25:04.391 [2024-07-12 17:14:03.951991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.391 [2024-07-12 17:14:03.952022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.391 qpair failed and we were unable to recover it. 00:25:04.391 [2024-07-12 17:14:03.952217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.391 [2024-07-12 17:14:03.952248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.391 qpair failed and we were unable to recover it. 00:25:04.391 [2024-07-12 17:14:03.952429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.391 [2024-07-12 17:14:03.952460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.391 qpair failed and we were unable to recover it. 00:25:04.391 [2024-07-12 17:14:03.952659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.391 [2024-07-12 17:14:03.952690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.391 qpair failed and we were unable to recover it. 00:25:04.391 [2024-07-12 17:14:03.952897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.391 [2024-07-12 17:14:03.952929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.391 qpair failed and we were unable to recover it. 00:25:04.391 [2024-07-12 17:14:03.953125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.391 [2024-07-12 17:14:03.953156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.391 qpair failed and we were unable to recover it. 
00:25:04.391 [2024-07-12 17:14:03.953351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.391 [2024-07-12 17:14:03.953381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.391 qpair failed and we were unable to recover it. 00:25:04.391 [2024-07-12 17:14:03.953552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.391 [2024-07-12 17:14:03.953583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.391 qpair failed and we were unable to recover it. 00:25:04.391 [2024-07-12 17:14:03.953792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.391 [2024-07-12 17:14:03.953823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.391 qpair failed and we were unable to recover it. 00:25:04.391 [2024-07-12 17:14:03.953961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.391 [2024-07-12 17:14:03.953992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.391 qpair failed and we were unable to recover it. 00:25:04.391 [2024-07-12 17:14:03.954119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.391 [2024-07-12 17:14:03.954150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.391 qpair failed and we were unable to recover it. 00:25:04.391 [2024-07-12 17:14:03.954318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.391 [2024-07-12 17:14:03.954349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.391 qpair failed and we were unable to recover it. 00:25:04.391 [2024-07-12 17:14:03.954495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.391 [2024-07-12 17:14:03.954525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.391 qpair failed and we were unable to recover it. 00:25:04.391 [2024-07-12 17:14:03.954719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.391 [2024-07-12 17:14:03.954760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.391 qpair failed and we were unable to recover it. 00:25:04.391 [2024-07-12 17:14:03.954943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.391 [2024-07-12 17:14:03.954974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.391 qpair failed and we were unable to recover it. 00:25:04.391 [2024-07-12 17:14:03.955174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.391 [2024-07-12 17:14:03.955209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.391 qpair failed and we were unable to recover it. 
00:25:04.391 [2024-07-12 17:14:03.955393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.391 [2024-07-12 17:14:03.955425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.391 qpair failed and we were unable to recover it. 00:25:04.391 [2024-07-12 17:14:03.955627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.391 [2024-07-12 17:14:03.955658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.391 qpair failed and we were unable to recover it. 00:25:04.391 [2024-07-12 17:14:03.955875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.391 [2024-07-12 17:14:03.955906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.391 qpair failed and we were unable to recover it. 00:25:04.391 [2024-07-12 17:14:03.956104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.391 [2024-07-12 17:14:03.956135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.391 qpair failed and we were unable to recover it. 00:25:04.391 [2024-07-12 17:14:03.956309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.391 [2024-07-12 17:14:03.956340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.391 qpair failed and we were unable to recover it. 00:25:04.391 [2024-07-12 17:14:03.956542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.391 [2024-07-12 17:14:03.956572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.391 qpair failed and we were unable to recover it. 00:25:04.391 [2024-07-12 17:14:03.956743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.391 [2024-07-12 17:14:03.956775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.391 qpair failed and we were unable to recover it. 00:25:04.391 [2024-07-12 17:14:03.956940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.391 [2024-07-12 17:14:03.956971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.391 qpair failed and we were unable to recover it. 00:25:04.391 [2024-07-12 17:14:03.957092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.391 [2024-07-12 17:14:03.957121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.391 qpair failed and we were unable to recover it. 00:25:04.391 [2024-07-12 17:14:03.957323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.391 [2024-07-12 17:14:03.957354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.391 qpair failed and we were unable to recover it. 
00:25:04.391 [2024-07-12 17:14:03.957555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.392 [2024-07-12 17:14:03.957586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.392 qpair failed and we were unable to recover it. 00:25:04.392 [2024-07-12 17:14:03.957718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.392 [2024-07-12 17:14:03.957763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.392 qpair failed and we were unable to recover it. 00:25:04.392 [2024-07-12 17:14:03.957918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.392 [2024-07-12 17:14:03.957949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.392 qpair failed and we were unable to recover it. 00:25:04.392 [2024-07-12 17:14:03.958150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.392 [2024-07-12 17:14:03.958180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.392 qpair failed and we were unable to recover it. 00:25:04.392 [2024-07-12 17:14:03.958340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.392 [2024-07-12 17:14:03.958371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.392 qpair failed and we were unable to recover it. 00:25:04.392 [2024-07-12 17:14:03.958501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.392 [2024-07-12 17:14:03.958531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.392 qpair failed and we were unable to recover it. 00:25:04.392 [2024-07-12 17:14:03.958690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.392 [2024-07-12 17:14:03.958720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.392 qpair failed and we were unable to recover it. 00:25:04.392 [2024-07-12 17:14:03.958837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.392 [2024-07-12 17:14:03.958868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.392 qpair failed and we were unable to recover it. 00:25:04.392 [2024-07-12 17:14:03.959032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.392 [2024-07-12 17:14:03.959062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.392 qpair failed and we were unable to recover it. 00:25:04.392 [2024-07-12 17:14:03.959272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.392 [2024-07-12 17:14:03.959303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.392 qpair failed and we were unable to recover it. 
00:25:04.392 [2024-07-12 17:14:03.959467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.392 [2024-07-12 17:14:03.959498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.392 qpair failed and we were unable to recover it. 00:25:04.392 [2024-07-12 17:14:03.959661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.392 [2024-07-12 17:14:03.959692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.392 qpair failed and we were unable to recover it. 00:25:04.392 [2024-07-12 17:14:03.959878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.392 [2024-07-12 17:14:03.959909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.392 qpair failed and we were unable to recover it. 00:25:04.392 [2024-07-12 17:14:03.960106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.392 [2024-07-12 17:14:03.960137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.392 qpair failed and we were unable to recover it. 00:25:04.392 [2024-07-12 17:14:03.960300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.392 [2024-07-12 17:14:03.960330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.392 qpair failed and we were unable to recover it. 00:25:04.392 [2024-07-12 17:14:03.960544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.392 [2024-07-12 17:14:03.960575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.392 qpair failed and we were unable to recover it. 00:25:04.392 [2024-07-12 17:14:03.960780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.392 [2024-07-12 17:14:03.960811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.392 qpair failed and we were unable to recover it. 00:25:04.392 [2024-07-12 17:14:03.960970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.392 [2024-07-12 17:14:03.961001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.392 qpair failed and we were unable to recover it. 00:25:04.392 [2024-07-12 17:14:03.961174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.392 [2024-07-12 17:14:03.961205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.392 qpair failed and we were unable to recover it. 00:25:04.392 [2024-07-12 17:14:03.961348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.392 [2024-07-12 17:14:03.961379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.392 qpair failed and we were unable to recover it. 
00:25:04.392 [2024-07-12 17:14:03.961588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.392 [2024-07-12 17:14:03.961619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.392 qpair failed and we were unable to recover it. 00:25:04.392 [2024-07-12 17:14:03.961751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.392 [2024-07-12 17:14:03.961781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.392 qpair failed and we were unable to recover it. 00:25:04.392 [2024-07-12 17:14:03.961977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.392 [2024-07-12 17:14:03.962008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.392 qpair failed and we were unable to recover it. 00:25:04.392 [2024-07-12 17:14:03.962161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.392 [2024-07-12 17:14:03.962192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.392 qpair failed and we were unable to recover it. 00:25:04.392 [2024-07-12 17:14:03.962317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.392 [2024-07-12 17:14:03.962346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.392 qpair failed and we were unable to recover it. 00:25:04.392 [2024-07-12 17:14:03.962504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.392 [2024-07-12 17:14:03.962534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.392 qpair failed and we were unable to recover it. 00:25:04.392 [2024-07-12 17:14:03.962744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.392 [2024-07-12 17:14:03.962775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.392 qpair failed and we were unable to recover it. 00:25:04.392 [2024-07-12 17:14:03.962901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.392 [2024-07-12 17:14:03.962930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.392 qpair failed and we were unable to recover it. 00:25:04.392 [2024-07-12 17:14:03.963101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.392 [2024-07-12 17:14:03.963130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.392 qpair failed and we were unable to recover it. 00:25:04.392 [2024-07-12 17:14:03.963325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.392 [2024-07-12 17:14:03.963359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.392 qpair failed and we were unable to recover it. 
00:25:04.392 [2024-07-12 17:14:03.963528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.392 [2024-07-12 17:14:03.963558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.392 qpair failed and we were unable to recover it. 00:25:04.392 [2024-07-12 17:14:03.963756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.392 [2024-07-12 17:14:03.963787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.392 qpair failed and we were unable to recover it. 00:25:04.392 [2024-07-12 17:14:03.963948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.392 [2024-07-12 17:14:03.963978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.392 qpair failed and we were unable to recover it. 00:25:04.392 [2024-07-12 17:14:03.964096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.392 [2024-07-12 17:14:03.964127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.392 qpair failed and we were unable to recover it. 00:25:04.392 [2024-07-12 17:14:03.964246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.392 [2024-07-12 17:14:03.964276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.392 qpair failed and we were unable to recover it. 00:25:04.392 [2024-07-12 17:14:03.964407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.392 [2024-07-12 17:14:03.964438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.392 qpair failed and we were unable to recover it. 00:25:04.392 [2024-07-12 17:14:03.964589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.392 [2024-07-12 17:14:03.964620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.392 qpair failed and we were unable to recover it. 00:25:04.392 [2024-07-12 17:14:03.964786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.392 [2024-07-12 17:14:03.964817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.392 qpair failed and we were unable to recover it. 00:25:04.392 [2024-07-12 17:14:03.964950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.392 [2024-07-12 17:14:03.964979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.392 qpair failed and we were unable to recover it. 00:25:04.392 [2024-07-12 17:14:03.965145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.392 [2024-07-12 17:14:03.965176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.392 qpair failed and we were unable to recover it. 
00:25:04.392 [2024-07-12 17:14:03.965369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.392 [2024-07-12 17:14:03.965400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.392 qpair failed and we were unable to recover it. 00:25:04.393 [2024-07-12 17:14:03.965574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.393 [2024-07-12 17:14:03.965604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.393 qpair failed and we were unable to recover it. 00:25:04.393 [2024-07-12 17:14:03.965769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.393 [2024-07-12 17:14:03.965800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.393 qpair failed and we were unable to recover it. 00:25:04.393 [2024-07-12 17:14:03.965937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.393 [2024-07-12 17:14:03.965966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.393 qpair failed and we were unable to recover it. 00:25:04.393 [2024-07-12 17:14:03.966129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.393 [2024-07-12 17:14:03.966160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.393 qpair failed and we were unable to recover it. 00:25:04.393 [2024-07-12 17:14:03.966310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.393 [2024-07-12 17:14:03.966340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.393 qpair failed and we were unable to recover it. 00:25:04.393 [2024-07-12 17:14:03.966540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.393 [2024-07-12 17:14:03.966571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.393 qpair failed and we were unable to recover it. 00:25:04.393 [2024-07-12 17:14:03.966730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.393 [2024-07-12 17:14:03.966769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.393 qpair failed and we were unable to recover it. 00:25:04.393 [2024-07-12 17:14:03.966974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.393 [2024-07-12 17:14:03.967005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.393 qpair failed and we were unable to recover it. 00:25:04.393 [2024-07-12 17:14:03.967167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.393 [2024-07-12 17:14:03.967198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.393 qpair failed and we were unable to recover it. 
00:25:04.393 [2024-07-12 17:14:03.967365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.393 [2024-07-12 17:14:03.967395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.393 qpair failed and we were unable to recover it. 00:25:04.393 [2024-07-12 17:14:03.967585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.393 [2024-07-12 17:14:03.967616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.393 qpair failed and we were unable to recover it. 00:25:04.393 [2024-07-12 17:14:03.967817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.393 [2024-07-12 17:14:03.967848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.393 qpair failed and we were unable to recover it. 00:25:04.393 [2024-07-12 17:14:03.968046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.393 [2024-07-12 17:14:03.968076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.393 qpair failed and we were unable to recover it. 00:25:04.393 [2024-07-12 17:14:03.968245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.393 [2024-07-12 17:14:03.968275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.393 qpair failed and we were unable to recover it. 00:25:04.393 [2024-07-12 17:14:03.968438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.393 [2024-07-12 17:14:03.968469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.393 qpair failed and we were unable to recover it. 00:25:04.393 [2024-07-12 17:14:03.968588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.393 [2024-07-12 17:14:03.968617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.393 qpair failed and we were unable to recover it. 00:25:04.393 [2024-07-12 17:14:03.968781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.393 [2024-07-12 17:14:03.968813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.393 qpair failed and we were unable to recover it. 00:25:04.393 [2024-07-12 17:14:03.968967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.393 [2024-07-12 17:14:03.968997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.393 qpair failed and we were unable to recover it. 00:25:04.393 [2024-07-12 17:14:03.969167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.393 [2024-07-12 17:14:03.969197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.393 qpair failed and we were unable to recover it. 
00:25:04.393 [2024-07-12 17:14:03.969398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.393 [2024-07-12 17:14:03.969429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.393 qpair failed and we were unable to recover it. 00:25:04.393 [2024-07-12 17:14:03.969590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.393 [2024-07-12 17:14:03.969620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.393 qpair failed and we were unable to recover it. 00:25:04.393 [2024-07-12 17:14:03.969789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.393 [2024-07-12 17:14:03.969821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.393 qpair failed and we were unable to recover it. 00:25:04.393 [2024-07-12 17:14:03.969985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.393 [2024-07-12 17:14:03.970015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.393 qpair failed and we were unable to recover it. 00:25:04.393 [2024-07-12 17:14:03.970162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.393 [2024-07-12 17:14:03.970193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.393 qpair failed and we were unable to recover it. 00:25:04.393 [2024-07-12 17:14:03.970403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.393 [2024-07-12 17:14:03.970434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.393 qpair failed and we were unable to recover it. 00:25:04.393 [2024-07-12 17:14:03.970630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.393 [2024-07-12 17:14:03.970660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.393 qpair failed and we were unable to recover it. 00:25:04.393 [2024-07-12 17:14:03.970800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.393 [2024-07-12 17:14:03.970830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.393 qpair failed and we were unable to recover it. 00:25:04.393 [2024-07-12 17:14:03.970982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.393 [2024-07-12 17:14:03.971012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.393 qpair failed and we were unable to recover it. 00:25:04.393 [2024-07-12 17:14:03.971172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.393 [2024-07-12 17:14:03.971206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.393 qpair failed and we were unable to recover it. 
00:25:04.393 [2024-07-12 17:14:03.971374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.393 [2024-07-12 17:14:03.971405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.393 qpair failed and we were unable to recover it. 00:25:04.393 [2024-07-12 17:14:03.971611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.393 [2024-07-12 17:14:03.971642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.393 qpair failed and we were unable to recover it. 00:25:04.393 [2024-07-12 17:14:03.971765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.393 [2024-07-12 17:14:03.971794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.393 qpair failed and we were unable to recover it. 00:25:04.393 [2024-07-12 17:14:03.972000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.393 [2024-07-12 17:14:03.972030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.393 qpair failed and we were unable to recover it. 00:25:04.393 [2024-07-12 17:14:03.972228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.393 [2024-07-12 17:14:03.972258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.393 qpair failed and we were unable to recover it. 00:25:04.393 [2024-07-12 17:14:03.972406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.393 [2024-07-12 17:14:03.972437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.393 qpair failed and we were unable to recover it. 00:25:04.393 [2024-07-12 17:14:03.972575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.393 [2024-07-12 17:14:03.972606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.393 qpair failed and we were unable to recover it. 00:25:04.393 [2024-07-12 17:14:03.972756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.393 [2024-07-12 17:14:03.972786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.393 qpair failed and we were unable to recover it. 00:25:04.393 [2024-07-12 17:14:03.972943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.393 [2024-07-12 17:14:03.972974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.393 qpair failed and we were unable to recover it. 00:25:04.393 [2024-07-12 17:14:03.973145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.393 [2024-07-12 17:14:03.973176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.393 qpair failed and we were unable to recover it. 
00:25:04.393 [2024-07-12 17:14:03.973340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.393 [2024-07-12 17:14:03.973371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.393 qpair failed and we were unable to recover it. 00:25:04.393 [2024-07-12 17:14:03.973544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.394 [2024-07-12 17:14:03.973573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.394 qpair failed and we were unable to recover it. 00:25:04.394 [2024-07-12 17:14:03.973769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.394 [2024-07-12 17:14:03.973799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.394 qpair failed and we were unable to recover it. 00:25:04.394 [2024-07-12 17:14:03.973956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.394 [2024-07-12 17:14:03.973986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.394 qpair failed and we were unable to recover it. 00:25:04.394 [2024-07-12 17:14:03.974188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.394 [2024-07-12 17:14:03.974218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.394 qpair failed and we were unable to recover it. 00:25:04.394 [2024-07-12 17:14:03.974380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.394 [2024-07-12 17:14:03.974411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.394 qpair failed and we were unable to recover it. 00:25:04.394 [2024-07-12 17:14:03.974621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.394 [2024-07-12 17:14:03.974651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.394 qpair failed and we were unable to recover it. 00:25:04.394 [2024-07-12 17:14:03.974853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.394 [2024-07-12 17:14:03.974884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.394 qpair failed and we were unable to recover it. 00:25:04.394 [2024-07-12 17:14:03.974999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.394 [2024-07-12 17:14:03.975029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.394 qpair failed and we were unable to recover it. 00:25:04.394 [2024-07-12 17:14:03.975230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.394 [2024-07-12 17:14:03.975260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.394 qpair failed and we were unable to recover it. 
00:25:04.394 [2024-07-12 17:14:03.975461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.394 [2024-07-12 17:14:03.975492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.394 qpair failed and we were unable to recover it. 00:25:04.394 [2024-07-12 17:14:03.975624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.394 [2024-07-12 17:14:03.975652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.394 qpair failed and we were unable to recover it. 00:25:04.394 [2024-07-12 17:14:03.975772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.394 [2024-07-12 17:14:03.975802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.394 qpair failed and we were unable to recover it. 00:25:04.394 [2024-07-12 17:14:03.975969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.394 [2024-07-12 17:14:03.976000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.394 qpair failed and we were unable to recover it. 00:25:04.394 [2024-07-12 17:14:03.976129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.394 [2024-07-12 17:14:03.976156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.394 qpair failed and we were unable to recover it. 00:25:04.394 [2024-07-12 17:14:03.976369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.394 [2024-07-12 17:14:03.976399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.394 qpair failed and we were unable to recover it. 00:25:04.394 [2024-07-12 17:14:03.976561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.394 [2024-07-12 17:14:03.976591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.394 qpair failed and we were unable to recover it. 00:25:04.394 [2024-07-12 17:14:03.976754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.394 [2024-07-12 17:14:03.976786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.394 qpair failed and we were unable to recover it. 00:25:04.394 [2024-07-12 17:14:03.976957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.394 [2024-07-12 17:14:03.976988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.394 qpair failed and we were unable to recover it. 00:25:04.394 [2024-07-12 17:14:03.977186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.394 [2024-07-12 17:14:03.977216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.394 qpair failed and we were unable to recover it. 
00:25:04.394 [2024-07-12 17:14:03.977342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.394 [2024-07-12 17:14:03.977370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.394 qpair failed and we were unable to recover it. 00:25:04.394 [2024-07-12 17:14:03.977572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.394 [2024-07-12 17:14:03.977602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.394 qpair failed and we were unable to recover it. 00:25:04.394 [2024-07-12 17:14:03.977811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.394 [2024-07-12 17:14:03.977842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.394 qpair failed and we were unable to recover it. 00:25:04.394 [2024-07-12 17:14:03.977994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.394 [2024-07-12 17:14:03.978025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.394 qpair failed and we were unable to recover it. 00:25:04.394 [2024-07-12 17:14:03.978193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.394 [2024-07-12 17:14:03.978223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.394 qpair failed and we were unable to recover it. 00:25:04.394 [2024-07-12 17:14:03.978385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.394 [2024-07-12 17:14:03.978415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.394 qpair failed and we were unable to recover it. 00:25:04.394 [2024-07-12 17:14:03.978620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.394 [2024-07-12 17:14:03.978651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.394 qpair failed and we were unable to recover it. 00:25:04.394 [2024-07-12 17:14:03.978812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.394 [2024-07-12 17:14:03.978844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.394 qpair failed and we were unable to recover it. 00:25:04.394 [2024-07-12 17:14:03.979039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.394 [2024-07-12 17:14:03.979069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.394 qpair failed and we were unable to recover it. 00:25:04.394 [2024-07-12 17:14:03.979268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.394 [2024-07-12 17:14:03.979302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.394 qpair failed and we were unable to recover it. 
00:25:04.394 [2024-07-12 17:14:03.979500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.394 [2024-07-12 17:14:03.979529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.394 qpair failed and we were unable to recover it. 00:25:04.394 [2024-07-12 17:14:03.979722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.394 [2024-07-12 17:14:03.979765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.394 qpair failed and we were unable to recover it. 00:25:04.394 [2024-07-12 17:14:03.979929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.394 [2024-07-12 17:14:03.979958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.394 qpair failed and we were unable to recover it. 00:25:04.394 [2024-07-12 17:14:03.980121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.394 [2024-07-12 17:14:03.980151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.394 qpair failed and we were unable to recover it. 00:25:04.394 [2024-07-12 17:14:03.980362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.394 [2024-07-12 17:14:03.980393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.394 qpair failed and we were unable to recover it. 00:25:04.394 [2024-07-12 17:14:03.980593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.394 [2024-07-12 17:14:03.980624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.394 qpair failed and we were unable to recover it. 00:25:04.395 [2024-07-12 17:14:03.980771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.395 [2024-07-12 17:14:03.980800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.395 qpair failed and we were unable to recover it. 00:25:04.395 [2024-07-12 17:14:03.980930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.395 [2024-07-12 17:14:03.980959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.395 qpair failed and we were unable to recover it. 00:25:04.395 [2024-07-12 17:14:03.981163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.395 [2024-07-12 17:14:03.981194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.395 qpair failed and we were unable to recover it. 00:25:04.395 [2024-07-12 17:14:03.981394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.395 [2024-07-12 17:14:03.981424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.395 qpair failed and we were unable to recover it. 
00:25:04.395 [2024-07-12 17:14:03.981593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.395 [2024-07-12 17:14:03.981623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.395 qpair failed and we were unable to recover it. 00:25:04.395 [2024-07-12 17:14:03.981789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.395 [2024-07-12 17:14:03.981820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.395 qpair failed and we were unable to recover it. 00:25:04.395 [2024-07-12 17:14:03.981973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.395 [2024-07-12 17:14:03.982004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.395 qpair failed and we were unable to recover it. 00:25:04.395 [2024-07-12 17:14:03.982211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.395 [2024-07-12 17:14:03.982242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.395 qpair failed and we were unable to recover it. 00:25:04.395 [2024-07-12 17:14:03.982405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.395 [2024-07-12 17:14:03.982435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.395 qpair failed and we were unable to recover it. 00:25:04.395 [2024-07-12 17:14:03.982601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.395 [2024-07-12 17:14:03.982632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.395 qpair failed and we were unable to recover it. 00:25:04.395 [2024-07-12 17:14:03.982794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.395 [2024-07-12 17:14:03.982825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.395 qpair failed and we were unable to recover it. 00:25:04.395 [2024-07-12 17:14:03.985964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.395 [2024-07-12 17:14:03.986034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.395 qpair failed and we were unable to recover it. 00:25:04.395 [2024-07-12 17:14:03.986300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.395 [2024-07-12 17:14:03.986366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.395 qpair failed and we were unable to recover it. 00:25:04.395 [2024-07-12 17:14:03.986579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.395 [2024-07-12 17:14:03.986641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.395 qpair failed and we were unable to recover it. 
00:25:04.395 [2024-07-12 17:14:03.986834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.395 [2024-07-12 17:14:03.986880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.395 qpair failed and we were unable to recover it. 00:25:04.395 [2024-07-12 17:14:03.987076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.395 [2024-07-12 17:14:03.987138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.395 qpair failed and we were unable to recover it. 00:25:04.395 [2024-07-12 17:14:03.987341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.395 [2024-07-12 17:14:03.987404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.395 qpair failed and we were unable to recover it. 00:25:04.395 [2024-07-12 17:14:03.987598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.395 [2024-07-12 17:14:03.987642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.395 qpair failed and we were unable to recover it. 00:25:04.395 [2024-07-12 17:14:03.987814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.395 [2024-07-12 17:14:03.987882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.395 qpair failed and we were unable to recover it. 00:25:04.395 [2024-07-12 17:14:03.988111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.395 [2024-07-12 17:14:03.988175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.395 qpair failed and we were unable to recover it. 00:25:04.395 [2024-07-12 17:14:03.988429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.395 [2024-07-12 17:14:03.988493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.395 qpair failed and we were unable to recover it. 00:25:04.395 [2024-07-12 17:14:03.988716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.395 [2024-07-12 17:14:03.988775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.395 qpair failed and we were unable to recover it. 00:25:04.395 [2024-07-12 17:14:03.988980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.395 [2024-07-12 17:14:03.989043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.395 qpair failed and we were unable to recover it. 00:25:04.395 [2024-07-12 17:14:03.989300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.395 [2024-07-12 17:14:03.989361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.395 qpair failed and we were unable to recover it. 
00:25:04.395 [2024-07-12 17:14:03.989610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.395 [2024-07-12 17:14:03.989674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.395 qpair failed and we were unable to recover it. 00:25:04.395 [2024-07-12 17:14:03.989900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.395 [2024-07-12 17:14:03.989965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.395 qpair failed and we were unable to recover it. 00:25:04.395 [2024-07-12 17:14:03.990207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.395 [2024-07-12 17:14:03.990271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.395 qpair failed and we were unable to recover it. 00:25:04.395 [2024-07-12 17:14:03.990526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.395 [2024-07-12 17:14:03.990588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.395 qpair failed and we were unable to recover it. 00:25:04.395 [2024-07-12 17:14:03.990840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.395 [2024-07-12 17:14:03.990906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.395 qpair failed and we were unable to recover it. 00:25:04.395 [2024-07-12 17:14:03.991145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.395 [2024-07-12 17:14:03.991207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.395 qpair failed and we were unable to recover it. 00:25:04.395 [2024-07-12 17:14:03.991413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.395 [2024-07-12 17:14:03.991477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.395 qpair failed and we were unable to recover it. 00:25:04.395 [2024-07-12 17:14:03.991634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.395 [2024-07-12 17:14:03.991677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.395 qpair failed and we were unable to recover it. 00:25:04.395 [2024-07-12 17:14:03.991887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.395 [2024-07-12 17:14:03.991951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.395 qpair failed and we were unable to recover it. 00:25:04.395 [2024-07-12 17:14:03.992170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.395 [2024-07-12 17:14:03.992240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.395 qpair failed and we were unable to recover it. 
00:25:04.395 [2024-07-12 17:14:03.992478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.395 [2024-07-12 17:14:03.992521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.395 qpair failed and we were unable to recover it. 00:25:04.395 [2024-07-12 17:14:03.992682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.395 [2024-07-12 17:14:03.992726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.395 qpair failed and we were unable to recover it. 00:25:04.395 [2024-07-12 17:14:03.992973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.395 [2024-07-12 17:14:03.993017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.395 qpair failed and we were unable to recover it. 00:25:04.395 [2024-07-12 17:14:03.993223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.395 [2024-07-12 17:14:03.993286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.395 qpair failed and we were unable to recover it. 00:25:04.395 [2024-07-12 17:14:03.993516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.395 [2024-07-12 17:14:03.993578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.395 qpair failed and we were unable to recover it. 00:25:04.395 [2024-07-12 17:14:03.993820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.395 [2024-07-12 17:14:03.993888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.395 qpair failed and we were unable to recover it. 00:25:04.395 [2024-07-12 17:14:03.994164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.396 [2024-07-12 17:14:03.994226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.396 qpair failed and we were unable to recover it. 00:25:04.396 [2024-07-12 17:14:03.994503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.396 [2024-07-12 17:14:03.994564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.396 qpair failed and we were unable to recover it. 00:25:04.396 [2024-07-12 17:14:03.994825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.396 [2024-07-12 17:14:03.994871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.396 qpair failed and we were unable to recover it. 00:25:04.396 [2024-07-12 17:14:03.995084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.396 [2024-07-12 17:14:03.995143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.396 qpair failed and we were unable to recover it. 
00:25:04.396 [2024-07-12 17:14:03.995431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.396 [2024-07-12 17:14:03.995494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.396 qpair failed and we were unable to recover it. 00:25:04.396 [2024-07-12 17:14:03.995723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.396 [2024-07-12 17:14:03.995778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.396 qpair failed and we were unable to recover it. 00:25:04.396 [2024-07-12 17:14:03.996033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.396 [2024-07-12 17:14:03.996096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.396 qpair failed and we were unable to recover it. 00:25:04.396 [2024-07-12 17:14:03.996344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.396 [2024-07-12 17:14:03.996406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.396 qpair failed and we were unable to recover it. 00:25:04.396 [2024-07-12 17:14:03.996638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.396 [2024-07-12 17:14:03.996682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.396 qpair failed and we were unable to recover it. 00:25:04.396 [2024-07-12 17:14:03.996901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.396 [2024-07-12 17:14:03.996965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.396 qpair failed and we were unable to recover it. 00:25:04.396 [2024-07-12 17:14:03.997221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.396 [2024-07-12 17:14:03.997284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.396 qpair failed and we were unable to recover it. 00:25:04.396 [2024-07-12 17:14:03.997498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.396 [2024-07-12 17:14:03.997560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.396 qpair failed and we were unable to recover it. 00:25:04.396 [2024-07-12 17:14:03.997783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.396 [2024-07-12 17:14:03.997828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.396 qpair failed and we were unable to recover it. 00:25:04.396 [2024-07-12 17:14:03.998072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.396 [2024-07-12 17:14:03.998135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.396 qpair failed and we were unable to recover it. 
00:25:04.396 [2024-07-12 17:14:03.998386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.396 [2024-07-12 17:14:03.998449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.396 qpair failed and we were unable to recover it. 00:25:04.396 [2024-07-12 17:14:03.998682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.396 [2024-07-12 17:14:03.998726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.396 qpair failed and we were unable to recover it. 00:25:04.396 [2024-07-12 17:14:03.998997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.396 [2024-07-12 17:14:03.999066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.396 qpair failed and we were unable to recover it. 00:25:04.396 [2024-07-12 17:14:03.999269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.396 [2024-07-12 17:14:03.999331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.396 qpair failed and we were unable to recover it. 00:25:04.396 [2024-07-12 17:14:03.999557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.396 [2024-07-12 17:14:03.999619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.396 qpair failed and we were unable to recover it. 00:25:04.396 [2024-07-12 17:14:03.999850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.396 [2024-07-12 17:14:03.999913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.396 qpair failed and we were unable to recover it. 00:25:04.396 [2024-07-12 17:14:04.000164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.396 [2024-07-12 17:14:04.000226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.396 qpair failed and we were unable to recover it. 00:25:04.396 [2024-07-12 17:14:04.000462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.396 [2024-07-12 17:14:04.000525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.396 qpair failed and we were unable to recover it. 00:25:04.396 [2024-07-12 17:14:04.000768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.396 [2024-07-12 17:14:04.000813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.396 qpair failed and we were unable to recover it. 00:25:04.396 [2024-07-12 17:14:04.001069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.396 [2024-07-12 17:14:04.001133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.396 qpair failed and we were unable to recover it. 
00:25:04.396 [2024-07-12 17:14:04.001344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.396 [2024-07-12 17:14:04.001406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.396 qpair failed and we were unable to recover it. 00:25:04.396 [2024-07-12 17:14:04.001636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.396 [2024-07-12 17:14:04.001680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.396 qpair failed and we were unable to recover it. 00:25:04.396 [2024-07-12 17:14:04.001900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.396 [2024-07-12 17:14:04.001963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.396 qpair failed and we were unable to recover it. 00:25:04.396 [2024-07-12 17:14:04.002186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.396 [2024-07-12 17:14:04.002249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.396 qpair failed and we were unable to recover it. 00:25:04.396 [2024-07-12 17:14:04.002468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.396 [2024-07-12 17:14:04.002530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.396 qpair failed and we were unable to recover it. 00:25:04.396 [2024-07-12 17:14:04.002732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.396 [2024-07-12 17:14:04.002788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.396 qpair failed and we were unable to recover it. 00:25:04.396 [2024-07-12 17:14:04.003038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.396 [2024-07-12 17:14:04.003101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.396 qpair failed and we were unable to recover it. 00:25:04.396 [2024-07-12 17:14:04.003359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.396 [2024-07-12 17:14:04.003422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.396 qpair failed and we were unable to recover it. 00:25:04.396 [2024-07-12 17:14:04.003671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.396 [2024-07-12 17:14:04.003715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.396 qpair failed and we were unable to recover it. 00:25:04.396 [2024-07-12 17:14:04.003981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.396 [2024-07-12 17:14:04.004051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.396 qpair failed and we were unable to recover it. 
00:25:04.396 [2024-07-12 17:14:04.004275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.396 [2024-07-12 17:14:04.004336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.396 qpair failed and we were unable to recover it. 00:25:04.396 [2024-07-12 17:14:04.004554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.396 [2024-07-12 17:14:04.004617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.396 qpair failed and we were unable to recover it. 00:25:04.396 [2024-07-12 17:14:04.004797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.396 [2024-07-12 17:14:04.004843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.396 qpair failed and we were unable to recover it. 00:25:04.396 [2024-07-12 17:14:04.005110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.396 [2024-07-12 17:14:04.005172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.396 qpair failed and we were unable to recover it. 00:25:04.396 [2024-07-12 17:14:04.005388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.396 [2024-07-12 17:14:04.005451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.396 qpair failed and we were unable to recover it. 00:25:04.396 [2024-07-12 17:14:04.005649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.396 [2024-07-12 17:14:04.005693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.396 qpair failed and we were unable to recover it. 00:25:04.396 [2024-07-12 17:14:04.005957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.396 [2024-07-12 17:14:04.006021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.396 qpair failed and we were unable to recover it. 00:25:04.397 [2024-07-12 17:14:04.006283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.397 [2024-07-12 17:14:04.006344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.397 qpair failed and we were unable to recover it. 00:25:04.397 [2024-07-12 17:14:04.006597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.397 [2024-07-12 17:14:04.006658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.397 qpair failed and we were unable to recover it. 00:25:04.397 [2024-07-12 17:14:04.006918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.397 [2024-07-12 17:14:04.006980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.397 qpair failed and we were unable to recover it. 
00:25:04.397 [2024-07-12 17:14:04.007196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.397 [2024-07-12 17:14:04.007261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.397 qpair failed and we were unable to recover it. 00:25:04.397 [2024-07-12 17:14:04.007513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.397 [2024-07-12 17:14:04.007577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.397 qpair failed and we were unable to recover it. 00:25:04.397 [2024-07-12 17:14:04.007797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.397 [2024-07-12 17:14:04.007868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.397 qpair failed and we were unable to recover it. 00:25:04.397 [2024-07-12 17:14:04.008142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.397 [2024-07-12 17:14:04.008205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.397 qpair failed and we were unable to recover it. 00:25:04.397 [2024-07-12 17:14:04.008467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.397 [2024-07-12 17:14:04.008530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.397 qpair failed and we were unable to recover it. 00:25:04.397 [2024-07-12 17:14:04.008733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.397 [2024-07-12 17:14:04.008790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.397 qpair failed and we were unable to recover it. 00:25:04.397 [2024-07-12 17:14:04.009037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.397 [2024-07-12 17:14:04.009104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.397 qpair failed and we were unable to recover it. 00:25:04.397 [2024-07-12 17:14:04.009363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.397 [2024-07-12 17:14:04.009424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.397 qpair failed and we were unable to recover it. 00:25:04.397 [2024-07-12 17:14:04.009626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.397 [2024-07-12 17:14:04.009670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.397 qpair failed and we were unable to recover it. 00:25:04.397 [2024-07-12 17:14:04.009899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.397 [2024-07-12 17:14:04.009944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.397 qpair failed and we were unable to recover it. 
00:25:04.397 [2024-07-12 17:14:04.010206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.397 [2024-07-12 17:14:04.010270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.397 qpair failed and we were unable to recover it. 00:25:04.397 [2024-07-12 17:14:04.010541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.397 [2024-07-12 17:14:04.010602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.397 qpair failed and we were unable to recover it. 00:25:04.397 [2024-07-12 17:14:04.010850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.397 [2024-07-12 17:14:04.010913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.397 qpair failed and we were unable to recover it. 00:25:04.397 [2024-07-12 17:14:04.011172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.397 [2024-07-12 17:14:04.011235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.397 qpair failed and we were unable to recover it. 00:25:04.397 [2024-07-12 17:14:04.011492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.397 [2024-07-12 17:14:04.011555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.397 qpair failed and we were unable to recover it. 00:25:04.397 [2024-07-12 17:14:04.011754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.397 [2024-07-12 17:14:04.011817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.397 qpair failed and we were unable to recover it. 00:25:04.397 [2024-07-12 17:14:04.012041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.397 [2024-07-12 17:14:04.012107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.397 qpair failed and we were unable to recover it. 00:25:04.397 [2024-07-12 17:14:04.012322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.397 [2024-07-12 17:14:04.012385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.397 qpair failed and we were unable to recover it. 00:25:04.397 [2024-07-12 17:14:04.012634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.397 [2024-07-12 17:14:04.012678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.397 qpair failed and we were unable to recover it. 00:25:04.397 [2024-07-12 17:14:04.012954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.397 [2024-07-12 17:14:04.013018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.397 qpair failed and we were unable to recover it. 
00:25:04.397 [2024-07-12 17:14:04.013273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.397 [2024-07-12 17:14:04.013337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.397 qpair failed and we were unable to recover it. 00:25:04.397 [2024-07-12 17:14:04.013614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.397 [2024-07-12 17:14:04.013675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.397 qpair failed and we were unable to recover it. 00:25:04.397 [2024-07-12 17:14:04.013948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.397 [2024-07-12 17:14:04.014011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.397 qpair failed and we were unable to recover it. 00:25:04.397 [2024-07-12 17:14:04.014229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.397 [2024-07-12 17:14:04.014292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.397 qpair failed and we were unable to recover it. 00:25:04.397 [2024-07-12 17:14:04.014518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.397 [2024-07-12 17:14:04.014581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.397 qpair failed and we were unable to recover it. 00:25:04.397 [2024-07-12 17:14:04.014814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.397 [2024-07-12 17:14:04.014880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.397 qpair failed and we were unable to recover it. 00:25:04.397 [2024-07-12 17:14:04.015142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.397 [2024-07-12 17:14:04.015205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.397 qpair failed and we were unable to recover it. 00:25:04.397 [2024-07-12 17:14:04.015458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.397 [2024-07-12 17:14:04.015521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.397 qpair failed and we were unable to recover it. 00:25:04.397 [2024-07-12 17:14:04.015779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.397 [2024-07-12 17:14:04.015824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.397 qpair failed and we were unable to recover it. 00:25:04.397 [2024-07-12 17:14:04.016089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.397 [2024-07-12 17:14:04.016157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.397 qpair failed and we were unable to recover it. 
00:25:04.397 [2024-07-12 17:14:04.016379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.397 [2024-07-12 17:14:04.016442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.397 qpair failed and we were unable to recover it. 00:25:04.397 [2024-07-12 17:14:04.016682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.397 [2024-07-12 17:14:04.016726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.397 qpair failed and we were unable to recover it. 00:25:04.397 [2024-07-12 17:14:04.017012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.397 [2024-07-12 17:14:04.017081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.397 qpair failed and we were unable to recover it. 00:25:04.397 [2024-07-12 17:14:04.017346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.397 [2024-07-12 17:14:04.017410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.397 qpair failed and we were unable to recover it. 00:25:04.397 [2024-07-12 17:14:04.017668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.398 [2024-07-12 17:14:04.017712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.398 qpair failed and we were unable to recover it. 00:25:04.398 [2024-07-12 17:14:04.017951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.398 [2024-07-12 17:14:04.017995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.398 qpair failed and we were unable to recover it. 00:25:04.398 [2024-07-12 17:14:04.018205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.398 [2024-07-12 17:14:04.018267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.398 qpair failed and we were unable to recover it. 00:25:04.398 [2024-07-12 17:14:04.018521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.398 [2024-07-12 17:14:04.018583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.398 qpair failed and we were unable to recover it. 00:25:04.398 [2024-07-12 17:14:04.018840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.398 [2024-07-12 17:14:04.018904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.398 qpair failed and we were unable to recover it. 00:25:04.398 [2024-07-12 17:14:04.019157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.398 [2024-07-12 17:14:04.019220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.398 qpair failed and we were unable to recover it. 
00:25:04.398 [2024-07-12 17:14:04.019476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.398 [2024-07-12 17:14:04.019538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.398 qpair failed and we were unable to recover it. 00:25:04.398 [2024-07-12 17:14:04.019780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.398 [2024-07-12 17:14:04.019825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.398 qpair failed and we were unable to recover it. 00:25:04.398 [2024-07-12 17:14:04.020054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.398 [2024-07-12 17:14:04.020149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.398 qpair failed and we were unable to recover it. 00:25:04.398 [2024-07-12 17:14:04.020362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.398 [2024-07-12 17:14:04.020424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.398 qpair failed and we were unable to recover it. 00:25:04.398 [2024-07-12 17:14:04.020675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.398 [2024-07-12 17:14:04.020718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.398 qpair failed and we were unable to recover it. 00:25:04.398 [2024-07-12 17:14:04.020921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.398 [2024-07-12 17:14:04.020965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.398 qpair failed and we were unable to recover it. 00:25:04.398 [2024-07-12 17:14:04.021233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.398 [2024-07-12 17:14:04.021296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.398 qpair failed and we were unable to recover it. 00:25:04.398 [2024-07-12 17:14:04.021512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.398 [2024-07-12 17:14:04.021544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.398 qpair failed and we were unable to recover it. 00:25:04.398 [2024-07-12 17:14:04.021699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.398 [2024-07-12 17:14:04.021730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.398 qpair failed and we were unable to recover it. 00:25:04.398 [2024-07-12 17:14:04.021898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.398 [2024-07-12 17:14:04.021932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.398 qpair failed and we were unable to recover it. 
00:25:04.398 [2024-07-12 17:14:04.022108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.398 [2024-07-12 17:14:04.022139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.398 qpair failed and we were unable to recover it. 00:25:04.398 [2024-07-12 17:14:04.022331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.398 [2024-07-12 17:14:04.022362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.398 qpair failed and we were unable to recover it. 00:25:04.398 [2024-07-12 17:14:04.022582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.398 [2024-07-12 17:14:04.022614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.398 qpair failed and we were unable to recover it. 00:25:04.398 [2024-07-12 17:14:04.022761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.398 [2024-07-12 17:14:04.022792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.398 qpair failed and we were unable to recover it. 00:25:04.398 [2024-07-12 17:14:04.022950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.398 [2024-07-12 17:14:04.022981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.398 qpair failed and we were unable to recover it. 00:25:04.398 [2024-07-12 17:14:04.023128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.398 [2024-07-12 17:14:04.023159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.398 qpair failed and we were unable to recover it. 00:25:04.398 [2024-07-12 17:14:04.023408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.398 [2024-07-12 17:14:04.023439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.398 qpair failed and we were unable to recover it. 00:25:04.398 [2024-07-12 17:14:04.023614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.398 [2024-07-12 17:14:04.023644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.398 qpair failed and we were unable to recover it. 00:25:04.398 [2024-07-12 17:14:04.023790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.398 [2024-07-12 17:14:04.023820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.398 qpair failed and we were unable to recover it. 00:25:04.398 [2024-07-12 17:14:04.023983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.398 [2024-07-12 17:14:04.024022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.398 qpair failed and we were unable to recover it. 
00:25:04.398 [2024-07-12 17:14:04.024248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.398 [2024-07-12 17:14:04.024279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.398 qpair failed and we were unable to recover it. 00:25:04.398 [2024-07-12 17:14:04.024494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.398 [2024-07-12 17:14:04.024524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.398 qpair failed and we were unable to recover it. 00:25:04.398 [2024-07-12 17:14:04.024707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.398 [2024-07-12 17:14:04.024744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.398 qpair failed and we were unable to recover it. 00:25:04.398 [2024-07-12 17:14:04.024883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.398 [2024-07-12 17:14:04.024913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.398 qpair failed and we were unable to recover it. 00:25:04.398 [2024-07-12 17:14:04.025049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.398 [2024-07-12 17:14:04.025084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.398 qpair failed and we were unable to recover it. 00:25:04.672 [2024-07-12 17:14:04.025257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.672 [2024-07-12 17:14:04.025297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.672 qpair failed and we were unable to recover it. 00:25:04.672 [2024-07-12 17:14:04.025487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.672 [2024-07-12 17:14:04.025518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.672 qpair failed and we were unable to recover it. 00:25:04.672 [2024-07-12 17:14:04.025656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.672 [2024-07-12 17:14:04.025685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.672 qpair failed and we were unable to recover it. 00:25:04.672 [2024-07-12 17:14:04.025884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.672 [2024-07-12 17:14:04.025916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.672 qpair failed and we were unable to recover it. 00:25:04.672 [2024-07-12 17:14:04.026046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.672 [2024-07-12 17:14:04.026096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.672 qpair failed and we were unable to recover it. 
00:25:04.672 [2024-07-12 17:14:04.026287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.672 [2024-07-12 17:14:04.026319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.672 qpair failed and we were unable to recover it. 00:25:04.672 [2024-07-12 17:14:04.026512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.672 [2024-07-12 17:14:04.026546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.672 qpair failed and we were unable to recover it. 00:25:04.672 [2024-07-12 17:14:04.026691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.672 [2024-07-12 17:14:04.026722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.672 qpair failed and we were unable to recover it. 00:25:04.672 [2024-07-12 17:14:04.026882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.672 [2024-07-12 17:14:04.026914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.672 qpair failed and we were unable to recover it. 00:25:04.672 [2024-07-12 17:14:04.027058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.672 [2024-07-12 17:14:04.027087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.672 qpair failed and we were unable to recover it. 00:25:04.672 [2024-07-12 17:14:04.027341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.672 [2024-07-12 17:14:04.027374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.672 qpair failed and we were unable to recover it. 00:25:04.672 [2024-07-12 17:14:04.027543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.672 [2024-07-12 17:14:04.027581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.672 qpair failed and we were unable to recover it. 00:25:04.672 [2024-07-12 17:14:04.027789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.672 [2024-07-12 17:14:04.027821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.672 qpair failed and we were unable to recover it. 00:25:04.672 [2024-07-12 17:14:04.027937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.672 [2024-07-12 17:14:04.027969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.672 qpair failed and we were unable to recover it. 00:25:04.672 [2024-07-12 17:14:04.028166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.673 [2024-07-12 17:14:04.028197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.673 qpair failed and we were unable to recover it. 
00:25:04.673 [2024-07-12 17:14:04.028375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.673 [2024-07-12 17:14:04.028406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.673 qpair failed and we were unable to recover it. 00:25:04.673 [2024-07-12 17:14:04.028596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.673 [2024-07-12 17:14:04.028631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.673 qpair failed and we were unable to recover it. 00:25:04.673 [2024-07-12 17:14:04.028852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.673 [2024-07-12 17:14:04.028885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.673 qpair failed and we were unable to recover it. 00:25:04.673 [2024-07-12 17:14:04.029028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.673 [2024-07-12 17:14:04.029066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.673 qpair failed and we were unable to recover it. 00:25:04.673 [2024-07-12 17:14:04.029246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.673 [2024-07-12 17:14:04.029278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.673 qpair failed and we were unable to recover it. 00:25:04.673 [2024-07-12 17:14:04.029436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.673 [2024-07-12 17:14:04.029466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.673 qpair failed and we were unable to recover it. 00:25:04.673 [2024-07-12 17:14:04.029634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.673 [2024-07-12 17:14:04.029665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.673 qpair failed and we were unable to recover it. 00:25:04.673 [2024-07-12 17:14:04.029827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.673 [2024-07-12 17:14:04.029859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.673 qpair failed and we were unable to recover it. 00:25:04.673 [2024-07-12 17:14:04.030021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.673 [2024-07-12 17:14:04.030052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.673 qpair failed and we were unable to recover it. 00:25:04.673 [2024-07-12 17:14:04.030290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.673 [2024-07-12 17:14:04.030321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.673 qpair failed and we were unable to recover it. 
00:25:04.673 [2024-07-12 17:14:04.030494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.673 [2024-07-12 17:14:04.030531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.673 qpair failed and we were unable to recover it. 00:25:04.673 [2024-07-12 17:14:04.030699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.673 [2024-07-12 17:14:04.030729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.673 qpair failed and we were unable to recover it. 00:25:04.673 [2024-07-12 17:14:04.030898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.673 [2024-07-12 17:14:04.030928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.673 qpair failed and we were unable to recover it. 00:25:04.673 [2024-07-12 17:14:04.031106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.673 [2024-07-12 17:14:04.031137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.673 qpair failed and we were unable to recover it. 00:25:04.673 [2024-07-12 17:14:04.031264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.673 [2024-07-12 17:14:04.031292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.673 qpair failed and we were unable to recover it. 00:25:04.673 [2024-07-12 17:14:04.031472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.673 [2024-07-12 17:14:04.031502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.673 qpair failed and we were unable to recover it. 00:25:04.673 [2024-07-12 17:14:04.031668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.673 [2024-07-12 17:14:04.031700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.673 qpair failed and we were unable to recover it. 00:25:04.673 [2024-07-12 17:14:04.031848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.673 [2024-07-12 17:14:04.031879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.673 qpair failed and we were unable to recover it. 00:25:04.673 [2024-07-12 17:14:04.032041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.673 [2024-07-12 17:14:04.032077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.673 qpair failed and we were unable to recover it. 00:25:04.673 [2024-07-12 17:14:04.032294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.673 [2024-07-12 17:14:04.032324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.673 qpair failed and we were unable to recover it. 
00:25:04.673 [2024-07-12 17:14:04.032536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.673 [2024-07-12 17:14:04.032568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.673 qpair failed and we were unable to recover it. 00:25:04.673 [2024-07-12 17:14:04.032685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.673 [2024-07-12 17:14:04.032713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.673 qpair failed and we were unable to recover it. 00:25:04.673 [2024-07-12 17:14:04.032891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.673 [2024-07-12 17:14:04.032922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.673 qpair failed and we were unable to recover it. 00:25:04.673 [2024-07-12 17:14:04.033063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.673 [2024-07-12 17:14:04.033093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.673 qpair failed and we were unable to recover it. 00:25:04.673 [2024-07-12 17:14:04.033272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.673 [2024-07-12 17:14:04.033302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.673 qpair failed and we were unable to recover it. 00:25:04.673 [2024-07-12 17:14:04.033510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.673 [2024-07-12 17:14:04.033541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.673 qpair failed and we were unable to recover it. 00:25:04.673 [2024-07-12 17:14:04.033756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.673 [2024-07-12 17:14:04.033788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.673 qpair failed and we were unable to recover it. 00:25:04.673 [2024-07-12 17:14:04.033937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.673 [2024-07-12 17:14:04.033968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.673 qpair failed and we were unable to recover it. 00:25:04.673 [2024-07-12 17:14:04.034102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.673 [2024-07-12 17:14:04.034142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.673 qpair failed and we were unable to recover it. 00:25:04.673 [2024-07-12 17:14:04.034352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.673 [2024-07-12 17:14:04.034387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.673 qpair failed and we were unable to recover it. 
00:25:04.673 [2024-07-12 17:14:04.034595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.673 [2024-07-12 17:14:04.034625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.673 qpair failed and we were unable to recover it. 00:25:04.673 [2024-07-12 17:14:04.034805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.673 [2024-07-12 17:14:04.034835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.673 qpair failed and we were unable to recover it. 00:25:04.673 [2024-07-12 17:14:04.035004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.673 [2024-07-12 17:14:04.035035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.673 qpair failed and we were unable to recover it. 00:25:04.673 [2024-07-12 17:14:04.035203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.673 [2024-07-12 17:14:04.035234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.673 qpair failed and we were unable to recover it. 00:25:04.673 [2024-07-12 17:14:04.035368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.673 [2024-07-12 17:14:04.035398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.673 qpair failed and we were unable to recover it. 00:25:04.673 [2024-07-12 17:14:04.035610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.673 [2024-07-12 17:14:04.035640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.673 qpair failed and we were unable to recover it. 00:25:04.673 [2024-07-12 17:14:04.035770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.673 [2024-07-12 17:14:04.035801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.673 qpair failed and we were unable to recover it. 00:25:04.673 [2024-07-12 17:14:04.035904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.673 [2024-07-12 17:14:04.035933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.673 qpair failed and we were unable to recover it. 00:25:04.673 [2024-07-12 17:14:04.036101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.674 [2024-07-12 17:14:04.036131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.674 qpair failed and we were unable to recover it. 00:25:04.674 [2024-07-12 17:14:04.036296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.674 [2024-07-12 17:14:04.036325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.674 qpair failed and we were unable to recover it. 
00:25:04.674 [2024-07-12 17:14:04.036534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.674 [2024-07-12 17:14:04.036565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.674 qpair failed and we were unable to recover it. 00:25:04.674 [2024-07-12 17:14:04.036767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.674 [2024-07-12 17:14:04.036828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.674 qpair failed and we were unable to recover it. 00:25:04.674 [2024-07-12 17:14:04.037006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.674 [2024-07-12 17:14:04.037064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.674 qpair failed and we were unable to recover it. 00:25:04.674 [2024-07-12 17:14:04.037381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.674 [2024-07-12 17:14:04.037458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.674 qpair failed and we were unable to recover it. 00:25:04.674 [2024-07-12 17:14:04.037766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.674 [2024-07-12 17:14:04.037827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.674 qpair failed and we were unable to recover it. 00:25:04.674 [2024-07-12 17:14:04.038066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.674 [2024-07-12 17:14:04.038131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.674 qpair failed and we were unable to recover it. 00:25:04.674 [2024-07-12 17:14:04.038419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.674 [2024-07-12 17:14:04.038483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.674 qpair failed and we were unable to recover it. 00:25:04.674 [2024-07-12 17:14:04.038759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.674 [2024-07-12 17:14:04.038825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.674 qpair failed and we were unable to recover it. 00:25:04.674 [2024-07-12 17:14:04.039045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.674 [2024-07-12 17:14:04.039110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.674 qpair failed and we were unable to recover it. 00:25:04.674 [2024-07-12 17:14:04.039377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.674 [2024-07-12 17:14:04.039441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.674 qpair failed and we were unable to recover it. 
00:25:04.674 [2024-07-12 17:14:04.039708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.674 [2024-07-12 17:14:04.039810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.674 qpair failed and we were unable to recover it. 00:25:04.674 [2024-07-12 17:14:04.039999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.674 [2024-07-12 17:14:04.040055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.674 qpair failed and we were unable to recover it. 00:25:04.674 [2024-07-12 17:14:04.040304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.674 [2024-07-12 17:14:04.040369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.674 qpair failed and we were unable to recover it. 00:25:04.674 [2024-07-12 17:14:04.040704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.674 [2024-07-12 17:14:04.040811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.674 qpair failed and we were unable to recover it. 00:25:04.674 [2024-07-12 17:14:04.041016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.674 [2024-07-12 17:14:04.041072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.674 qpair failed and we were unable to recover it. 00:25:04.674 [2024-07-12 17:14:04.041328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.674 [2024-07-12 17:14:04.041391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.674 qpair failed and we were unable to recover it. 00:25:04.674 [2024-07-12 17:14:04.041634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.674 [2024-07-12 17:14:04.041708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.674 qpair failed and we were unable to recover it. 00:25:04.674 [2024-07-12 17:14:04.041946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.674 [2024-07-12 17:14:04.041979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.674 qpair failed and we were unable to recover it. 00:25:04.674 [2024-07-12 17:14:04.042164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.674 [2024-07-12 17:14:04.042228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.674 qpair failed and we were unable to recover it. 00:25:04.674 [2024-07-12 17:14:04.042506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.674 [2024-07-12 17:14:04.042571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.674 qpair failed and we were unable to recover it. 
00:25:04.674 [2024-07-12 17:14:04.042827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.674 [2024-07-12 17:14:04.042862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.674 qpair failed and we were unable to recover it. 00:25:04.674 [2024-07-12 17:14:04.043068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.674 [2024-07-12 17:14:04.043125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.674 qpair failed and we were unable to recover it. 00:25:04.674 [2024-07-12 17:14:04.043326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.674 [2024-07-12 17:14:04.043391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.674 qpair failed and we were unable to recover it. 00:25:04.674 [2024-07-12 17:14:04.043674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.674 [2024-07-12 17:14:04.043750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.674 qpair failed and we were unable to recover it. 00:25:04.674 [2024-07-12 17:14:04.044023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.674 [2024-07-12 17:14:04.044095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.674 qpair failed and we were unable to recover it. 00:25:04.674 [2024-07-12 17:14:04.044351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.674 [2024-07-12 17:14:04.044415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.674 qpair failed and we were unable to recover it. 00:25:04.674 [2024-07-12 17:14:04.044702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.674 [2024-07-12 17:14:04.044793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.674 qpair failed and we were unable to recover it. 00:25:04.674 [2024-07-12 17:14:04.044992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.674 [2024-07-12 17:14:04.045025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.674 qpair failed and we were unable to recover it. 00:25:04.674 [2024-07-12 17:14:04.045306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.674 [2024-07-12 17:14:04.045380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.674 qpair failed and we were unable to recover it. 00:25:04.674 [2024-07-12 17:14:04.045668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.674 [2024-07-12 17:14:04.045732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.674 qpair failed and we were unable to recover it. 
00:25:04.674 [2024-07-12 17:14:04.045983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.674 [2024-07-12 17:14:04.046018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.674 qpair failed and we were unable to recover it. 00:25:04.674 [2024-07-12 17:14:04.046269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.674 [2024-07-12 17:14:04.046332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.674 qpair failed and we were unable to recover it. 00:25:04.674 [2024-07-12 17:14:04.046581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.674 [2024-07-12 17:14:04.046644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.674 qpair failed and we were unable to recover it. 00:25:04.674 [2024-07-12 17:14:04.046932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.674 [2024-07-12 17:14:04.046967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.674 qpair failed and we were unable to recover it. 00:25:04.674 [2024-07-12 17:14:04.047182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.674 [2024-07-12 17:14:04.047246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.674 qpair failed and we were unable to recover it. 00:25:04.674 [2024-07-12 17:14:04.047526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.674 [2024-07-12 17:14:04.047590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.674 qpair failed and we were unable to recover it. 00:25:04.674 [2024-07-12 17:14:04.047830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.674 [2024-07-12 17:14:04.047865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.674 qpair failed and we were unable to recover it. 00:25:04.674 [2024-07-12 17:14:04.048067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.674 [2024-07-12 17:14:04.048111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.674 qpair failed and we were unable to recover it. 00:25:04.674 [2024-07-12 17:14:04.048342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.674 [2024-07-12 17:14:04.048407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.674 qpair failed and we were unable to recover it. 00:25:04.674 [2024-07-12 17:14:04.048668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.674 [2024-07-12 17:14:04.048731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.674 qpair failed and we were unable to recover it. 
00:25:04.674 [2024-07-12 17:14:04.048988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.674 [2024-07-12 17:14:04.049022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.674 qpair failed and we were unable to recover it. 00:25:04.674 [2024-07-12 17:14:04.049206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.674 [2024-07-12 17:14:04.049249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.674 qpair failed and we were unable to recover it. 00:25:04.674 [2024-07-12 17:14:04.049455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.674 [2024-07-12 17:14:04.049519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.674 qpair failed and we were unable to recover it. 00:25:04.674 [2024-07-12 17:14:04.049807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.675 [2024-07-12 17:14:04.049841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.675 qpair failed and we were unable to recover it. 00:25:04.675 [2024-07-12 17:14:04.050060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.675 [2024-07-12 17:14:04.050122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.675 qpair failed and we were unable to recover it. 00:25:04.675 [2024-07-12 17:14:04.050439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.675 [2024-07-12 17:14:04.050504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.675 qpair failed and we were unable to recover it. 00:25:04.675 [2024-07-12 17:14:04.050789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.675 [2024-07-12 17:14:04.050832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.675 qpair failed and we were unable to recover it. 00:25:04.675 [2024-07-12 17:14:04.051104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.675 [2024-07-12 17:14:04.051169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.675 qpair failed and we were unable to recover it. 00:25:04.675 [2024-07-12 17:14:04.051376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.675 [2024-07-12 17:14:04.051437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.675 qpair failed and we were unable to recover it. 00:25:04.675 [2024-07-12 17:14:04.051758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.675 [2024-07-12 17:14:04.051824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.675 qpair failed and we were unable to recover it. 
00:25:04.675 [2024-07-12 17:14:04.052081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.675 [2024-07-12 17:14:04.052145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.675 qpair failed and we were unable to recover it. 00:25:04.675 [2024-07-12 17:14:04.052435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.675 [2024-07-12 17:14:04.052499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.675 qpair failed and we were unable to recover it. 00:25:04.675 [2024-07-12 17:14:04.052795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.675 [2024-07-12 17:14:04.052840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.675 qpair failed and we were unable to recover it. 00:25:04.675 [2024-07-12 17:14:04.053113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.675 [2024-07-12 17:14:04.053157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.675 qpair failed and we were unable to recover it. 00:25:04.675 [2024-07-12 17:14:04.053419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.675 [2024-07-12 17:14:04.053483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.675 qpair failed and we were unable to recover it. 00:25:04.675 [2024-07-12 17:14:04.053799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.675 [2024-07-12 17:14:04.053843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.675 qpair failed and we were unable to recover it. 00:25:04.675 [2024-07-12 17:14:04.054111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.675 [2024-07-12 17:14:04.054185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.675 qpair failed and we were unable to recover it. 00:25:04.675 [2024-07-12 17:14:04.054496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.675 [2024-07-12 17:14:04.054561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.675 qpair failed and we were unable to recover it. 00:25:04.675 [2024-07-12 17:14:04.054876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.675 [2024-07-12 17:14:04.054920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.675 qpair failed and we were unable to recover it. 00:25:04.675 [2024-07-12 17:14:04.055147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.675 [2024-07-12 17:14:04.055211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.675 qpair failed and we were unable to recover it. 
00:25:04.675 [2024-07-12 17:14:04.055501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.675 [2024-07-12 17:14:04.055565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.675 qpair failed and we were unable to recover it. 00:25:04.675 [2024-07-12 17:14:04.055891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.675 [2024-07-12 17:14:04.055956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.675 qpair failed and we were unable to recover it. 00:25:04.675 [2024-07-12 17:14:04.056246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.675 [2024-07-12 17:14:04.056309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.675 qpair failed and we were unable to recover it. 00:25:04.675 [2024-07-12 17:14:04.056581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.675 [2024-07-12 17:14:04.056646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.675 qpair failed and we were unable to recover it. 00:25:04.675 [2024-07-12 17:14:04.056917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.675 [2024-07-12 17:14:04.056982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.675 qpair failed and we were unable to recover it. 00:25:04.675 [2024-07-12 17:14:04.057234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.675 [2024-07-12 17:14:04.057298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.675 qpair failed and we were unable to recover it. 00:25:04.675 [2024-07-12 17:14:04.057590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.675 [2024-07-12 17:14:04.057654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.675 qpair failed and we were unable to recover it. 00:25:04.675 [2024-07-12 17:14:04.057971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.675 [2024-07-12 17:14:04.058037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.675 qpair failed and we were unable to recover it. 00:25:04.675 [2024-07-12 17:14:04.058287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.675 [2024-07-12 17:14:04.058351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.675 qpair failed and we were unable to recover it. 00:25:04.675 [2024-07-12 17:14:04.058647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.675 [2024-07-12 17:14:04.058710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.675 qpair failed and we were unable to recover it. 
00:25:04.675 [2024-07-12 17:14:04.058987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.675 [2024-07-12 17:14:04.059052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.675 qpair failed and we were unable to recover it. 00:25:04.675 [2024-07-12 17:14:04.059307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.675 [2024-07-12 17:14:04.059372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.675 qpair failed and we were unable to recover it. 00:25:04.675 [2024-07-12 17:14:04.059685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.675 [2024-07-12 17:14:04.059775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.675 qpair failed and we were unable to recover it. 00:25:04.675 [2024-07-12 17:14:04.060074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.675 [2024-07-12 17:14:04.060138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.675 qpair failed and we were unable to recover it. 00:25:04.675 [2024-07-12 17:14:04.060402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.675 [2024-07-12 17:14:04.060466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.675 qpair failed and we were unable to recover it. 00:25:04.675 [2024-07-12 17:14:04.060755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.675 [2024-07-12 17:14:04.060829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.675 qpair failed and we were unable to recover it. 00:25:04.675 [2024-07-12 17:14:04.061085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.675 [2024-07-12 17:14:04.061149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.675 qpair failed and we were unable to recover it. 00:25:04.675 [2024-07-12 17:14:04.061457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.675 [2024-07-12 17:14:04.061522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.675 qpair failed and we were unable to recover it. 00:25:04.675 [2024-07-12 17:14:04.061799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.675 [2024-07-12 17:14:04.061864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.675 qpair failed and we were unable to recover it. 00:25:04.675 [2024-07-12 17:14:04.062147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.675 [2024-07-12 17:14:04.062211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.675 qpair failed and we were unable to recover it. 
00:25:04.675 [2024-07-12 17:14:04.062502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.675 [2024-07-12 17:14:04.062565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.675 qpair failed and we were unable to recover it. 00:25:04.675 [2024-07-12 17:14:04.062867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.675 [2024-07-12 17:14:04.062932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.675 qpair failed and we were unable to recover it. 00:25:04.675 [2024-07-12 17:14:04.063201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.675 [2024-07-12 17:14:04.063262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.675 qpair failed and we were unable to recover it. 00:25:04.675 [2024-07-12 17:14:04.063559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.675 [2024-07-12 17:14:04.063624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.675 qpair failed and we were unable to recover it. 00:25:04.675 [2024-07-12 17:14:04.063898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.675 [2024-07-12 17:14:04.063964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.675 qpair failed and we were unable to recover it. 00:25:04.675 [2024-07-12 17:14:04.064222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.675 [2024-07-12 17:14:04.064287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.675 qpair failed and we were unable to recover it. 00:25:04.675 [2024-07-12 17:14:04.064578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.675 [2024-07-12 17:14:04.064642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.675 qpair failed and we were unable to recover it. 00:25:04.675 [2024-07-12 17:14:04.064951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.675 [2024-07-12 17:14:04.065016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.675 qpair failed and we were unable to recover it. 00:25:04.675 [2024-07-12 17:14:04.065313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.675 [2024-07-12 17:14:04.065377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.675 qpair failed and we were unable to recover it. 00:25:04.675 [2024-07-12 17:14:04.065625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.675 [2024-07-12 17:14:04.065690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.676 qpair failed and we were unable to recover it. 
00:25:04.676 [2024-07-12 17:14:04.065974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.676 [2024-07-12 17:14:04.066039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.676 qpair failed and we were unable to recover it. 00:25:04.676 [2024-07-12 17:14:04.066337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.676 [2024-07-12 17:14:04.066401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.676 qpair failed and we were unable to recover it. 00:25:04.676 [2024-07-12 17:14:04.066607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.676 [2024-07-12 17:14:04.066656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.676 qpair failed and we were unable to recover it. 00:25:04.676 [2024-07-12 17:14:04.066889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.676 [2024-07-12 17:14:04.066939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.676 qpair failed and we were unable to recover it. 00:25:04.676 [2024-07-12 17:14:04.067212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.676 [2024-07-12 17:14:04.067276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.676 qpair failed and we were unable to recover it. 00:25:04.676 [2024-07-12 17:14:04.067560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.676 [2024-07-12 17:14:04.067625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.676 qpair failed and we were unable to recover it. 00:25:04.676 [2024-07-12 17:14:04.067897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.676 [2024-07-12 17:14:04.067954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.676 qpair failed and we were unable to recover it. 00:25:04.676 [2024-07-12 17:14:04.068162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.676 [2024-07-12 17:14:04.068226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.676 qpair failed and we were unable to recover it. 00:25:04.676 [2024-07-12 17:14:04.068464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.676 [2024-07-12 17:14:04.068527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.676 qpair failed and we were unable to recover it. 00:25:04.676 [2024-07-12 17:14:04.068767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.676 [2024-07-12 17:14:04.068834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.676 qpair failed and we were unable to recover it. 
00:25:04.676 [2024-07-12 17:14:04.069057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.676 [2024-07-12 17:14:04.069120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.676 qpair failed and we were unable to recover it. 00:25:04.676 [2024-07-12 17:14:04.069334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.676 [2024-07-12 17:14:04.069398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.676 qpair failed and we were unable to recover it. 00:25:04.676 [2024-07-12 17:14:04.069643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.676 [2024-07-12 17:14:04.069706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.676 qpair failed and we were unable to recover it. 00:25:04.676 [2024-07-12 17:14:04.069941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.676 [2024-07-12 17:14:04.069990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.676 qpair failed and we were unable to recover it. 00:25:04.676 [2024-07-12 17:14:04.070232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.676 [2024-07-12 17:14:04.070296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.676 qpair failed and we were unable to recover it. 00:25:04.676 [2024-07-12 17:14:04.070535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.676 [2024-07-12 17:14:04.070598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.676 qpair failed and we were unable to recover it. 00:25:04.676 [2024-07-12 17:14:04.070817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.676 [2024-07-12 17:14:04.070867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.676 qpair failed and we were unable to recover it. 00:25:04.676 [2024-07-12 17:14:04.071082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.676 [2024-07-12 17:14:04.071145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.676 qpair failed and we were unable to recover it. 00:25:04.676 [2024-07-12 17:14:04.071382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.676 [2024-07-12 17:14:04.071446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.676 qpair failed and we were unable to recover it. 00:25:04.676 [2024-07-12 17:14:04.071623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.676 [2024-07-12 17:14:04.071688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.676 qpair failed and we were unable to recover it. 
00:25:04.676 [2024-07-12 17:14:04.071967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.676 [2024-07-12 17:14:04.072016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.676 qpair failed and we were unable to recover it. 00:25:04.676 [2024-07-12 17:14:04.072263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.676 [2024-07-12 17:14:04.072325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.676 qpair failed and we were unable to recover it. 00:25:04.676 [2024-07-12 17:14:04.072581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.676 [2024-07-12 17:14:04.072645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.676 qpair failed and we were unable to recover it. 00:25:04.676 [2024-07-12 17:14:04.072859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.676 [2024-07-12 17:14:04.072908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.676 qpair failed and we were unable to recover it. 00:25:04.676 [2024-07-12 17:14:04.073176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.676 [2024-07-12 17:14:04.073239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.676 qpair failed and we were unable to recover it. 00:25:04.676 [2024-07-12 17:14:04.074051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.676 [2024-07-12 17:14:04.074088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.676 qpair failed and we were unable to recover it. 00:25:04.676 [2024-07-12 17:14:04.074259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.676 [2024-07-12 17:14:04.074310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.676 qpair failed and we were unable to recover it. 00:25:04.676 [2024-07-12 17:14:04.074454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.676 [2024-07-12 17:14:04.074486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.676 qpair failed and we were unable to recover it. 00:25:04.676 [2024-07-12 17:14:04.074633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.676 [2024-07-12 17:14:04.074664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.676 qpair failed and we were unable to recover it. 00:25:04.676 [2024-07-12 17:14:04.074851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.676 [2024-07-12 17:14:04.074903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.676 qpair failed and we were unable to recover it. 
00:25:04.676 [2024-07-12 17:14:04.075036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.676 [2024-07-12 17:14:04.075097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.676 qpair failed and we were unable to recover it. 00:25:04.676 [2024-07-12 17:14:04.075252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.676 [2024-07-12 17:14:04.075305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.676 qpair failed and we were unable to recover it. 00:25:04.676 [2024-07-12 17:14:04.075444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.676 [2024-07-12 17:14:04.075475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.676 qpair failed and we were unable to recover it. 00:25:04.676 [2024-07-12 17:14:04.075618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.677 [2024-07-12 17:14:04.075648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.677 qpair failed and we were unable to recover it. 00:25:04.677 [2024-07-12 17:14:04.075810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.677 [2024-07-12 17:14:04.075864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.677 qpair failed and we were unable to recover it. 00:25:04.677 [2024-07-12 17:14:04.076028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.677 [2024-07-12 17:14:04.076059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.677 qpair failed and we were unable to recover it. 00:25:04.677 [2024-07-12 17:14:04.076190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.677 [2024-07-12 17:14:04.076220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.677 qpair failed and we were unable to recover it. 00:25:04.677 [2024-07-12 17:14:04.076392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.677 [2024-07-12 17:14:04.076422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.677 qpair failed and we were unable to recover it. 00:25:04.677 [2024-07-12 17:14:04.076592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.677 [2024-07-12 17:14:04.076623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.677 qpair failed and we were unable to recover it. 00:25:04.677 [2024-07-12 17:14:04.076796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.677 [2024-07-12 17:14:04.076827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.677 qpair failed and we were unable to recover it. 
00:25:04.677 [2024-07-12 17:14:04.076929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.677 [2024-07-12 17:14:04.076960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.677 qpair failed and we were unable to recover it. 00:25:04.677 [2024-07-12 17:14:04.077066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.677 [2024-07-12 17:14:04.077097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.677 qpair failed and we were unable to recover it. 00:25:04.677 [2024-07-12 17:14:04.077233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.677 [2024-07-12 17:14:04.077263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.677 qpair failed and we were unable to recover it. 00:25:04.677 [2024-07-12 17:14:04.077399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.677 [2024-07-12 17:14:04.077430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.677 qpair failed and we were unable to recover it. 00:25:04.677 [2024-07-12 17:14:04.077545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.677 [2024-07-12 17:14:04.077576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.677 qpair failed and we were unable to recover it. 00:25:04.677 [2024-07-12 17:14:04.077766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.677 [2024-07-12 17:14:04.077798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.677 qpair failed and we were unable to recover it. 00:25:04.677 [2024-07-12 17:14:04.077914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.677 [2024-07-12 17:14:04.077950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.677 qpair failed and we were unable to recover it. 00:25:04.677 [2024-07-12 17:14:04.078071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.677 [2024-07-12 17:14:04.078103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.677 qpair failed and we were unable to recover it. 00:25:04.677 [2024-07-12 17:14:04.078319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.677 [2024-07-12 17:14:04.078351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.677 qpair failed and we were unable to recover it. 00:25:04.677 [2024-07-12 17:14:04.078542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.677 [2024-07-12 17:14:04.078573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.677 qpair failed and we were unable to recover it. 
00:25:04.677 [2024-07-12 17:14:04.078804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.677 [2024-07-12 17:14:04.078858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.677 qpair failed and we were unable to recover it. 00:25:04.677 [2024-07-12 17:14:04.078983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.677 [2024-07-12 17:14:04.079040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.677 qpair failed and we were unable to recover it. 00:25:04.677 [2024-07-12 17:14:04.079174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.677 [2024-07-12 17:14:04.079232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.677 qpair failed and we were unable to recover it. 00:25:04.677 [2024-07-12 17:14:04.079490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.677 [2024-07-12 17:14:04.079546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.677 qpair failed and we were unable to recover it. 00:25:04.677 [2024-07-12 17:14:04.079751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.677 [2024-07-12 17:14:04.079783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.677 qpair failed and we were unable to recover it. 00:25:04.677 [2024-07-12 17:14:04.079918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.677 [2024-07-12 17:14:04.079968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.677 qpair failed and we were unable to recover it. 00:25:04.677 [2024-07-12 17:14:04.080181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.677 [2024-07-12 17:14:04.080233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.677 qpair failed and we were unable to recover it. 00:25:04.677 [2024-07-12 17:14:04.080436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.677 [2024-07-12 17:14:04.080487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.677 qpair failed and we were unable to recover it. 00:25:04.677 [2024-07-12 17:14:04.080733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.677 [2024-07-12 17:14:04.080779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.677 qpair failed and we were unable to recover it. 00:25:04.677 [2024-07-12 17:14:04.080895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.677 [2024-07-12 17:14:04.080925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.677 qpair failed and we were unable to recover it. 
00:25:04.677 [2024-07-12 17:14:04.081120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.677 [2024-07-12 17:14:04.081177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.677 qpair failed and we were unable to recover it. 00:25:04.677 [2024-07-12 17:14:04.081423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.677 [2024-07-12 17:14:04.081477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.677 qpair failed and we were unable to recover it. 00:25:04.677 [2024-07-12 17:14:04.081675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.677 [2024-07-12 17:14:04.081707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.677 qpair failed and we were unable to recover it. 00:25:04.677 [2024-07-12 17:14:04.081872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.677 [2024-07-12 17:14:04.081904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.677 qpair failed and we were unable to recover it. 00:25:04.677 [2024-07-12 17:14:04.082072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.677 [2024-07-12 17:14:04.082135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.677 qpair failed and we were unable to recover it. 00:25:04.677 [2024-07-12 17:14:04.082397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.677 [2024-07-12 17:14:04.082428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.677 qpair failed and we were unable to recover it. 00:25:04.677 [2024-07-12 17:14:04.082597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.677 [2024-07-12 17:14:04.082639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.677 qpair failed and we were unable to recover it. 00:25:04.677 [2024-07-12 17:14:04.082818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.677 [2024-07-12 17:14:04.082870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.677 qpair failed and we were unable to recover it. 00:25:04.677 [2024-07-12 17:14:04.083016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.677 [2024-07-12 17:14:04.083071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.677 qpair failed and we were unable to recover it. 00:25:04.677 [2024-07-12 17:14:04.083245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.677 [2024-07-12 17:14:04.083298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.677 qpair failed and we were unable to recover it. 
00:25:04.677 [2024-07-12 17:14:04.083471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.677 [2024-07-12 17:14:04.083503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.677 qpair failed and we were unable to recover it. 00:25:04.677 [2024-07-12 17:14:04.083672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.677 [2024-07-12 17:14:04.083714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.677 qpair failed and we were unable to recover it. 00:25:04.677 [2024-07-12 17:14:04.083866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.677 [2024-07-12 17:14:04.083916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.677 qpair failed and we were unable to recover it. 00:25:04.677 [2024-07-12 17:14:04.084146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.677 [2024-07-12 17:14:04.084198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.677 qpair failed and we were unable to recover it. 00:25:04.678 [2024-07-12 17:14:04.084402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.678 [2024-07-12 17:14:04.084458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.678 qpair failed and we were unable to recover it. 00:25:04.678 [2024-07-12 17:14:04.084603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.678 [2024-07-12 17:14:04.084634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.678 qpair failed and we were unable to recover it. 00:25:04.678 [2024-07-12 17:14:04.084811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.678 [2024-07-12 17:14:04.084863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.678 qpair failed and we were unable to recover it. 00:25:04.678 [2024-07-12 17:14:04.084997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.678 [2024-07-12 17:14:04.085050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.678 qpair failed and we were unable to recover it. 00:25:04.678 [2024-07-12 17:14:04.085228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.678 [2024-07-12 17:14:04.085260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.678 qpair failed and we were unable to recover it. 00:25:04.678 [2024-07-12 17:14:04.085444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.678 [2024-07-12 17:14:04.085475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.678 qpair failed and we were unable to recover it. 
00:25:04.678 [2024-07-12 17:14:04.085659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.678 [2024-07-12 17:14:04.085691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.678 qpair failed and we were unable to recover it. 00:25:04.678 [2024-07-12 17:14:04.085858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.678 [2024-07-12 17:14:04.085912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.678 qpair failed and we were unable to recover it. 00:25:04.678 [2024-07-12 17:14:04.086009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.678 [2024-07-12 17:14:04.086040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.678 qpair failed and we were unable to recover it. 00:25:04.678 [2024-07-12 17:14:04.086249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.678 [2024-07-12 17:14:04.086300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.678 qpair failed and we were unable to recover it. 00:25:04.678 [2024-07-12 17:14:04.086492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.678 [2024-07-12 17:14:04.086524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.678 qpair failed and we were unable to recover it. 00:25:04.678 [2024-07-12 17:14:04.086755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.678 [2024-07-12 17:14:04.086788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.678 qpair failed and we were unable to recover it. 00:25:04.678 [2024-07-12 17:14:04.086924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.678 [2024-07-12 17:14:04.086980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.678 qpair failed and we were unable to recover it. 00:25:04.678 [2024-07-12 17:14:04.087213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.678 [2024-07-12 17:14:04.087265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.678 qpair failed and we were unable to recover it. 00:25:04.678 [2024-07-12 17:14:04.087518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.678 [2024-07-12 17:14:04.087570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.678 qpair failed and we were unable to recover it. 00:25:04.678 [2024-07-12 17:14:04.087753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.678 [2024-07-12 17:14:04.087794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.678 qpair failed and we were unable to recover it. 
00:25:04.678 [2024-07-12 17:14:04.087934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.678 [2024-07-12 17:14:04.087965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.678 qpair failed and we were unable to recover it. 00:25:04.678 [2024-07-12 17:14:04.088195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.678 [2024-07-12 17:14:04.088296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.678 qpair failed and we were unable to recover it. 00:25:04.678 [2024-07-12 17:14:04.088599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.678 [2024-07-12 17:14:04.088668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.678 qpair failed and we were unable to recover it. 00:25:04.678 [2024-07-12 17:14:04.088894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.678 [2024-07-12 17:14:04.088927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.678 qpair failed and we were unable to recover it. 00:25:04.678 [2024-07-12 17:14:04.089133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.678 [2024-07-12 17:14:04.089198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.678 qpair failed and we were unable to recover it. 00:25:04.678 [2024-07-12 17:14:04.089505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.678 [2024-07-12 17:14:04.089573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.678 qpair failed and we were unable to recover it. 00:25:04.678 [2024-07-12 17:14:04.089866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.678 [2024-07-12 17:14:04.089898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.678 qpair failed and we were unable to recover it. 00:25:04.678 [2024-07-12 17:14:04.090055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.678 [2024-07-12 17:14:04.090119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.678 qpair failed and we were unable to recover it. 00:25:04.678 [2024-07-12 17:14:04.090400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.678 [2024-07-12 17:14:04.090464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.678 qpair failed and we were unable to recover it. 00:25:04.678 [2024-07-12 17:14:04.090731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.678 [2024-07-12 17:14:04.090770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.678 qpair failed and we were unable to recover it. 
00:25:04.678 [2024-07-12 17:14:04.090923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.678 [2024-07-12 17:14:04.090954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.678 qpair failed and we were unable to recover it. 00:25:04.678 [2024-07-12 17:14:04.091136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.678 [2024-07-12 17:14:04.091202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.678 qpair failed and we were unable to recover it. 00:25:04.678 [2024-07-12 17:14:04.091454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.678 [2024-07-12 17:14:04.091520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.678 qpair failed and we were unable to recover it. 00:25:04.678 [2024-07-12 17:14:04.091794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.678 [2024-07-12 17:14:04.091827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.678 qpair failed and we were unable to recover it. 00:25:04.678 [2024-07-12 17:14:04.091959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.678 [2024-07-12 17:14:04.091991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.678 qpair failed and we were unable to recover it. 00:25:04.678 [2024-07-12 17:14:04.092154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.678 [2024-07-12 17:14:04.092227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.678 qpair failed and we were unable to recover it. 00:25:04.678 [2024-07-12 17:14:04.092445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.678 [2024-07-12 17:14:04.092519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.678 qpair failed and we were unable to recover it. 00:25:04.678 [2024-07-12 17:14:04.092773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.678 [2024-07-12 17:14:04.092816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.678 qpair failed and we were unable to recover it. 00:25:04.678 [2024-07-12 17:14:04.092956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.678 [2024-07-12 17:14:04.092988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.678 qpair failed and we were unable to recover it. 00:25:04.678 [2024-07-12 17:14:04.093224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.678 [2024-07-12 17:14:04.093289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.678 qpair failed and we were unable to recover it. 
00:25:04.678 [2024-07-12 17:14:04.093544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.678 [2024-07-12 17:14:04.093609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.678 qpair failed and we were unable to recover it. 00:25:04.678 [2024-07-12 17:14:04.093863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.678 [2024-07-12 17:14:04.093910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.678 qpair failed and we were unable to recover it. 00:25:04.678 [2024-07-12 17:14:04.094063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.678 [2024-07-12 17:14:04.094094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.678 qpair failed and we were unable to recover it. 00:25:04.678 [2024-07-12 17:14:04.094247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.678 [2024-07-12 17:14:04.094279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.678 qpair failed and we were unable to recover it. 00:25:04.678 [2024-07-12 17:14:04.094452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.678 [2024-07-12 17:14:04.094482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.679 qpair failed and we were unable to recover it. 00:25:04.679 [2024-07-12 17:14:04.094621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.679 [2024-07-12 17:14:04.094653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.679 qpair failed and we were unable to recover it. 00:25:04.679 [2024-07-12 17:14:04.094836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.679 [2024-07-12 17:14:04.094867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.679 qpair failed and we were unable to recover it. 00:25:04.679 [2024-07-12 17:14:04.094991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.679 [2024-07-12 17:14:04.095021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.679 qpair failed and we were unable to recover it. 00:25:04.679 [2024-07-12 17:14:04.095181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.679 [2024-07-12 17:14:04.095212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.679 qpair failed and we were unable to recover it. 00:25:04.679 [2024-07-12 17:14:04.095344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.679 [2024-07-12 17:14:04.095376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.679 qpair failed and we were unable to recover it. 
00:25:04.679 [2024-07-12 17:14:04.095522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.679 [2024-07-12 17:14:04.095564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.679 qpair failed and we were unable to recover it. 00:25:04.679 [2024-07-12 17:14:04.095687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.679 [2024-07-12 17:14:04.095721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.679 qpair failed and we were unable to recover it. 00:25:04.679 [2024-07-12 17:14:04.095861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.679 [2024-07-12 17:14:04.095892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.679 qpair failed and we were unable to recover it. 00:25:04.679 [2024-07-12 17:14:04.096034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.679 [2024-07-12 17:14:04.096063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.679 qpair failed and we were unable to recover it. 00:25:04.679 [2024-07-12 17:14:04.096281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.679 [2024-07-12 17:14:04.096311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.679 qpair failed and we were unable to recover it. 00:25:04.679 [2024-07-12 17:14:04.096446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.679 [2024-07-12 17:14:04.096476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.679 qpair failed and we were unable to recover it. 00:25:04.679 [2024-07-12 17:14:04.096657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.679 [2024-07-12 17:14:04.096702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.679 qpair failed and we were unable to recover it. 00:25:04.679 [2024-07-12 17:14:04.096828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.679 [2024-07-12 17:14:04.096860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.679 qpair failed and we were unable to recover it. 00:25:04.679 [2024-07-12 17:14:04.097029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.679 [2024-07-12 17:14:04.097059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.679 qpair failed and we were unable to recover it. 00:25:04.679 [2024-07-12 17:14:04.097251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.679 [2024-07-12 17:14:04.097282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.679 qpair failed and we were unable to recover it. 
00:25:04.679 [2024-07-12 17:14:04.097446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.679 [2024-07-12 17:14:04.097476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.679 qpair failed and we were unable to recover it. 00:25:04.679 [2024-07-12 17:14:04.097637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.679 [2024-07-12 17:14:04.097668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.679 qpair failed and we were unable to recover it. 00:25:04.679 [2024-07-12 17:14:04.097866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.679 [2024-07-12 17:14:04.097896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.679 qpair failed and we were unable to recover it. 00:25:04.679 [2024-07-12 17:14:04.097993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.679 [2024-07-12 17:14:04.098032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.679 qpair failed and we were unable to recover it. 00:25:04.679 [2024-07-12 17:14:04.098183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.679 [2024-07-12 17:14:04.098214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.679 qpair failed and we were unable to recover it. 00:25:04.679 [2024-07-12 17:14:04.098344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.679 [2024-07-12 17:14:04.098373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.679 qpair failed and we were unable to recover it. 00:25:04.679 [2024-07-12 17:14:04.098553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.679 [2024-07-12 17:14:04.098592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.679 qpair failed and we were unable to recover it. 00:25:04.679 [2024-07-12 17:14:04.098751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.679 [2024-07-12 17:14:04.098782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.679 qpair failed and we were unable to recover it. 00:25:04.679 [2024-07-12 17:14:04.098942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.679 [2024-07-12 17:14:04.098983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.679 qpair failed and we were unable to recover it. 00:25:04.679 [2024-07-12 17:14:04.099152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.679 [2024-07-12 17:14:04.099183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.679 qpair failed and we were unable to recover it. 
00:25:04.679 [2024-07-12 17:14:04.099324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.679 [2024-07-12 17:14:04.099354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.679 qpair failed and we were unable to recover it. 00:25:04.679 [2024-07-12 17:14:04.099535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.679 [2024-07-12 17:14:04.099566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.679 qpair failed and we were unable to recover it. 00:25:04.679 [2024-07-12 17:14:04.099670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.679 [2024-07-12 17:14:04.099701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.679 qpair failed and we were unable to recover it. 00:25:04.679 [2024-07-12 17:14:04.099853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.679 [2024-07-12 17:14:04.099884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.679 qpair failed and we were unable to recover it. 00:25:04.679 [2024-07-12 17:14:04.100033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.679 [2024-07-12 17:14:04.100064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.679 qpair failed and we were unable to recover it. 00:25:04.679 [2024-07-12 17:14:04.100218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.679 [2024-07-12 17:14:04.100248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.679 qpair failed and we were unable to recover it. 00:25:04.679 [2024-07-12 17:14:04.100385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.679 [2024-07-12 17:14:04.100415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.679 qpair failed and we were unable to recover it. 00:25:04.679 [2024-07-12 17:14:04.100636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.679 [2024-07-12 17:14:04.100667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.679 qpair failed and we were unable to recover it. 00:25:04.679 [2024-07-12 17:14:04.100859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.679 [2024-07-12 17:14:04.100890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.679 qpair failed and we were unable to recover it. 00:25:04.679 [2024-07-12 17:14:04.101052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.679 [2024-07-12 17:14:04.101083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.679 qpair failed and we were unable to recover it. 
00:25:04.679 [2024-07-12 17:14:04.101260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.679 [2024-07-12 17:14:04.101291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.679 qpair failed and we were unable to recover it. 00:25:04.679 [2024-07-12 17:14:04.101419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.679 [2024-07-12 17:14:04.101450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.679 qpair failed and we were unable to recover it. 00:25:04.679 [2024-07-12 17:14:04.101545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.679 [2024-07-12 17:14:04.101587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.679 qpair failed and we were unable to recover it. 00:25:04.679 [2024-07-12 17:14:04.101792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.679 [2024-07-12 17:14:04.101824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.679 qpair failed and we were unable to recover it. 00:25:04.679 [2024-07-12 17:14:04.101964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.679 [2024-07-12 17:14:04.101995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.679 qpair failed and we were unable to recover it. 00:25:04.679 [2024-07-12 17:14:04.102126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.680 [2024-07-12 17:14:04.102156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.680 qpair failed and we were unable to recover it. 00:25:04.680 [2024-07-12 17:14:04.102341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.680 [2024-07-12 17:14:04.102382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.680 qpair failed and we were unable to recover it. 00:25:04.680 [2024-07-12 17:14:04.102570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.680 [2024-07-12 17:14:04.102601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.680 qpair failed and we were unable to recover it. 00:25:04.680 [2024-07-12 17:14:04.102730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.680 [2024-07-12 17:14:04.102764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.680 qpair failed and we were unable to recover it. 00:25:04.680 [2024-07-12 17:14:04.102899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.680 [2024-07-12 17:14:04.102929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.680 qpair failed and we were unable to recover it. 
00:25:04.680 [2024-07-12 17:14:04.103087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.680 [2024-07-12 17:14:04.103118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.680 qpair failed and we were unable to recover it. 00:25:04.680 [2024-07-12 17:14:04.103273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.680 [2024-07-12 17:14:04.103303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.680 qpair failed and we were unable to recover it. 00:25:04.680 [2024-07-12 17:14:04.103433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.680 [2024-07-12 17:14:04.103475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.680 qpair failed and we were unable to recover it. 00:25:04.680 [2024-07-12 17:14:04.103658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.680 [2024-07-12 17:14:04.103696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.680 qpair failed and we were unable to recover it. 00:25:04.680 [2024-07-12 17:14:04.103920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.680 [2024-07-12 17:14:04.103951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.680 qpair failed and we were unable to recover it. 00:25:04.680 [2024-07-12 17:14:04.104100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.680 [2024-07-12 17:14:04.104131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.680 qpair failed and we were unable to recover it. 00:25:04.680 [2024-07-12 17:14:04.104301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.680 [2024-07-12 17:14:04.104336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.680 qpair failed and we were unable to recover it. 00:25:04.680 [2024-07-12 17:14:04.104515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.680 [2024-07-12 17:14:04.104546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.680 qpair failed and we were unable to recover it. 00:25:04.680 [2024-07-12 17:14:04.104743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.680 [2024-07-12 17:14:04.104775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.680 qpair failed and we were unable to recover it. 00:25:04.680 [2024-07-12 17:14:04.104913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.680 [2024-07-12 17:14:04.104943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.680 qpair failed and we were unable to recover it. 
00:25:04.680 [2024-07-12 17:14:04.105159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.680 [2024-07-12 17:14:04.105190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.680 qpair failed and we were unable to recover it. 00:25:04.680 [2024-07-12 17:14:04.105364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.680 [2024-07-12 17:14:04.105394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.680 qpair failed and we were unable to recover it. 00:25:04.680 [2024-07-12 17:14:04.105552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.680 [2024-07-12 17:14:04.105582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.680 qpair failed and we were unable to recover it. 00:25:04.680 [2024-07-12 17:14:04.105759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.680 [2024-07-12 17:14:04.105799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.680 qpair failed and we were unable to recover it. 00:25:04.680 [2024-07-12 17:14:04.105939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.680 [2024-07-12 17:14:04.105970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.680 qpair failed and we were unable to recover it. 00:25:04.680 [2024-07-12 17:14:04.106100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.680 [2024-07-12 17:14:04.106131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.680 qpair failed and we were unable to recover it. 00:25:04.680 [2024-07-12 17:14:04.106260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.680 [2024-07-12 17:14:04.106290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.680 qpair failed and we were unable to recover it. 00:25:04.680 [2024-07-12 17:14:04.106418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.680 [2024-07-12 17:14:04.106449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.680 qpair failed and we were unable to recover it. 00:25:04.680 [2024-07-12 17:14:04.106669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.680 [2024-07-12 17:14:04.106700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.680 qpair failed and we were unable to recover it. 00:25:04.680 [2024-07-12 17:14:04.106838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.680 [2024-07-12 17:14:04.106870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.680 qpair failed and we were unable to recover it. 
00:25:04.680 [2024-07-12 17:14:04.106984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.680 [2024-07-12 17:14:04.107023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.680 qpair failed and we were unable to recover it. 00:25:04.680 [2024-07-12 17:14:04.107259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.680 [2024-07-12 17:14:04.107289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.680 qpair failed and we were unable to recover it. 00:25:04.680 [2024-07-12 17:14:04.107520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.680 [2024-07-12 17:14:04.107550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.680 qpair failed and we were unable to recover it. 00:25:04.680 [2024-07-12 17:14:04.107729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.680 [2024-07-12 17:14:04.107771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.680 qpair failed and we were unable to recover it. 00:25:04.680 [2024-07-12 17:14:04.107911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.680 [2024-07-12 17:14:04.107942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.680 qpair failed and we were unable to recover it. 00:25:04.680 [2024-07-12 17:14:04.108184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.680 [2024-07-12 17:14:04.108215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.680 qpair failed and we were unable to recover it. 00:25:04.680 [2024-07-12 17:14:04.108392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.680 [2024-07-12 17:14:04.108422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.680 qpair failed and we were unable to recover it. 00:25:04.680 [2024-07-12 17:14:04.108578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.680 [2024-07-12 17:14:04.108609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.680 qpair failed and we were unable to recover it. 00:25:04.680 [2024-07-12 17:14:04.108731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.680 [2024-07-12 17:14:04.108769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.680 qpair failed and we were unable to recover it. 00:25:04.680 [2024-07-12 17:14:04.108916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.680 [2024-07-12 17:14:04.108946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.680 qpair failed and we were unable to recover it. 
00:25:04.680 [2024-07-12 17:14:04.109101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.680 [2024-07-12 17:14:04.109135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.680 qpair failed and we were unable to recover it. 00:25:04.680 [2024-07-12 17:14:04.109266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.680 [2024-07-12 17:14:04.109295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.680 qpair failed and we were unable to recover it. 00:25:04.680 [2024-07-12 17:14:04.109460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.680 [2024-07-12 17:14:04.109502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.681 qpair failed and we were unable to recover it. 00:25:04.681 [2024-07-12 17:14:04.109675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.681 [2024-07-12 17:14:04.109706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.681 qpair failed and we were unable to recover it. 00:25:04.681 [2024-07-12 17:14:04.109870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.681 [2024-07-12 17:14:04.109901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.681 qpair failed and we were unable to recover it. 00:25:04.681 [2024-07-12 17:14:04.110134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.681 [2024-07-12 17:14:04.110165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.681 qpair failed and we were unable to recover it. 00:25:04.681 [2024-07-12 17:14:04.110349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.681 [2024-07-12 17:14:04.110380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.681 qpair failed and we were unable to recover it. 00:25:04.681 [2024-07-12 17:14:04.110576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.681 [2024-07-12 17:14:04.110607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.681 qpair failed and we were unable to recover it. 00:25:04.681 [2024-07-12 17:14:04.110803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.681 [2024-07-12 17:14:04.110835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.681 qpair failed and we were unable to recover it. 00:25:04.681 [2024-07-12 17:14:04.110935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.681 [2024-07-12 17:14:04.110965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.681 qpair failed and we were unable to recover it. 
00:25:04.681 [2024-07-12 17:14:04.111159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.681 [2024-07-12 17:14:04.111190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.681 qpair failed and we were unable to recover it. 00:25:04.681 [2024-07-12 17:14:04.111382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.681 [2024-07-12 17:14:04.111413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.681 qpair failed and we were unable to recover it. 00:25:04.681 [2024-07-12 17:14:04.111569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.681 [2024-07-12 17:14:04.111599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.681 qpair failed and we were unable to recover it. 00:25:04.681 [2024-07-12 17:14:04.111761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.681 [2024-07-12 17:14:04.111792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.681 qpair failed and we were unable to recover it. 00:25:04.681 [2024-07-12 17:14:04.111920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.681 [2024-07-12 17:14:04.111950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.681 qpair failed and we were unable to recover it. 00:25:04.681 [2024-07-12 17:14:04.112071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.681 [2024-07-12 17:14:04.112102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.681 qpair failed and we were unable to recover it. 00:25:04.681 [2024-07-12 17:14:04.112234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.681 [2024-07-12 17:14:04.112268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.681 qpair failed and we were unable to recover it. 00:25:04.681 [2024-07-12 17:14:04.112381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.681 [2024-07-12 17:14:04.112421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.681 qpair failed and we were unable to recover it. 00:25:04.681 [2024-07-12 17:14:04.112556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.681 [2024-07-12 17:14:04.112587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.681 qpair failed and we were unable to recover it. 00:25:04.681 [2024-07-12 17:14:04.112750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.681 [2024-07-12 17:14:04.112786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.681 qpair failed and we were unable to recover it. 
00:25:04.681 [2024-07-12 17:14:04.112950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.681 [2024-07-12 17:14:04.112980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.681 qpair failed and we were unable to recover it. 00:25:04.681 [2024-07-12 17:14:04.113109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.681 [2024-07-12 17:14:04.113139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.681 qpair failed and we were unable to recover it. 00:25:04.681 [2024-07-12 17:14:04.113312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.681 [2024-07-12 17:14:04.113344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.681 qpair failed and we were unable to recover it. 00:25:04.681 [2024-07-12 17:14:04.113526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.681 [2024-07-12 17:14:04.113556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.681 qpair failed and we were unable to recover it. 00:25:04.681 [2024-07-12 17:14:04.113736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.681 [2024-07-12 17:14:04.113790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.681 qpair failed and we were unable to recover it. 00:25:04.681 [2024-07-12 17:14:04.113947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.681 [2024-07-12 17:14:04.113978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.681 qpair failed and we were unable to recover it. 00:25:04.681 [2024-07-12 17:14:04.114131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.681 [2024-07-12 17:14:04.114162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.681 qpair failed and we were unable to recover it. 00:25:04.681 [2024-07-12 17:14:04.114267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.681 [2024-07-12 17:14:04.114297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.681 qpair failed and we were unable to recover it. 00:25:04.681 [2024-07-12 17:14:04.114457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.681 [2024-07-12 17:14:04.114487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.681 qpair failed and we were unable to recover it. 00:25:04.681 [2024-07-12 17:14:04.114662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.681 [2024-07-12 17:14:04.114692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.681 qpair failed and we were unable to recover it. 
00:25:04.681 [2024-07-12 17:14:04.114902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.681 [2024-07-12 17:14:04.114934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.681 qpair failed and we were unable to recover it. 00:25:04.681 [2024-07-12 17:14:04.115100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.681 [2024-07-12 17:14:04.115131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.681 qpair failed and we were unable to recover it. 00:25:04.681 [2024-07-12 17:14:04.115341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.681 [2024-07-12 17:14:04.115372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.681 qpair failed and we were unable to recover it. 00:25:04.681 [2024-07-12 17:14:04.115554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.681 [2024-07-12 17:14:04.115584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.681 qpair failed and we were unable to recover it. 00:25:04.681 [2024-07-12 17:14:04.115690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.681 [2024-07-12 17:14:04.115720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.681 qpair failed and we were unable to recover it. 00:25:04.681 [2024-07-12 17:14:04.115907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.681 [2024-07-12 17:14:04.115938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.681 qpair failed and we were unable to recover it. 00:25:04.681 [2024-07-12 17:14:04.116061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.681 [2024-07-12 17:14:04.116092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.681 qpair failed and we were unable to recover it. 00:25:04.681 [2024-07-12 17:14:04.116265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.681 [2024-07-12 17:14:04.116305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.681 qpair failed and we were unable to recover it. 00:25:04.681 [2024-07-12 17:14:04.116461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.681 [2024-07-12 17:14:04.116492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.681 qpair failed and we were unable to recover it. 00:25:04.681 [2024-07-12 17:14:04.116628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.681 [2024-07-12 17:14:04.116658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.681 qpair failed and we were unable to recover it. 
00:25:04.681 [2024-07-12 17:14:04.116817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.682 [2024-07-12 17:14:04.116849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.682 qpair failed and we were unable to recover it. 00:25:04.682 [2024-07-12 17:14:04.117018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.682 [2024-07-12 17:14:04.117049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.682 qpair failed and we were unable to recover it. 00:25:04.682 [2024-07-12 17:14:04.117189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.682 [2024-07-12 17:14:04.117220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.682 qpair failed and we were unable to recover it. 00:25:04.682 [2024-07-12 17:14:04.117364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.682 [2024-07-12 17:14:04.117395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.682 qpair failed and we were unable to recover it. 00:25:04.682 [2024-07-12 17:14:04.117528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.682 [2024-07-12 17:14:04.117559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.682 qpair failed and we were unable to recover it. 00:25:04.682 [2024-07-12 17:14:04.117694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.682 [2024-07-12 17:14:04.117725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.682 qpair failed and we were unable to recover it. 00:25:04.682 [2024-07-12 17:14:04.117883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.682 [2024-07-12 17:14:04.117914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.682 qpair failed and we were unable to recover it. 00:25:04.682 [2024-07-12 17:14:04.118040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.682 [2024-07-12 17:14:04.118071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.682 qpair failed and we were unable to recover it. 00:25:04.682 [2024-07-12 17:14:04.118216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.682 [2024-07-12 17:14:04.118246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.682 qpair failed and we were unable to recover it. 00:25:04.682 [2024-07-12 17:14:04.118377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.682 [2024-07-12 17:14:04.118407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.682 qpair failed and we were unable to recover it. 
00:25:04.682 [2024-07-12 17:14:04.118538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.682 [2024-07-12 17:14:04.118568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.682 qpair failed and we were unable to recover it. 00:25:04.682 [2024-07-12 17:14:04.118710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.682 [2024-07-12 17:14:04.118750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.682 qpair failed and we were unable to recover it. 00:25:04.682 [2024-07-12 17:14:04.118897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.682 [2024-07-12 17:14:04.118928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.682 qpair failed and we were unable to recover it. 00:25:04.682 [2024-07-12 17:14:04.119097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.682 [2024-07-12 17:14:04.119128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.682 qpair failed and we were unable to recover it. 00:25:04.682 [2024-07-12 17:14:04.119238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.682 [2024-07-12 17:14:04.119268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.682 qpair failed and we were unable to recover it. 00:25:04.682 [2024-07-12 17:14:04.119411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.682 [2024-07-12 17:14:04.119441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.682 qpair failed and we were unable to recover it. 00:25:04.682 [2024-07-12 17:14:04.119590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.682 [2024-07-12 17:14:04.119626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.682 qpair failed and we were unable to recover it. 00:25:04.682 [2024-07-12 17:14:04.119731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.682 [2024-07-12 17:14:04.119769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.682 qpair failed and we were unable to recover it. 00:25:04.682 [2024-07-12 17:14:04.119943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.682 [2024-07-12 17:14:04.119973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.682 qpair failed and we were unable to recover it. 00:25:04.682 [2024-07-12 17:14:04.120141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.682 [2024-07-12 17:14:04.120171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.682 qpair failed and we were unable to recover it. 
00:25:04.682 [2024-07-12 17:14:04.120279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.682 [2024-07-12 17:14:04.120309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.682 qpair failed and we were unable to recover it. 00:25:04.682 [2024-07-12 17:14:04.120476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.682 [2024-07-12 17:14:04.120507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.682 qpair failed and we were unable to recover it. 00:25:04.682 [2024-07-12 17:14:04.120675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.682 [2024-07-12 17:14:04.120705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.682 qpair failed and we were unable to recover it. 00:25:04.682 [2024-07-12 17:14:04.120887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.682 [2024-07-12 17:14:04.120918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.682 qpair failed and we were unable to recover it. 00:25:04.682 [2024-07-12 17:14:04.121107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.682 [2024-07-12 17:14:04.121138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.682 qpair failed and we were unable to recover it. 00:25:04.682 [2024-07-12 17:14:04.121290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.682 [2024-07-12 17:14:04.121321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.682 qpair failed and we were unable to recover it. 00:25:04.682 [2024-07-12 17:14:04.121492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.682 [2024-07-12 17:14:04.121533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.682 qpair failed and we were unable to recover it. 00:25:04.682 [2024-07-12 17:14:04.121705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.682 [2024-07-12 17:14:04.121736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.682 qpair failed and we were unable to recover it. 00:25:04.682 [2024-07-12 17:14:04.121933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.682 [2024-07-12 17:14:04.121964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.682 qpair failed and we were unable to recover it. 00:25:04.682 [2024-07-12 17:14:04.122124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.682 [2024-07-12 17:14:04.122154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.682 qpair failed and we were unable to recover it. 
00:25:04.682 [2024-07-12 17:14:04.122326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.682 [2024-07-12 17:14:04.122356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.682 qpair failed and we were unable to recover it. 00:25:04.682 [2024-07-12 17:14:04.122497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.682 [2024-07-12 17:14:04.122528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.682 qpair failed and we were unable to recover it. 00:25:04.682 [2024-07-12 17:14:04.122662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.682 [2024-07-12 17:14:04.122693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.682 qpair failed and we were unable to recover it. 00:25:04.682 [2024-07-12 17:14:04.122877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.682 [2024-07-12 17:14:04.122909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.682 qpair failed and we were unable to recover it. 00:25:04.682 [2024-07-12 17:14:04.123051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.682 [2024-07-12 17:14:04.123082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.682 qpair failed and we were unable to recover it. 00:25:04.682 [2024-07-12 17:14:04.123252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.682 [2024-07-12 17:14:04.123283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.682 qpair failed and we were unable to recover it. 00:25:04.682 [2024-07-12 17:14:04.123412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.682 [2024-07-12 17:14:04.123442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.682 qpair failed and we were unable to recover it. 00:25:04.682 [2024-07-12 17:14:04.123585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.682 [2024-07-12 17:14:04.123616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.682 qpair failed and we were unable to recover it. 00:25:04.682 [2024-07-12 17:14:04.123756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.682 [2024-07-12 17:14:04.123787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.682 qpair failed and we were unable to recover it. 00:25:04.682 [2024-07-12 17:14:04.123926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.682 [2024-07-12 17:14:04.123957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.682 qpair failed and we were unable to recover it. 
00:25:04.682 [2024-07-12 17:14:04.124094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.682 [2024-07-12 17:14:04.124125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.683 qpair failed and we were unable to recover it. 00:25:04.683 [2024-07-12 17:14:04.124303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.683 [2024-07-12 17:14:04.124333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.683 qpair failed and we were unable to recover it. 00:25:04.683 [2024-07-12 17:14:04.124506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.683 [2024-07-12 17:14:04.124536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.683 qpair failed and we were unable to recover it. 00:25:04.683 [2024-07-12 17:14:04.124682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.683 [2024-07-12 17:14:04.124712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.683 qpair failed and we were unable to recover it. 00:25:04.683 [2024-07-12 17:14:04.124872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.683 [2024-07-12 17:14:04.124902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.683 qpair failed and we were unable to recover it. 00:25:04.683 [2024-07-12 17:14:04.125050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.683 [2024-07-12 17:14:04.125081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.683 qpair failed and we were unable to recover it. 00:25:04.683 [2024-07-12 17:14:04.125220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.683 [2024-07-12 17:14:04.125250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.683 qpair failed and we were unable to recover it. 00:25:04.683 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1229057 Killed "${NVMF_APP[@]}" "$@" 00:25:04.683 [2024-07-12 17:14:04.125362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.683 [2024-07-12 17:14:04.125393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.683 qpair failed and we were unable to recover it. 00:25:04.683 [2024-07-12 17:14:04.125583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.683 [2024-07-12 17:14:04.125625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.683 qpair failed and we were unable to recover it. 
00:25:04.683 [2024-07-12 17:14:04.125760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.683 17:14:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:25:04.683 [2024-07-12 17:14:04.125799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.683 qpair failed and we were unable to recover it. 00:25:04.683 [2024-07-12 17:14:04.125943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.683 17:14:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:25:04.683 [2024-07-12 17:14:04.125973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.683 qpair failed and we were unable to recover it. 00:25:04.683 17:14:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:04.683 [2024-07-12 17:14:04.126145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.683 [2024-07-12 17:14:04.126175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.683 qpair failed and we were unable to recover it. 00:25:04.683 17:14:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:04.683 [2024-07-12 17:14:04.126317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.683 [2024-07-12 17:14:04.126347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.683 qpair failed and we were unable to recover it. 00:25:04.683 17:14:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:04.683 [2024-07-12 17:14:04.126515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.683 [2024-07-12 17:14:04.126545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.683 qpair failed and we were unable to recover it. 00:25:04.683 [2024-07-12 17:14:04.126691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.683 [2024-07-12 17:14:04.126722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.683 qpair failed and we were unable to recover it. 00:25:04.683 [2024-07-12 17:14:04.126863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.683 [2024-07-12 17:14:04.126894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.683 qpair failed and we were unable to recover it. 00:25:04.683 [2024-07-12 17:14:04.127035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.683 [2024-07-12 17:14:04.127066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.683 qpair failed and we were unable to recover it. 
00:25:04.683 [2024-07-12 17:14:04.127230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.683 [2024-07-12 17:14:04.127261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.683 qpair failed and we were unable to recover it. 00:25:04.683 [2024-07-12 17:14:04.127406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.683 [2024-07-12 17:14:04.127437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.683 qpair failed and we were unable to recover it. 00:25:04.683 [2024-07-12 17:14:04.127578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.683 [2024-07-12 17:14:04.127608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.683 qpair failed and we were unable to recover it. 00:25:04.683 [2024-07-12 17:14:04.127785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.683 [2024-07-12 17:14:04.127816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.683 qpair failed and we were unable to recover it. 00:25:04.683 [2024-07-12 17:14:04.127929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.683 [2024-07-12 17:14:04.127960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.683 qpair failed and we were unable to recover it. 00:25:04.683 [2024-07-12 17:14:04.128132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.683 [2024-07-12 17:14:04.128163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.683 qpair failed and we were unable to recover it. 00:25:04.683 [2024-07-12 17:14:04.128266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.683 [2024-07-12 17:14:04.128297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.683 qpair failed and we were unable to recover it. 00:25:04.683 [2024-07-12 17:14:04.128429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.683 [2024-07-12 17:14:04.128459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.683 qpair failed and we were unable to recover it. 00:25:04.683 [2024-07-12 17:14:04.128593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.683 [2024-07-12 17:14:04.128623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.683 qpair failed and we were unable to recover it. 00:25:04.683 [2024-07-12 17:14:04.128747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.683 [2024-07-12 17:14:04.128779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.683 qpair failed and we were unable to recover it. 
00:25:04.683 [2024-07-12 17:14:04.128911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.683 [2024-07-12 17:14:04.128946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.683 qpair failed and we were unable to recover it. 00:25:04.683 [2024-07-12 17:14:04.129091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.683 [2024-07-12 17:14:04.129121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.683 qpair failed and we were unable to recover it. 00:25:04.683 [2024-07-12 17:14:04.129286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.683 [2024-07-12 17:14:04.129317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.683 qpair failed and we were unable to recover it. 00:25:04.683 [2024-07-12 17:14:04.129423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.683 [2024-07-12 17:14:04.129463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.683 qpair failed and we were unable to recover it. 00:25:04.683 [2024-07-12 17:14:04.129661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.683 [2024-07-12 17:14:04.129692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.683 qpair failed and we were unable to recover it. 00:25:04.683 [2024-07-12 17:14:04.129892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.683 [2024-07-12 17:14:04.129924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.683 qpair failed and we were unable to recover it. 00:25:04.683 17:14:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1229636 00:25:04.683 17:14:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 [2024-07-12 17:14:04.130098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.683 [2024-07-12 17:14:04.130130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.683 qpair failed and we were unable to recover it. 17:14:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1229636 00:25:04.683 [2024-07-12 17:14:04.130262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.683 [2024-07-12 17:14:04.130293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.683 qpair failed and we were unable to recover it. 
00:25:04.683 17:14:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1229636 ']' 00:25:04.683 [2024-07-12 17:14:04.130427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.683 [2024-07-12 17:14:04.130458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.683 qpair failed and we were unable to recover it. 00:25:04.683 17:14:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:04.683 [2024-07-12 17:14:04.130576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.683 [2024-07-12 17:14:04.130606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.684 qpair failed and we were unable to recover it. 00:25:04.684 17:14:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:04.684 17:14:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:04.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:04.684 [2024-07-12 17:14:04.130782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.684 [2024-07-12 17:14:04.130816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.684 qpair failed and we were unable to recover it. 00:25:04.684 17:14:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:04.684 [2024-07-12 17:14:04.130946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.684 [2024-07-12 17:14:04.130977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.684 qpair failed and we were unable to recover it. 00:25:04.684 17:14:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:04.684 [2024-07-12 17:14:04.131124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.684 [2024-07-12 17:14:04.131154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.684 qpair failed and we were unable to recover it. 00:25:04.684 [2024-07-12 17:14:04.131292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.684 [2024-07-12 17:14:04.131322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.684 qpair failed and we were unable to recover it. 00:25:04.684 [2024-07-12 17:14:04.131490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.684 [2024-07-12 17:14:04.131521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.684 qpair failed and we were unable to recover it. 
00:25:04.684 [2024-07-12 17:14:04.131635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.684 [2024-07-12 17:14:04.131666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.684 qpair failed and we were unable to recover it. 00:25:04.684 [2024-07-12 17:14:04.131797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.684 [2024-07-12 17:14:04.131828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.684 qpair failed and we were unable to recover it. 00:25:04.684 [2024-07-12 17:14:04.131972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.684 [2024-07-12 17:14:04.132003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.684 qpair failed and we were unable to recover it. 00:25:04.684 [2024-07-12 17:14:04.132136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.684 [2024-07-12 17:14:04.132167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.684 qpair failed and we were unable to recover it. 00:25:04.684 [2024-07-12 17:14:04.132285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.684 [2024-07-12 17:14:04.132316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.684 qpair failed and we were unable to recover it. 00:25:04.684 [2024-07-12 17:14:04.132456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.684 [2024-07-12 17:14:04.132486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.684 qpair failed and we were unable to recover it. 00:25:04.684 [2024-07-12 17:14:04.132655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.684 [2024-07-12 17:14:04.132684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.684 qpair failed and we were unable to recover it. 00:25:04.684 [2024-07-12 17:14:04.132829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.684 [2024-07-12 17:14:04.132864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.684 qpair failed and we were unable to recover it. 00:25:04.684 [2024-07-12 17:14:04.132987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.684 [2024-07-12 17:14:04.133016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.684 qpair failed and we were unable to recover it. 00:25:04.684 [2024-07-12 17:14:04.133158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.684 [2024-07-12 17:14:04.133189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.684 qpair failed and we were unable to recover it. 
00:25:04.684 [2024-07-12 17:14:04.133352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.684 [2024-07-12 17:14:04.133383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.684 qpair failed and we were unable to recover it. 00:25:04.684 [2024-07-12 17:14:04.133495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.684 [2024-07-12 17:14:04.133524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.684 qpair failed and we were unable to recover it. 00:25:04.684 [2024-07-12 17:14:04.133671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.684 [2024-07-12 17:14:04.133701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.684 qpair failed and we were unable to recover it. 00:25:04.684 [2024-07-12 17:14:04.133828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.684 [2024-07-12 17:14:04.133859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.684 qpair failed and we were unable to recover it. 00:25:04.684 [2024-07-12 17:14:04.133979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.684 [2024-07-12 17:14:04.134009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.684 qpair failed and we were unable to recover it. 00:25:04.684 [2024-07-12 17:14:04.134148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.684 [2024-07-12 17:14:04.134178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.684 qpair failed and we were unable to recover it. 00:25:04.684 [2024-07-12 17:14:04.134317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.684 [2024-07-12 17:14:04.134348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.684 qpair failed and we were unable to recover it. 00:25:04.684 [2024-07-12 17:14:04.134515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.684 [2024-07-12 17:14:04.134546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.684 qpair failed and we were unable to recover it. 00:25:04.684 [2024-07-12 17:14:04.134683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.684 [2024-07-12 17:14:04.134714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.684 qpair failed and we were unable to recover it. 00:25:04.684 [2024-07-12 17:14:04.134862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.684 [2024-07-12 17:14:04.134893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.684 qpair failed and we were unable to recover it. 
00:25:04.684 [2024-07-12 17:14:04.135038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.684 [2024-07-12 17:14:04.135068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.684 qpair failed and we were unable to recover it. 00:25:04.684 [2024-07-12 17:14:04.135207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.684 [2024-07-12 17:14:04.135238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.684 qpair failed and we were unable to recover it. 00:25:04.684 [2024-07-12 17:14:04.135398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.684 [2024-07-12 17:14:04.135429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.684 qpair failed and we were unable to recover it. 00:25:04.684 [2024-07-12 17:14:04.135580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.684 [2024-07-12 17:14:04.135611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.684 qpair failed and we were unable to recover it. 00:25:04.684 [2024-07-12 17:14:04.135762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.684 [2024-07-12 17:14:04.135794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.684 qpair failed and we were unable to recover it. 00:25:04.685 [2024-07-12 17:14:04.135934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.685 [2024-07-12 17:14:04.135965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.685 qpair failed and we were unable to recover it. 00:25:04.685 [2024-07-12 17:14:04.136087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.685 [2024-07-12 17:14:04.136117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.685 qpair failed and we were unable to recover it. 00:25:04.685 [2024-07-12 17:14:04.136297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.685 [2024-07-12 17:14:04.136328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.685 qpair failed and we were unable to recover it. 00:25:04.685 [2024-07-12 17:14:04.136540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.685 [2024-07-12 17:14:04.136572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.685 qpair failed and we were unable to recover it. 00:25:04.685 [2024-07-12 17:14:04.136717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.685 [2024-07-12 17:14:04.136756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.685 qpair failed and we were unable to recover it. 
00:25:04.685 [2024-07-12 17:14:04.136894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.685 [2024-07-12 17:14:04.136925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.685 qpair failed and we were unable to recover it. 00:25:04.685 [2024-07-12 17:14:04.137039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.685 [2024-07-12 17:14:04.137070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.685 qpair failed and we were unable to recover it. 00:25:04.685 [2024-07-12 17:14:04.137240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.685 [2024-07-12 17:14:04.137271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.685 qpair failed and we were unable to recover it. 00:25:04.685 [2024-07-12 17:14:04.137383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.685 [2024-07-12 17:14:04.137414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.685 qpair failed and we were unable to recover it. 00:25:04.685 [2024-07-12 17:14:04.137515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.685 [2024-07-12 17:14:04.137545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.685 qpair failed and we were unable to recover it. 00:25:04.685 [2024-07-12 17:14:04.137697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.685 [2024-07-12 17:14:04.137727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.685 qpair failed and we were unable to recover it. 00:25:04.685 [2024-07-12 17:14:04.137873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.685 [2024-07-12 17:14:04.137904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.685 qpair failed and we were unable to recover it. 00:25:04.685 [2024-07-12 17:14:04.138014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.685 [2024-07-12 17:14:04.138043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.685 qpair failed and we were unable to recover it. 00:25:04.685 [2024-07-12 17:14:04.138218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.685 [2024-07-12 17:14:04.138248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.685 qpair failed and we were unable to recover it. 00:25:04.685 [2024-07-12 17:14:04.138400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.685 [2024-07-12 17:14:04.138431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.685 qpair failed and we were unable to recover it. 
00:25:04.685 [2024-07-12 17:14:04.138552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.685 [2024-07-12 17:14:04.138582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.685 qpair failed and we were unable to recover it. 00:25:04.685 [2024-07-12 17:14:04.138725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.685 [2024-07-12 17:14:04.138763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.685 qpair failed and we were unable to recover it. 00:25:04.685 [2024-07-12 17:14:04.138870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.685 [2024-07-12 17:14:04.138901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.685 qpair failed and we were unable to recover it. 00:25:04.685 [2024-07-12 17:14:04.139045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.685 [2024-07-12 17:14:04.139074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.685 qpair failed and we were unable to recover it. 00:25:04.685 [2024-07-12 17:14:04.139240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.685 [2024-07-12 17:14:04.139270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.685 qpair failed and we were unable to recover it. 00:25:04.685 [2024-07-12 17:14:04.139420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.685 [2024-07-12 17:14:04.139451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.685 qpair failed and we were unable to recover it. 00:25:04.685 [2024-07-12 17:14:04.139619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.685 [2024-07-12 17:14:04.139649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.685 qpair failed and we were unable to recover it. 00:25:04.685 [2024-07-12 17:14:04.139801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.685 [2024-07-12 17:14:04.139832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.685 qpair failed and we were unable to recover it. 00:25:04.685 [2024-07-12 17:14:04.139983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.685 [2024-07-12 17:14:04.140013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.685 qpair failed and we were unable to recover it. 00:25:04.685 [2024-07-12 17:14:04.140186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.685 [2024-07-12 17:14:04.140221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.685 qpair failed and we were unable to recover it. 
00:25:04.685 [2024-07-12 17:14:04.140394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.685 [2024-07-12 17:14:04.140425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.685 qpair failed and we were unable to recover it. 00:25:04.685 [2024-07-12 17:14:04.140563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.685 [2024-07-12 17:14:04.140589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.685 qpair failed and we were unable to recover it. 00:25:04.685 [2024-07-12 17:14:04.140753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.685 [2024-07-12 17:14:04.140781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.685 qpair failed and we were unable to recover it. 00:25:04.685 [2024-07-12 17:14:04.140937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.685 [2024-07-12 17:14:04.140967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.685 qpair failed and we were unable to recover it. 00:25:04.685 [2024-07-12 17:14:04.141106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.685 [2024-07-12 17:14:04.141136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.685 qpair failed and we were unable to recover it. 00:25:04.685 [2024-07-12 17:14:04.141250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.685 [2024-07-12 17:14:04.141280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.685 qpair failed and we were unable to recover it. 00:25:04.685 [2024-07-12 17:14:04.141400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.685 [2024-07-12 17:14:04.141430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.685 qpair failed and we were unable to recover it. 00:25:04.685 [2024-07-12 17:14:04.141532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.685 [2024-07-12 17:14:04.141562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.685 qpair failed and we were unable to recover it. 00:25:04.685 [2024-07-12 17:14:04.141686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.685 [2024-07-12 17:14:04.141716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.685 qpair failed and we were unable to recover it. 00:25:04.686 [2024-07-12 17:14:04.141865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.686 [2024-07-12 17:14:04.141895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.686 qpair failed and we were unable to recover it. 
00:25:04.686 [2024-07-12 17:14:04.142036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.686 [2024-07-12 17:14:04.142066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.686 qpair failed and we were unable to recover it. 00:25:04.686 [2024-07-12 17:14:04.142208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.686 [2024-07-12 17:14:04.142238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.686 qpair failed and we were unable to recover it. 00:25:04.686 [2024-07-12 17:14:04.142376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.686 [2024-07-12 17:14:04.142407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.686 qpair failed and we were unable to recover it. 00:25:04.686 [2024-07-12 17:14:04.142518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.686 [2024-07-12 17:14:04.142548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.686 qpair failed and we were unable to recover it. 00:25:04.686 [2024-07-12 17:14:04.142662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.686 [2024-07-12 17:14:04.142693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.686 qpair failed and we were unable to recover it. 00:25:04.686 [2024-07-12 17:14:04.142848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.686 [2024-07-12 17:14:04.142878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.686 qpair failed and we were unable to recover it. 00:25:04.686 [2024-07-12 17:14:04.142987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.686 [2024-07-12 17:14:04.143016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.686 qpair failed and we were unable to recover it. 00:25:04.686 [2024-07-12 17:14:04.143182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.686 [2024-07-12 17:14:04.143211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.686 qpair failed and we were unable to recover it. 00:25:04.686 [2024-07-12 17:14:04.143361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.686 [2024-07-12 17:14:04.143390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.686 qpair failed and we were unable to recover it. 00:25:04.686 [2024-07-12 17:14:04.143559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.686 [2024-07-12 17:14:04.143588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.686 qpair failed and we were unable to recover it. 
00:25:04.686 [2024-07-12 17:14:04.143728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.686 [2024-07-12 17:14:04.143768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.686 qpair failed and we were unable to recover it. 00:25:04.686 [2024-07-12 17:14:04.143882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.686 [2024-07-12 17:14:04.143912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.686 qpair failed and we were unable to recover it. 00:25:04.686 [2024-07-12 17:14:04.144043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.686 [2024-07-12 17:14:04.144073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.686 qpair failed and we were unable to recover it. 00:25:04.686 [2024-07-12 17:14:04.144174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.686 [2024-07-12 17:14:04.144203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.686 qpair failed and we were unable to recover it. 00:25:04.686 [2024-07-12 17:14:04.144308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.686 [2024-07-12 17:14:04.144341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.686 qpair failed and we were unable to recover it. 00:25:04.686 [2024-07-12 17:14:04.144456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.686 [2024-07-12 17:14:04.144485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.686 qpair failed and we were unable to recover it. 00:25:04.686 [2024-07-12 17:14:04.144628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.686 [2024-07-12 17:14:04.144658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.686 qpair failed and we were unable to recover it. 00:25:04.686 [2024-07-12 17:14:04.145554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.686 [2024-07-12 17:14:04.145590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.686 qpair failed and we were unable to recover it. 00:25:04.686 [2024-07-12 17:14:04.145764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.686 [2024-07-12 17:14:04.145796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.686 qpair failed and we were unable to recover it. 00:25:04.686 [2024-07-12 17:14:04.145945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.686 [2024-07-12 17:14:04.145975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.686 qpair failed and we were unable to recover it. 
00:25:04.686 [2024-07-12 17:14:04.146100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.686 [2024-07-12 17:14:04.146130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.686 qpair failed and we were unable to recover it. 00:25:04.686 [2024-07-12 17:14:04.146261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.686 [2024-07-12 17:14:04.146291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.686 qpair failed and we were unable to recover it. 00:25:04.686 [2024-07-12 17:14:04.146388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.686 [2024-07-12 17:14:04.146417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.686 qpair failed and we were unable to recover it. 00:25:04.686 [2024-07-12 17:14:04.146543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.686 [2024-07-12 17:14:04.146572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.686 qpair failed and we were unable to recover it. 00:25:04.686 [2024-07-12 17:14:04.146705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.686 [2024-07-12 17:14:04.146735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.686 qpair failed and we were unable to recover it. 00:25:04.686 [2024-07-12 17:14:04.147728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.686 [2024-07-12 17:14:04.147773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.686 qpair failed and we were unable to recover it. 00:25:04.686 [2024-07-12 17:14:04.147918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.686 [2024-07-12 17:14:04.147948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.686 qpair failed and we were unable to recover it. 00:25:04.686 [2024-07-12 17:14:04.148058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.686 [2024-07-12 17:14:04.148086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.686 qpair failed and we were unable to recover it. 00:25:04.686 [2024-07-12 17:14:04.148278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.686 [2024-07-12 17:14:04.148307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.686 qpair failed and we were unable to recover it. 00:25:04.686 [2024-07-12 17:14:04.148477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.686 [2024-07-12 17:14:04.148505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.686 qpair failed and we were unable to recover it. 
00:25:04.686 [2024-07-12 17:14:04.148685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.686 [2024-07-12 17:14:04.148713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.686 qpair failed and we were unable to recover it. 00:25:04.686 [2024-07-12 17:14:04.148845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.687 [2024-07-12 17:14:04.148874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.687 qpair failed and we were unable to recover it. 00:25:04.687 [2024-07-12 17:14:04.148974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.687 [2024-07-12 17:14:04.149003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.687 qpair failed and we were unable to recover it. 00:25:04.687 [2024-07-12 17:14:04.149141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.687 [2024-07-12 17:14:04.149170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.687 qpair failed and we were unable to recover it. 00:25:04.687 [2024-07-12 17:14:04.149300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.687 [2024-07-12 17:14:04.149328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.687 qpair failed and we were unable to recover it. 00:25:04.687 [2024-07-12 17:14:04.149435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.687 [2024-07-12 17:14:04.149463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.687 qpair failed and we were unable to recover it. 00:25:04.687 [2024-07-12 17:14:04.149602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.687 [2024-07-12 17:14:04.149630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.687 qpair failed and we were unable to recover it. 00:25:04.687 [2024-07-12 17:14:04.149803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.687 [2024-07-12 17:14:04.149845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.687 qpair failed and we were unable to recover it. 00:25:04.687 [2024-07-12 17:14:04.149954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.687 [2024-07-12 17:14:04.149982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.687 qpair failed and we were unable to recover it. 00:25:04.687 [2024-07-12 17:14:04.150098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.687 [2024-07-12 17:14:04.150126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.687 qpair failed and we were unable to recover it. 
00:25:04.687 [2024-07-12 17:14:04.150846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.687 [2024-07-12 17:14:04.150877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.687 qpair failed and we were unable to recover it. 00:25:04.687 [2024-07-12 17:14:04.151027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.687 [2024-07-12 17:14:04.151056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.687 qpair failed and we were unable to recover it. 00:25:04.687 [2024-07-12 17:14:04.151167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.687 [2024-07-12 17:14:04.151195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.687 qpair failed and we were unable to recover it. 00:25:04.687 [2024-07-12 17:14:04.151293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.687 [2024-07-12 17:14:04.151321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.687 qpair failed and we were unable to recover it. 00:25:04.687 [2024-07-12 17:14:04.151417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.687 [2024-07-12 17:14:04.151445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.687 qpair failed and we were unable to recover it. 00:25:04.687 [2024-07-12 17:14:04.151549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.687 [2024-07-12 17:14:04.151577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.687 qpair failed and we were unable to recover it. 00:25:04.687 [2024-07-12 17:14:04.151682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.687 [2024-07-12 17:14:04.151711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.687 qpair failed and we were unable to recover it. 00:25:04.687 [2024-07-12 17:14:04.151835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.687 [2024-07-12 17:14:04.151863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.687 qpair failed and we were unable to recover it. 00:25:04.687 [2024-07-12 17:14:04.151979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.687 [2024-07-12 17:14:04.152007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.687 qpair failed and we were unable to recover it. 00:25:04.687 [2024-07-12 17:14:04.152132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.687 [2024-07-12 17:14:04.152160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.687 qpair failed and we were unable to recover it. 
00:25:04.687 [2024-07-12 17:14:04.152291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.687 [2024-07-12 17:14:04.152318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.687 qpair failed and we were unable to recover it. 00:25:04.687 [2024-07-12 17:14:04.152449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.687 [2024-07-12 17:14:04.152477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.687 qpair failed and we were unable to recover it. 00:25:04.687 [2024-07-12 17:14:04.152588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.687 [2024-07-12 17:14:04.152615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.687 qpair failed and we were unable to recover it. 00:25:04.687 [2024-07-12 17:14:04.152755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.687 [2024-07-12 17:14:04.152784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.687 qpair failed and we were unable to recover it. 00:25:04.687 [2024-07-12 17:14:04.152922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.687 [2024-07-12 17:14:04.152953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.687 qpair failed and we were unable to recover it. 00:25:04.687 [2024-07-12 17:14:04.153054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.687 [2024-07-12 17:14:04.153082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.687 qpair failed and we were unable to recover it. 00:25:04.687 [2024-07-12 17:14:04.153199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.687 [2024-07-12 17:14:04.153227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.687 qpair failed and we were unable to recover it. 00:25:04.687 [2024-07-12 17:14:04.153971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.687 [2024-07-12 17:14:04.154002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.687 qpair failed and we were unable to recover it. 00:25:04.687 [2024-07-12 17:14:04.154186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.687 [2024-07-12 17:14:04.154213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.687 qpair failed and we were unable to recover it. 00:25:04.687 [2024-07-12 17:14:04.154880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.687 [2024-07-12 17:14:04.154910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.687 qpair failed and we were unable to recover it. 
00:25:04.687 [2024-07-12 17:14:04.155081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.687 [2024-07-12 17:14:04.155108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.687 qpair failed and we were unable to recover it. 00:25:04.687 [2024-07-12 17:14:04.155259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.687 [2024-07-12 17:14:04.155286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.687 qpair failed and we were unable to recover it. 00:25:04.687 [2024-07-12 17:14:04.155431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.687 [2024-07-12 17:14:04.155458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.687 qpair failed and we were unable to recover it. 00:25:04.687 [2024-07-12 17:14:04.155575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.687 [2024-07-12 17:14:04.155601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.687 qpair failed and we were unable to recover it. 00:25:04.687 [2024-07-12 17:14:04.155711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.687 [2024-07-12 17:14:04.155748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.687 qpair failed and we were unable to recover it. 00:25:04.687 [2024-07-12 17:14:04.155887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.687 [2024-07-12 17:14:04.155914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.687 qpair failed and we were unable to recover it. 00:25:04.687 [2024-07-12 17:14:04.156007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.687 [2024-07-12 17:14:04.156034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.687 qpair failed and we were unable to recover it. 00:25:04.687 [2024-07-12 17:14:04.156175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.687 [2024-07-12 17:14:04.156202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.687 qpair failed and we were unable to recover it. 00:25:04.687 [2024-07-12 17:14:04.156392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.687 [2024-07-12 17:14:04.156419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.687 qpair failed and we were unable to recover it. 00:25:04.687 [2024-07-12 17:14:04.156544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.687 [2024-07-12 17:14:04.156571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.687 qpair failed and we were unable to recover it. 
00:25:04.687 [2024-07-12 17:14:04.156711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.687 [2024-07-12 17:14:04.156746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.687 qpair failed and we were unable to recover it. 00:25:04.688 [2024-07-12 17:14:04.156860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.688 [2024-07-12 17:14:04.156887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.688 qpair failed and we were unable to recover it. 00:25:04.688 [2024-07-12 17:14:04.156989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.688 [2024-07-12 17:14:04.157014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.688 qpair failed and we were unable to recover it. 00:25:04.688 [2024-07-12 17:14:04.157176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.688 [2024-07-12 17:14:04.157202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.688 qpair failed and we were unable to recover it. 00:25:04.688 [2024-07-12 17:14:04.157308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.688 [2024-07-12 17:14:04.157334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.688 qpair failed and we were unable to recover it. 00:25:04.688 [2024-07-12 17:14:04.157474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.688 [2024-07-12 17:14:04.157501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.688 qpair failed and we were unable to recover it. 00:25:04.688 [2024-07-12 17:14:04.157633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.688 [2024-07-12 17:14:04.157659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.688 qpair failed and we were unable to recover it. 00:25:04.688 [2024-07-12 17:14:04.157799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.688 [2024-07-12 17:14:04.157826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.688 qpair failed and we were unable to recover it. 00:25:04.688 [2024-07-12 17:14:04.157958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.688 [2024-07-12 17:14:04.157984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.688 qpair failed and we were unable to recover it. 00:25:04.688 [2024-07-12 17:14:04.158156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.688 [2024-07-12 17:14:04.158182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.688 qpair failed and we were unable to recover it. 
00:25:04.688 [2024-07-12 17:14:04.158351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.688 [2024-07-12 17:14:04.158376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.688 qpair failed and we were unable to recover it. 00:25:04.688 [2024-07-12 17:14:04.158545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.688 [2024-07-12 17:14:04.158571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.688 qpair failed and we were unable to recover it. 00:25:04.688 [2024-07-12 17:14:04.158749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.688 [2024-07-12 17:14:04.158776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.688 qpair failed and we were unable to recover it. 00:25:04.688 [2024-07-12 17:14:04.158909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.688 [2024-07-12 17:14:04.158936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.688 qpair failed and we were unable to recover it. 00:25:04.688 [2024-07-12 17:14:04.159069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.688 [2024-07-12 17:14:04.159095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.688 qpair failed and we were unable to recover it. 00:25:04.688 [2024-07-12 17:14:04.159225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.688 [2024-07-12 17:14:04.159251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.688 qpair failed and we were unable to recover it. 00:25:04.688 [2024-07-12 17:14:04.159380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.688 [2024-07-12 17:14:04.159407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.688 qpair failed and we were unable to recover it. 00:25:04.688 [2024-07-12 17:14:04.159524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.688 [2024-07-12 17:14:04.159551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.688 qpair failed and we were unable to recover it. 00:25:04.688 [2024-07-12 17:14:04.159721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.688 [2024-07-12 17:14:04.159758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.688 qpair failed and we were unable to recover it. 00:25:04.688 [2024-07-12 17:14:04.159866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.688 [2024-07-12 17:14:04.159893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.688 qpair failed and we were unable to recover it. 
00:25:04.688 [2024-07-12 17:14:04.159996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.688 [2024-07-12 17:14:04.160023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.688 qpair failed and we were unable to recover it. 00:25:04.688 [2024-07-12 17:14:04.160154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.688 [2024-07-12 17:14:04.160181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.688 qpair failed and we were unable to recover it. 00:25:04.688 [2024-07-12 17:14:04.160316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.688 [2024-07-12 17:14:04.160343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.688 qpair failed and we were unable to recover it. 00:25:04.688 [2024-07-12 17:14:04.160474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.688 [2024-07-12 17:14:04.160500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.688 qpair failed and we were unable to recover it. 00:25:04.688 [2024-07-12 17:14:04.160635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.688 [2024-07-12 17:14:04.160680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.688 qpair failed and we were unable to recover it. 00:25:04.688 [2024-07-12 17:14:04.160799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.688 [2024-07-12 17:14:04.160827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.688 qpair failed and we were unable to recover it. 00:25:04.688 [2024-07-12 17:14:04.160935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.688 [2024-07-12 17:14:04.160962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.688 qpair failed and we were unable to recover it. 00:25:04.688 [2024-07-12 17:14:04.161101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.688 [2024-07-12 17:14:04.161127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.688 qpair failed and we were unable to recover it. 00:25:04.688 [2024-07-12 17:14:04.161229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.688 [2024-07-12 17:14:04.161256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.688 qpair failed and we were unable to recover it. 00:25:04.688 [2024-07-12 17:14:04.161369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.688 [2024-07-12 17:14:04.161411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.688 qpair failed and we were unable to recover it. 
00:25:04.688 [2024-07-12 17:14:04.161560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.688 [2024-07-12 17:14:04.161587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.688 qpair failed and we were unable to recover it. 00:25:04.688 [2024-07-12 17:14:04.161695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.688 [2024-07-12 17:14:04.161722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.688 qpair failed and we were unable to recover it. 00:25:04.688 [2024-07-12 17:14:04.161847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.688 [2024-07-12 17:14:04.161875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.689 qpair failed and we were unable to recover it. 00:25:04.689 [2024-07-12 17:14:04.162041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.689 [2024-07-12 17:14:04.162068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.689 qpair failed and we were unable to recover it. 00:25:04.689 [2024-07-12 17:14:04.162168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.689 [2024-07-12 17:14:04.162195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.689 qpair failed and we were unable to recover it. 00:25:04.689 [2024-07-12 17:14:04.162359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.689 [2024-07-12 17:14:04.162402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.689 qpair failed and we were unable to recover it. 00:25:04.689 [2024-07-12 17:14:04.162515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.689 [2024-07-12 17:14:04.162540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.689 qpair failed and we were unable to recover it. 00:25:04.689 [2024-07-12 17:14:04.162664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.689 [2024-07-12 17:14:04.162691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.689 qpair failed and we were unable to recover it. 00:25:04.689 [2024-07-12 17:14:04.162828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.689 [2024-07-12 17:14:04.162856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.689 qpair failed and we were unable to recover it. 00:25:04.689 [2024-07-12 17:14:04.162978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.689 [2024-07-12 17:14:04.163009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.689 qpair failed and we were unable to recover it. 
00:25:04.689 [2024-07-12 17:14:04.163110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.689 [2024-07-12 17:14:04.163139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.689 qpair failed and we were unable to recover it. 00:25:04.689 [2024-07-12 17:14:04.163274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.689 [2024-07-12 17:14:04.163315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.689 qpair failed and we were unable to recover it. 00:25:04.689 [2024-07-12 17:14:04.163451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.689 [2024-07-12 17:14:04.163493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.689 qpair failed and we were unable to recover it. 00:25:04.689 [2024-07-12 17:14:04.163603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.689 [2024-07-12 17:14:04.163630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.689 qpair failed and we were unable to recover it. 00:25:04.689 [2024-07-12 17:14:04.163725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.689 [2024-07-12 17:14:04.163760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.689 qpair failed and we were unable to recover it. 00:25:04.689 [2024-07-12 17:14:04.163885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.689 [2024-07-12 17:14:04.163912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.689 qpair failed and we were unable to recover it. 00:25:04.689 [2024-07-12 17:14:04.164037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.689 [2024-07-12 17:14:04.164078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.689 qpair failed and we were unable to recover it. 00:25:04.689 [2024-07-12 17:14:04.164196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.689 [2024-07-12 17:14:04.164237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.689 qpair failed and we were unable to recover it. 00:25:04.689 [2024-07-12 17:14:04.164385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.689 [2024-07-12 17:14:04.164426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.689 qpair failed and we were unable to recover it. 00:25:04.689 [2024-07-12 17:14:04.164558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.689 [2024-07-12 17:14:04.164584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.689 qpair failed and we were unable to recover it. 
00:25:04.689 [2024-07-12 17:14:04.164680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.689 [2024-07-12 17:14:04.164707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.689 qpair failed and we were unable to recover it. 00:25:04.689 [2024-07-12 17:14:04.164822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.689 [2024-07-12 17:14:04.164849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.689 qpair failed and we were unable to recover it. 00:25:04.689 [2024-07-12 17:14:04.164951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.689 [2024-07-12 17:14:04.164978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.689 qpair failed and we were unable to recover it. 00:25:04.689 [2024-07-12 17:14:04.165101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.689 [2024-07-12 17:14:04.165128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.689 qpair failed and we were unable to recover it. 00:25:04.689 [2024-07-12 17:14:04.165244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.689 [2024-07-12 17:14:04.165285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.689 qpair failed and we were unable to recover it. 00:25:04.689 [2024-07-12 17:14:04.165424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.689 [2024-07-12 17:14:04.165450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.689 qpair failed and we were unable to recover it. 00:25:04.689 [2024-07-12 17:14:04.165597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.689 [2024-07-12 17:14:04.165623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.689 qpair failed and we were unable to recover it. 00:25:04.689 [2024-07-12 17:14:04.165770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.689 [2024-07-12 17:14:04.165798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.689 qpair failed and we were unable to recover it. 00:25:04.689 [2024-07-12 17:14:04.165893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.689 [2024-07-12 17:14:04.165920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.689 qpair failed and we were unable to recover it. 00:25:04.689 [2024-07-12 17:14:04.166042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.689 [2024-07-12 17:14:04.166069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.689 qpair failed and we were unable to recover it. 
00:25:04.689 [2024-07-12 17:14:04.166202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.689 [2024-07-12 17:14:04.166228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.689 qpair failed and we were unable to recover it. 00:25:04.689 [2024-07-12 17:14:04.166324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.689 [2024-07-12 17:14:04.166351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.689 qpair failed and we were unable to recover it. 00:25:04.689 [2024-07-12 17:14:04.166446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.689 [2024-07-12 17:14:04.166473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.689 qpair failed and we were unable to recover it. 00:25:04.689 [2024-07-12 17:14:04.166597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.689 [2024-07-12 17:14:04.166624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.689 qpair failed and we were unable to recover it. 00:25:04.689 [2024-07-12 17:14:04.166750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.689 [2024-07-12 17:14:04.166781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.689 qpair failed and we were unable to recover it. 00:25:04.689 [2024-07-12 17:14:04.166874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.689 [2024-07-12 17:14:04.166900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.689 qpair failed and we were unable to recover it. 00:25:04.689 [2024-07-12 17:14:04.167025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.689 [2024-07-12 17:14:04.167051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.689 qpair failed and we were unable to recover it. 00:25:04.689 [2024-07-12 17:14:04.167138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.689 [2024-07-12 17:14:04.167165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.689 qpair failed and we were unable to recover it. 00:25:04.689 [2024-07-12 17:14:04.167266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.689 [2024-07-12 17:14:04.167292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.689 qpair failed and we were unable to recover it. 00:25:04.689 [2024-07-12 17:14:04.167389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.689 [2024-07-12 17:14:04.167415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.690 qpair failed and we were unable to recover it. 
00:25:04.690 [2024-07-12 17:14:04.167567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.690 [2024-07-12 17:14:04.167593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.690 qpair failed and we were unable to recover it. 00:25:04.690 [2024-07-12 17:14:04.167722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.690 [2024-07-12 17:14:04.167756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.690 qpair failed and we were unable to recover it. 00:25:04.690 [2024-07-12 17:14:04.167850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.690 [2024-07-12 17:14:04.167876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.690 qpair failed and we were unable to recover it. 00:25:04.690 [2024-07-12 17:14:04.168579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.690 [2024-07-12 17:14:04.168608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.690 qpair failed and we were unable to recover it. 00:25:04.690 [2024-07-12 17:14:04.168725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.690 [2024-07-12 17:14:04.168779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.690 qpair failed and we were unable to recover it. 00:25:04.690 [2024-07-12 17:14:04.168873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.690 [2024-07-12 17:14:04.168901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.690 qpair failed and we were unable to recover it. 00:25:04.690 [2024-07-12 17:14:04.169033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.690 [2024-07-12 17:14:04.169059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.690 qpair failed and we were unable to recover it. 00:25:04.690 [2024-07-12 17:14:04.169202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.690 [2024-07-12 17:14:04.169243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.690 qpair failed and we were unable to recover it. 00:25:04.690 [2024-07-12 17:14:04.169381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.690 [2024-07-12 17:14:04.169408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.690 qpair failed and we were unable to recover it. 00:25:04.690 [2024-07-12 17:14:04.169547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.690 [2024-07-12 17:14:04.169585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.690 qpair failed and we were unable to recover it. 
00:25:04.690 [2024-07-12 17:14:04.169734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.690 [2024-07-12 17:14:04.169771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.690 qpair failed and we were unable to recover it. 00:25:04.690 [2024-07-12 17:14:04.169876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.690 [2024-07-12 17:14:04.169903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.690 qpair failed and we were unable to recover it. 00:25:04.690 [2024-07-12 17:14:04.170033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.690 [2024-07-12 17:14:04.170059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.690 qpair failed and we were unable to recover it. 00:25:04.690 [2024-07-12 17:14:04.170185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.690 [2024-07-12 17:14:04.170211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.690 qpair failed and we were unable to recover it. 00:25:04.690 [2024-07-12 17:14:04.170336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.690 [2024-07-12 17:14:04.170362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.690 qpair failed and we were unable to recover it. 00:25:04.690 [2024-07-12 17:14:04.170525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.690 [2024-07-12 17:14:04.170566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.690 qpair failed and we were unable to recover it. 00:25:04.690 [2024-07-12 17:14:04.170710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.690 [2024-07-12 17:14:04.170746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.690 qpair failed and we were unable to recover it. 00:25:04.690 [2024-07-12 17:14:04.170852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.690 [2024-07-12 17:14:04.170878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.690 qpair failed and we were unable to recover it. 00:25:04.690 [2024-07-12 17:14:04.170998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.690 [2024-07-12 17:14:04.171025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.690 qpair failed and we were unable to recover it. 00:25:04.690 [2024-07-12 17:14:04.171172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.690 [2024-07-12 17:14:04.171216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.690 qpair failed and we were unable to recover it. 
00:25:04.690 [2024-07-12 17:14:04.171374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.690 [2024-07-12 17:14:04.171400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.690 qpair failed and we were unable to recover it. 00:25:04.690 [2024-07-12 17:14:04.171561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.690 [2024-07-12 17:14:04.171587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.690 qpair failed and we were unable to recover it. 00:25:04.690 [2024-07-12 17:14:04.171695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.690 [2024-07-12 17:14:04.171722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.690 qpair failed and we were unable to recover it. 00:25:04.690 [2024-07-12 17:14:04.171829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.690 [2024-07-12 17:14:04.171856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.690 qpair failed and we were unable to recover it. 00:25:04.690 [2024-07-12 17:14:04.171956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.690 [2024-07-12 17:14:04.171982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.690 qpair failed and we were unable to recover it. 00:25:04.690 [2024-07-12 17:14:04.172116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.690 [2024-07-12 17:14:04.172142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.690 qpair failed and we were unable to recover it. 00:25:04.690 [2024-07-12 17:14:04.172283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.690 [2024-07-12 17:14:04.172309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.691 qpair failed and we were unable to recover it. 00:25:04.691 [2024-07-12 17:14:04.172455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.691 [2024-07-12 17:14:04.172482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.691 qpair failed and we were unable to recover it. 00:25:04.691 [2024-07-12 17:14:04.172625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.691 [2024-07-12 17:14:04.172666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.691 qpair failed and we were unable to recover it. 00:25:04.691 [2024-07-12 17:14:04.172792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.691 [2024-07-12 17:14:04.172819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.691 qpair failed and we were unable to recover it. 
00:25:04.691 [2024-07-12 17:14:04.172944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.691 [2024-07-12 17:14:04.172970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.691 qpair failed and we were unable to recover it. 00:25:04.691 [2024-07-12 17:14:04.173142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.691 [2024-07-12 17:14:04.173169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.691 qpair failed and we were unable to recover it. 00:25:04.691 [2024-07-12 17:14:04.173310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.691 [2024-07-12 17:14:04.173337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.691 qpair failed and we were unable to recover it. 00:25:04.691 [2024-07-12 17:14:04.173470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.691 [2024-07-12 17:14:04.173496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.691 qpair failed and we were unable to recover it. 00:25:04.691 [2024-07-12 17:14:04.173616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.691 [2024-07-12 17:14:04.173646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.691 qpair failed and we were unable to recover it. 00:25:04.691 [2024-07-12 17:14:04.173806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.691 [2024-07-12 17:14:04.173833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.691 qpair failed and we were unable to recover it. 00:25:04.691 [2024-07-12 17:14:04.174018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.691 [2024-07-12 17:14:04.174045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.691 qpair failed and we were unable to recover it. 00:25:04.691 [2024-07-12 17:14:04.174220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.691 [2024-07-12 17:14:04.174254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.691 qpair failed and we were unable to recover it. 00:25:04.691 [2024-07-12 17:14:04.174393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.691 [2024-07-12 17:14:04.174420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.691 qpair failed and we were unable to recover it. 00:25:04.691 [2024-07-12 17:14:04.174512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.691 [2024-07-12 17:14:04.174538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.691 qpair failed and we were unable to recover it. 
00:25:04.691 [2024-07-12 17:14:04.174675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.691 [2024-07-12 17:14:04.174702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.691 qpair failed and we were unable to recover it. 00:25:04.691 [2024-07-12 17:14:04.174850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.691 [2024-07-12 17:14:04.174878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.691 qpair failed and we were unable to recover it. 00:25:04.691 [2024-07-12 17:14:04.174977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.691 [2024-07-12 17:14:04.175003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.691 qpair failed and we were unable to recover it. 00:25:04.691 [2024-07-12 17:14:04.175146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.691 [2024-07-12 17:14:04.175172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.691 qpair failed and we were unable to recover it. 00:25:04.691 [2024-07-12 17:14:04.175338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.691 [2024-07-12 17:14:04.175365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.691 qpair failed and we were unable to recover it. 00:25:04.691 [2024-07-12 17:14:04.175489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.691 [2024-07-12 17:14:04.175515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.691 qpair failed and we were unable to recover it. 00:25:04.691 [2024-07-12 17:14:04.175668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.691 [2024-07-12 17:14:04.175695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.691 qpair failed and we were unable to recover it. 00:25:04.691 [2024-07-12 17:14:04.175802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.691 [2024-07-12 17:14:04.175829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.691 qpair failed and we were unable to recover it. 00:25:04.691 [2024-07-12 17:14:04.175939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.691 [2024-07-12 17:14:04.175965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.691 qpair failed and we were unable to recover it. 00:25:04.691 [2024-07-12 17:14:04.176057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.691 [2024-07-12 17:14:04.176090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.691 qpair failed and we were unable to recover it. 
00:25:04.691 [2024-07-12 17:14:04.176243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.691 [2024-07-12 17:14:04.176269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.691 qpair failed and we were unable to recover it. 00:25:04.691 [2024-07-12 17:14:04.176384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.691 [2024-07-12 17:14:04.176424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.691 qpair failed and we were unable to recover it. 00:25:04.691 [2024-07-12 17:14:04.176551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.691 [2024-07-12 17:14:04.176578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.691 qpair failed and we were unable to recover it. 00:25:04.691 [2024-07-12 17:14:04.176710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.691 [2024-07-12 17:14:04.176736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.691 qpair failed and we were unable to recover it. 00:25:04.691 [2024-07-12 17:14:04.176841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.691 [2024-07-12 17:14:04.176868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.691 qpair failed and we were unable to recover it. 00:25:04.691 [2024-07-12 17:14:04.176960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.691 [2024-07-12 17:14:04.176986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.691 qpair failed and we were unable to recover it. 00:25:04.691 [2024-07-12 17:14:04.177109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.691 [2024-07-12 17:14:04.177136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.691 qpair failed and we were unable to recover it. 00:25:04.691 [2024-07-12 17:14:04.177291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.691 [2024-07-12 17:14:04.177331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.691 qpair failed and we were unable to recover it. 00:25:04.691 [2024-07-12 17:14:04.177497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.691 [2024-07-12 17:14:04.177523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.691 qpair failed and we were unable to recover it. 00:25:04.691 [2024-07-12 17:14:04.177678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.691 [2024-07-12 17:14:04.177705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.691 qpair failed and we were unable to recover it. 
00:25:04.691 [2024-07-12 17:14:04.177828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.691 [2024-07-12 17:14:04.177856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.691 qpair failed and we were unable to recover it. 00:25:04.691 [2024-07-12 17:14:04.177960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.691 [2024-07-12 17:14:04.177986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.691 qpair failed and we were unable to recover it. 00:25:04.691 [2024-07-12 17:14:04.178107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.691 [2024-07-12 17:14:04.178133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.691 qpair failed and we were unable to recover it. 00:25:04.691 [2024-07-12 17:14:04.178227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.691 [2024-07-12 17:14:04.178254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.691 qpair failed and we were unable to recover it. 00:25:04.692 [2024-07-12 17:14:04.178375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.692 [2024-07-12 17:14:04.178402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.692 qpair failed and we were unable to recover it. 00:25:04.692 [2024-07-12 17:14:04.178561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.692 [2024-07-12 17:14:04.178588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.692 qpair failed and we were unable to recover it. 00:25:04.692 [2024-07-12 17:14:04.178707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.692 [2024-07-12 17:14:04.178733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.692 qpair failed and we were unable to recover it. 00:25:04.692 [2024-07-12 17:14:04.178870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.692 [2024-07-12 17:14:04.178896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.692 qpair failed and we were unable to recover it. 00:25:04.692 [2024-07-12 17:14:04.178997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.692 [2024-07-12 17:14:04.179023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.692 qpair failed and we were unable to recover it. 00:25:04.692 [2024-07-12 17:14:04.179155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.692 [2024-07-12 17:14:04.179181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.692 qpair failed and we were unable to recover it. 
00:25:04.692 [2024-07-12 17:14:04.179282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.692 [2024-07-12 17:14:04.179308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420
00:25:04.692 qpair failed and we were unable to recover it.
00:25:04.692 [2024-07-12 17:14:04.179412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.692 [2024-07-12 17:14:04.179438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420
00:25:04.692 qpair failed and we were unable to recover it.
00:25:04.692 [2024-07-12 17:14:04.179511] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization...
00:25:04.692 [2024-07-12 17:14:04.179562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.692 [2024-07-12 17:14:04.179576] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:04.692 [2024-07-12 17:14:04.179587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420
00:25:04.692 qpair failed and we were unable to recover it.
00:25:04.692 [2024-07-12 17:14:04.179689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.692 [2024-07-12 17:14:04.179714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420
00:25:04.692 qpair failed and we were unable to recover it.
00:25:04.692 [2024-07-12 17:14:04.179843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.692 [2024-07-12 17:14:04.179884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420
00:25:04.692 qpair failed and we were unable to recover it.
00:25:04.692 [2024-07-12 17:14:04.179996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.692 [2024-07-12 17:14:04.180022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420
00:25:04.692 qpair failed and we were unable to recover it.
00:25:04.692 [2024-07-12 17:14:04.180172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.692 [2024-07-12 17:14:04.180197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420
00:25:04.692 qpair failed and we were unable to recover it.
00:25:04.692 [2024-07-12 17:14:04.180338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.692 [2024-07-12 17:14:04.180366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420
00:25:04.692 qpair failed and we were unable to recover it.
00:25:04.692 [2024-07-12 17:14:04.180523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.692 [2024-07-12 17:14:04.180550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420
00:25:04.692 qpair failed and we were unable to recover it.
00:25:04.692 [2024-07-12 17:14:04.180653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.692 [2024-07-12 17:14:04.180688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.692 qpair failed and we were unable to recover it. 00:25:04.692 [2024-07-12 17:14:04.180791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.692 [2024-07-12 17:14:04.180818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.692 qpair failed and we were unable to recover it. 00:25:04.692 [2024-07-12 17:14:04.180915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.692 [2024-07-12 17:14:04.180941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.692 qpair failed and we were unable to recover it. 00:25:04.692 [2024-07-12 17:14:04.181076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.692 [2024-07-12 17:14:04.181101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.692 qpair failed and we were unable to recover it. 00:25:04.692 [2024-07-12 17:14:04.181226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.692 [2024-07-12 17:14:04.181251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.692 qpair failed and we were unable to recover it. 00:25:04.692 [2024-07-12 17:14:04.181350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.692 [2024-07-12 17:14:04.181375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.692 qpair failed and we were unable to recover it. 00:25:04.692 [2024-07-12 17:14:04.181489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.692 [2024-07-12 17:14:04.181515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.692 qpair failed and we were unable to recover it. 00:25:04.692 [2024-07-12 17:14:04.181644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.692 [2024-07-12 17:14:04.181679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.692 qpair failed and we were unable to recover it. 00:25:04.692 [2024-07-12 17:14:04.181784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.692 [2024-07-12 17:14:04.181813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.692 qpair failed and we were unable to recover it. 00:25:04.692 [2024-07-12 17:14:04.181915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.692 [2024-07-12 17:14:04.181941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.692 qpair failed and we were unable to recover it. 
00:25:04.692 [2024-07-12 17:14:04.182120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.692 [2024-07-12 17:14:04.182170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.692 qpair failed and we were unable to recover it. 00:25:04.692 [2024-07-12 17:14:04.182299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.692 [2024-07-12 17:14:04.182324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.692 qpair failed and we were unable to recover it. 00:25:04.692 [2024-07-12 17:14:04.182499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.692 [2024-07-12 17:14:04.182525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.692 qpair failed and we were unable to recover it. 00:25:04.692 [2024-07-12 17:14:04.182654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.692 [2024-07-12 17:14:04.182681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.692 qpair failed and we were unable to recover it. 00:25:04.692 [2024-07-12 17:14:04.182783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.692 [2024-07-12 17:14:04.182809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.692 qpair failed and we were unable to recover it. 00:25:04.692 [2024-07-12 17:14:04.182905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.692 [2024-07-12 17:14:04.182931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.692 qpair failed and we were unable to recover it. 00:25:04.692 [2024-07-12 17:14:04.183020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.692 [2024-07-12 17:14:04.183057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.692 qpair failed and we were unable to recover it. 00:25:04.692 [2024-07-12 17:14:04.183184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.692 [2024-07-12 17:14:04.183218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.692 qpair failed and we were unable to recover it. 00:25:04.692 [2024-07-12 17:14:04.183385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.692 [2024-07-12 17:14:04.183412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.692 qpair failed and we were unable to recover it. 00:25:04.692 [2024-07-12 17:14:04.183564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.692 [2024-07-12 17:14:04.183590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.692 qpair failed and we were unable to recover it. 
00:25:04.692 [2024-07-12 17:14:04.183715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.692 [2024-07-12 17:14:04.183752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.692 qpair failed and we were unable to recover it. 00:25:04.693 [2024-07-12 17:14:04.183867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.693 [2024-07-12 17:14:04.183893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.693 qpair failed and we were unable to recover it. 00:25:04.693 [2024-07-12 17:14:04.183992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.693 [2024-07-12 17:14:04.184018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.693 qpair failed and we were unable to recover it. 00:25:04.693 [2024-07-12 17:14:04.184139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.693 [2024-07-12 17:14:04.184165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.693 qpair failed and we were unable to recover it. 00:25:04.693 [2024-07-12 17:14:04.184319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.693 [2024-07-12 17:14:04.184345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.693 qpair failed and we were unable to recover it. 00:25:04.693 [2024-07-12 17:14:04.184490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.693 [2024-07-12 17:14:04.184516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.693 qpair failed and we were unable to recover it. 00:25:04.693 [2024-07-12 17:14:04.184651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.693 [2024-07-12 17:14:04.184678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.693 qpair failed and we were unable to recover it. 00:25:04.693 [2024-07-12 17:14:04.184811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.693 [2024-07-12 17:14:04.184838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.693 qpair failed and we were unable to recover it. 00:25:04.693 [2024-07-12 17:14:04.184942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.693 [2024-07-12 17:14:04.184968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.693 qpair failed and we were unable to recover it. 00:25:04.693 [2024-07-12 17:14:04.185121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.693 [2024-07-12 17:14:04.185147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.693 qpair failed and we were unable to recover it. 
00:25:04.693 [2024-07-12 17:14:04.185286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.693 [2024-07-12 17:14:04.185312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.693 qpair failed and we were unable to recover it. 00:25:04.693 [2024-07-12 17:14:04.185463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.693 [2024-07-12 17:14:04.185489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.693 qpair failed and we were unable to recover it. 00:25:04.693 [2024-07-12 17:14:04.185596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.693 [2024-07-12 17:14:04.185622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.693 qpair failed and we were unable to recover it. 00:25:04.693 [2024-07-12 17:14:04.185750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.693 [2024-07-12 17:14:04.185777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:04.693 qpair failed and we were unable to recover it. 00:25:04.693 [2024-07-12 17:14:04.185897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.693 [2024-07-12 17:14:04.185939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab7c000b90 with addr=10.0.0.2, port=4420 00:25:04.693 qpair failed and we were unable to recover it. 00:25:04.693 [2024-07-12 17:14:04.186094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.693 [2024-07-12 17:14:04.186134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.693 qpair failed and we were unable to recover it. 00:25:04.693 [2024-07-12 17:14:04.186250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.693 [2024-07-12 17:14:04.186277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.693 qpair failed and we were unable to recover it. 00:25:04.693 [2024-07-12 17:14:04.186449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.693 [2024-07-12 17:14:04.186475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.693 qpair failed and we were unable to recover it. 00:25:04.693 [2024-07-12 17:14:04.186580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.693 [2024-07-12 17:14:04.186606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.693 qpair failed and we were unable to recover it. 00:25:04.693 [2024-07-12 17:14:04.186747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.693 [2024-07-12 17:14:04.186774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.693 qpair failed and we were unable to recover it. 
00:25:04.693 [2024-07-12 17:14:04.186906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.693 [2024-07-12 17:14:04.186932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.693 qpair failed and we were unable to recover it. 00:25:04.693 [2024-07-12 17:14:04.187083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.693 [2024-07-12 17:14:04.187109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.693 qpair failed and we were unable to recover it. 00:25:04.693 [2024-07-12 17:14:04.187205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.693 [2024-07-12 17:14:04.187230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.693 qpair failed and we were unable to recover it. 00:25:04.693 [2024-07-12 17:14:04.187357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.693 [2024-07-12 17:14:04.187383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.693 qpair failed and we were unable to recover it. 00:25:04.693 [2024-07-12 17:14:04.188137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.693 [2024-07-12 17:14:04.188168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.693 qpair failed and we were unable to recover it. 00:25:04.693 [2024-07-12 17:14:04.188346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.693 [2024-07-12 17:14:04.188373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.693 qpair failed and we were unable to recover it. 00:25:04.693 [2024-07-12 17:14:04.188536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.693 [2024-07-12 17:14:04.188563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.693 qpair failed and we were unable to recover it. 00:25:04.693 [2024-07-12 17:14:04.188700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.693 [2024-07-12 17:14:04.188731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.693 qpair failed and we were unable to recover it. 00:25:04.693 [2024-07-12 17:14:04.188848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.693 [2024-07-12 17:14:04.188875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.693 qpair failed and we were unable to recover it. 00:25:04.693 [2024-07-12 17:14:04.188971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.693 [2024-07-12 17:14:04.188997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.693 qpair failed and we were unable to recover it. 
00:25:04.693 [2024-07-12 17:14:04.189121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.693 [2024-07-12 17:14:04.189148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.693 qpair failed and we were unable to recover it. 00:25:04.693 [2024-07-12 17:14:04.189279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.693 [2024-07-12 17:14:04.189305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.693 qpair failed and we were unable to recover it. 00:25:04.693 [2024-07-12 17:14:04.189410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.693 [2024-07-12 17:14:04.189436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.693 qpair failed and we were unable to recover it. 00:25:04.693 [2024-07-12 17:14:04.189586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.693 [2024-07-12 17:14:04.189612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.693 qpair failed and we were unable to recover it. 00:25:04.693 [2024-07-12 17:14:04.189697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.693 [2024-07-12 17:14:04.189723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.693 qpair failed and we were unable to recover it. 00:25:04.693 [2024-07-12 17:14:04.189829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.693 [2024-07-12 17:14:04.189856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.693 qpair failed and we were unable to recover it. 00:25:04.693 [2024-07-12 17:14:04.189957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.693 [2024-07-12 17:14:04.189983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.693 qpair failed and we were unable to recover it. 00:25:04.693 [2024-07-12 17:14:04.190118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.694 [2024-07-12 17:14:04.190144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.694 qpair failed and we were unable to recover it. 00:25:04.694 [2024-07-12 17:14:04.190242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.694 [2024-07-12 17:14:04.190268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.694 qpair failed and we were unable to recover it. 00:25:04.694 [2024-07-12 17:14:04.190390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.694 [2024-07-12 17:14:04.190416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.694 qpair failed and we were unable to recover it. 
00:25:04.694 [2024-07-12 17:14:04.190511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.694 [2024-07-12 17:14:04.190537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.694 qpair failed and we were unable to recover it. 00:25:04.694 [2024-07-12 17:14:04.190694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.694 [2024-07-12 17:14:04.190721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.694 qpair failed and we were unable to recover it. 00:25:04.694 [2024-07-12 17:14:04.190855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.694 [2024-07-12 17:14:04.190882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.694 qpair failed and we were unable to recover it. 00:25:04.694 [2024-07-12 17:14:04.190984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.694 [2024-07-12 17:14:04.191010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.694 qpair failed and we were unable to recover it. 00:25:04.694 [2024-07-12 17:14:04.191134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.694 [2024-07-12 17:14:04.191160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.694 qpair failed and we were unable to recover it. 00:25:04.694 [2024-07-12 17:14:04.191278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.694 [2024-07-12 17:14:04.191304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.694 qpair failed and we were unable to recover it. 00:25:04.694 [2024-07-12 17:14:04.191403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.694 [2024-07-12 17:14:04.191429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.694 qpair failed and we were unable to recover it. 00:25:04.694 [2024-07-12 17:14:04.191576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.694 [2024-07-12 17:14:04.191606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.694 qpair failed and we were unable to recover it. 00:25:04.694 [2024-07-12 17:14:04.191732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.694 [2024-07-12 17:14:04.191766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.694 qpair failed and we were unable to recover it. 00:25:04.694 [2024-07-12 17:14:04.191867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.694 [2024-07-12 17:14:04.191893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.694 qpair failed and we were unable to recover it. 
00:25:04.694 [2024-07-12 17:14:04.192012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.694 [2024-07-12 17:14:04.192037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.694 qpair failed and we were unable to recover it. 00:25:04.694 [2024-07-12 17:14:04.192135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.694 [2024-07-12 17:14:04.192161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.694 qpair failed and we were unable to recover it. 00:25:04.694 [2024-07-12 17:14:04.192260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.694 [2024-07-12 17:14:04.192285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.694 qpair failed and we were unable to recover it. 00:25:04.694 [2024-07-12 17:14:04.192378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.694 [2024-07-12 17:14:04.192404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.694 qpair failed and we were unable to recover it. 00:25:04.694 [2024-07-12 17:14:04.192505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.694 [2024-07-12 17:14:04.192534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.694 qpair failed and we were unable to recover it. 00:25:04.694 [2024-07-12 17:14:04.192686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.694 [2024-07-12 17:14:04.192712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.694 qpair failed and we were unable to recover it. 00:25:04.694 [2024-07-12 17:14:04.192815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.694 [2024-07-12 17:14:04.192842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.694 qpair failed and we were unable to recover it. 00:25:04.694 [2024-07-12 17:14:04.192936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.694 [2024-07-12 17:14:04.192962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.694 qpair failed and we were unable to recover it. 00:25:04.694 [2024-07-12 17:14:04.193062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.694 [2024-07-12 17:14:04.193087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.694 qpair failed and we were unable to recover it. 00:25:04.694 [2024-07-12 17:14:04.193239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.694 [2024-07-12 17:14:04.193265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.694 qpair failed and we were unable to recover it. 
00:25:04.694 [2024-07-12 17:14:04.193382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.694 [2024-07-12 17:14:04.193408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.694 qpair failed and we were unable to recover it. 00:25:04.694 [2024-07-12 17:14:04.193517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.694 [2024-07-12 17:14:04.193543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.694 qpair failed and we were unable to recover it. 00:25:04.694 [2024-07-12 17:14:04.193654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.694 [2024-07-12 17:14:04.193681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.694 qpair failed and we were unable to recover it. 00:25:04.694 [2024-07-12 17:14:04.193834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.694 [2024-07-12 17:14:04.193861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.694 qpair failed and we were unable to recover it. 00:25:04.694 [2024-07-12 17:14:04.193961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.694 [2024-07-12 17:14:04.193987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.694 qpair failed and we were unable to recover it. 00:25:04.694 [2024-07-12 17:14:04.194113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.694 [2024-07-12 17:14:04.194139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.694 qpair failed and we were unable to recover it. 00:25:04.694 [2024-07-12 17:14:04.194241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.694 [2024-07-12 17:14:04.194267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.694 qpair failed and we were unable to recover it. 00:25:04.694 [2024-07-12 17:14:04.194389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.694 [2024-07-12 17:14:04.194415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.694 qpair failed and we were unable to recover it. 00:25:04.694 [2024-07-12 17:14:04.194536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.694 [2024-07-12 17:14:04.194573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.694 qpair failed and we were unable to recover it. 00:25:04.694 [2024-07-12 17:14:04.194682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.694 [2024-07-12 17:14:04.194709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.694 qpair failed and we were unable to recover it. 
00:25:04.694 [2024-07-12 17:14:04.194821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.694 [2024-07-12 17:14:04.194847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.694 qpair failed and we were unable to recover it. 00:25:04.694 [2024-07-12 17:14:04.194953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.694 [2024-07-12 17:14:04.194979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.694 qpair failed and we were unable to recover it. 00:25:04.694 [2024-07-12 17:14:04.195101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.694 [2024-07-12 17:14:04.195127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.694 qpair failed and we were unable to recover it. 00:25:04.694 [2024-07-12 17:14:04.195219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.694 [2024-07-12 17:14:04.195245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.694 qpair failed and we were unable to recover it. 00:25:04.694 [2024-07-12 17:14:04.195365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.695 [2024-07-12 17:14:04.195403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.695 qpair failed and we were unable to recover it. 00:25:04.695 [2024-07-12 17:14:04.195542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.695 [2024-07-12 17:14:04.195568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.695 qpair failed and we were unable to recover it. 00:25:04.695 [2024-07-12 17:14:04.195657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.695 [2024-07-12 17:14:04.195683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.695 qpair failed and we were unable to recover it. 00:25:04.695 [2024-07-12 17:14:04.195812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.695 [2024-07-12 17:14:04.195839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.695 qpair failed and we were unable to recover it. 00:25:04.695 [2024-07-12 17:14:04.195941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.695 [2024-07-12 17:14:04.195967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.695 qpair failed and we were unable to recover it. 00:25:04.695 [2024-07-12 17:14:04.196093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.695 [2024-07-12 17:14:04.196119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.695 qpair failed and we were unable to recover it. 
00:25:04.695 [2024-07-12 17:14:04.196218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.695 [2024-07-12 17:14:04.196243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.695 qpair failed and we were unable to recover it. 00:25:04.695 [2024-07-12 17:14:04.196399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.695 [2024-07-12 17:14:04.196425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.695 qpair failed and we were unable to recover it. 00:25:04.695 [2024-07-12 17:14:04.196522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.695 [2024-07-12 17:14:04.196548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.695 qpair failed and we were unable to recover it. 00:25:04.695 [2024-07-12 17:14:04.196648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.695 [2024-07-12 17:14:04.196674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.695 qpair failed and we were unable to recover it. 00:25:04.695 [2024-07-12 17:14:04.196766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.695 [2024-07-12 17:14:04.196792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.695 qpair failed and we were unable to recover it. 00:25:04.695 [2024-07-12 17:14:04.196915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.695 [2024-07-12 17:14:04.196941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.695 qpair failed and we were unable to recover it. 00:25:04.695 [2024-07-12 17:14:04.197039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.695 [2024-07-12 17:14:04.197065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.695 qpair failed and we were unable to recover it. 00:25:04.695 [2024-07-12 17:14:04.197189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.695 [2024-07-12 17:14:04.197215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.695 qpair failed and we were unable to recover it. 00:25:04.695 [2024-07-12 17:14:04.197336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.695 [2024-07-12 17:14:04.197362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.695 qpair failed and we were unable to recover it. 00:25:04.695 [2024-07-12 17:14:04.197480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.695 [2024-07-12 17:14:04.197506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.695 qpair failed and we were unable to recover it. 
00:25:04.695 [2024-07-12 17:14:04.197630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.695 [2024-07-12 17:14:04.197656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.695 qpair failed and we were unable to recover it. 00:25:04.695 [2024-07-12 17:14:04.197753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.695 [2024-07-12 17:14:04.197780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.695 qpair failed and we were unable to recover it. 00:25:04.695 [2024-07-12 17:14:04.197934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.695 [2024-07-12 17:14:04.197960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.695 qpair failed and we were unable to recover it. 00:25:04.695 [2024-07-12 17:14:04.198054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.695 [2024-07-12 17:14:04.198080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.695 qpair failed and we were unable to recover it. 00:25:04.695 [2024-07-12 17:14:04.198214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.695 [2024-07-12 17:14:04.198244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.695 qpair failed and we were unable to recover it. 00:25:04.695 [2024-07-12 17:14:04.198370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.695 [2024-07-12 17:14:04.198401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.695 qpair failed and we were unable to recover it. 00:25:04.695 [2024-07-12 17:14:04.198542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.695 [2024-07-12 17:14:04.198568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.695 qpair failed and we were unable to recover it. 00:25:04.695 [2024-07-12 17:14:04.198732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.695 [2024-07-12 17:14:04.198775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.695 qpair failed and we were unable to recover it. 00:25:04.695 [2024-07-12 17:14:04.199512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.695 [2024-07-12 17:14:04.199540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.695 qpair failed and we were unable to recover it. 00:25:04.695 [2024-07-12 17:14:04.199691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.695 [2024-07-12 17:14:04.199730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.695 qpair failed and we were unable to recover it. 
00:25:04.695 [2024-07-12 17:14:04.199868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.695 [2024-07-12 17:14:04.199894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.695 qpair failed and we were unable to recover it. 00:25:04.695 [2024-07-12 17:14:04.199987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.695 [2024-07-12 17:14:04.200013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.695 qpair failed and we were unable to recover it. 00:25:04.695 [2024-07-12 17:14:04.200165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.695 [2024-07-12 17:14:04.200190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.695 qpair failed and we were unable to recover it. 00:25:04.695 [2024-07-12 17:14:04.200327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.695 [2024-07-12 17:14:04.200352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.695 qpair failed and we were unable to recover it. 00:25:04.695 [2024-07-12 17:14:04.200492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.695 [2024-07-12 17:14:04.200517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.696 qpair failed and we were unable to recover it. 00:25:04.696 [2024-07-12 17:14:04.200662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.696 [2024-07-12 17:14:04.200687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.696 qpair failed and we were unable to recover it. 00:25:04.696 [2024-07-12 17:14:04.200838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.696 [2024-07-12 17:14:04.200864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.696 qpair failed and we were unable to recover it. 00:25:04.696 [2024-07-12 17:14:04.200964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.696 [2024-07-12 17:14:04.200990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.696 qpair failed and we were unable to recover it. 00:25:04.696 [2024-07-12 17:14:04.201113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.696 [2024-07-12 17:14:04.201138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.696 qpair failed and we were unable to recover it. 00:25:04.696 [2024-07-12 17:14:04.201259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.696 [2024-07-12 17:14:04.201284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.696 qpair failed and we were unable to recover it. 
00:25:04.696 [2024-07-12 17:14:04.201429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.696 [2024-07-12 17:14:04.201454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.696 qpair failed and we were unable to recover it. 00:25:04.696 [2024-07-12 17:14:04.201600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.696 [2024-07-12 17:14:04.201652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.696 qpair failed and we were unable to recover it. 00:25:04.696 [2024-07-12 17:14:04.201816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.696 [2024-07-12 17:14:04.201843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.696 qpair failed and we were unable to recover it. 00:25:04.696 [2024-07-12 17:14:04.201944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.696 [2024-07-12 17:14:04.201970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.696 qpair failed and we were unable to recover it. 00:25:04.696 [2024-07-12 17:14:04.202104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.696 [2024-07-12 17:14:04.202129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.696 qpair failed and we were unable to recover it. 00:25:04.696 [2024-07-12 17:14:04.202269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.696 [2024-07-12 17:14:04.202307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.696 qpair failed and we were unable to recover it. 00:25:04.696 [2024-07-12 17:14:04.202420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.696 [2024-07-12 17:14:04.202446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.696 qpair failed and we were unable to recover it. 00:25:04.696 [2024-07-12 17:14:04.202552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.696 [2024-07-12 17:14:04.202578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.696 qpair failed and we were unable to recover it. 00:25:04.696 [2024-07-12 17:14:04.202696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.696 [2024-07-12 17:14:04.202722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.696 qpair failed and we were unable to recover it. 00:25:04.696 [2024-07-12 17:14:04.202857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.696 [2024-07-12 17:14:04.202883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.696 qpair failed and we were unable to recover it. 
00:25:04.696 [2024-07-12 17:14:04.203011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.696 [2024-07-12 17:14:04.203051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.696 qpair failed and we were unable to recover it. 00:25:04.696 [2024-07-12 17:14:04.203170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.696 [2024-07-12 17:14:04.203197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.696 qpair failed and we were unable to recover it. 00:25:04.696 [2024-07-12 17:14:04.203332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.696 [2024-07-12 17:14:04.203372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.696 qpair failed and we were unable to recover it. 00:25:04.696 [2024-07-12 17:14:04.204337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.696 [2024-07-12 17:14:04.204366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.696 qpair failed and we were unable to recover it. 00:25:04.696 [2024-07-12 17:14:04.204537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.696 [2024-07-12 17:14:04.204563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.696 qpair failed and we were unable to recover it. 00:25:04.696 [2024-07-12 17:14:04.204684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.696 [2024-07-12 17:14:04.204724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.696 qpair failed and we were unable to recover it. 00:25:04.696 [2024-07-12 17:14:04.204873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.696 [2024-07-12 17:14:04.204899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.696 qpair failed and we were unable to recover it. 00:25:04.696 [2024-07-12 17:14:04.205020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.696 [2024-07-12 17:14:04.205046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.696 qpair failed and we were unable to recover it. 00:25:04.696 [2024-07-12 17:14:04.205148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.696 [2024-07-12 17:14:04.205182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.696 qpair failed and we were unable to recover it. 00:25:04.696 [2024-07-12 17:14:04.205331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.696 [2024-07-12 17:14:04.205356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.696 qpair failed and we were unable to recover it. 
00:25:04.696 [2024-07-12 17:14:04.205540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.696 [2024-07-12 17:14:04.205565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.696 qpair failed and we were unable to recover it. 00:25:04.696 [2024-07-12 17:14:04.205745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.696 [2024-07-12 17:14:04.205771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.696 qpair failed and we were unable to recover it. 00:25:04.696 [2024-07-12 17:14:04.205869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.696 [2024-07-12 17:14:04.205899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.696 qpair failed and we were unable to recover it. 00:25:04.696 [2024-07-12 17:14:04.206047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.696 [2024-07-12 17:14:04.206081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.696 qpair failed and we were unable to recover it. 00:25:04.696 [2024-07-12 17:14:04.206246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.696 [2024-07-12 17:14:04.206275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.696 qpair failed and we were unable to recover it. 00:25:04.696 [2024-07-12 17:14:04.206436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.696 [2024-07-12 17:14:04.206461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.696 qpair failed and we were unable to recover it. 00:25:04.696 [2024-07-12 17:14:04.206603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.696 [2024-07-12 17:14:04.206641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.696 qpair failed and we were unable to recover it. 00:25:04.696 [2024-07-12 17:14:04.206768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.696 [2024-07-12 17:14:04.206795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.696 qpair failed and we were unable to recover it. 00:25:04.696 [2024-07-12 17:14:04.206919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.696 [2024-07-12 17:14:04.206945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.696 qpair failed and we were unable to recover it. 00:25:04.696 [2024-07-12 17:14:04.207090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.696 [2024-07-12 17:14:04.207115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.696 qpair failed and we were unable to recover it. 
00:25:04.696 [2024-07-12 17:14:04.207226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.696 [2024-07-12 17:14:04.207267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.696 qpair failed and we were unable to recover it. 00:25:04.696 [2024-07-12 17:14:04.207375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.697 [2024-07-12 17:14:04.207401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.697 qpair failed and we were unable to recover it. 00:25:04.697 [2024-07-12 17:14:04.208334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.697 [2024-07-12 17:14:04.208368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.697 qpair failed and we were unable to recover it. 00:25:04.697 [2024-07-12 17:14:04.208557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.697 [2024-07-12 17:14:04.208598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.697 qpair failed and we were unable to recover it. 00:25:04.697 [2024-07-12 17:14:04.208761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.697 [2024-07-12 17:14:04.208802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.697 qpair failed and we were unable to recover it. 00:25:04.697 [2024-07-12 17:14:04.208904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.697 [2024-07-12 17:14:04.208931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.697 qpair failed and we were unable to recover it. 00:25:04.697 [2024-07-12 17:14:04.209058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.697 [2024-07-12 17:14:04.209083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.697 qpair failed and we were unable to recover it. 00:25:04.697 [2024-07-12 17:14:04.209190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.697 [2024-07-12 17:14:04.209216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.697 qpair failed and we were unable to recover it. 00:25:04.697 [2024-07-12 17:14:04.209379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.697 [2024-07-12 17:14:04.209405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.697 qpair failed and we were unable to recover it. 00:25:04.697 [2024-07-12 17:14:04.209556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.697 [2024-07-12 17:14:04.209581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.697 qpair failed and we were unable to recover it. 
00:25:04.697 [2024-07-12 17:14:04.209700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.697 [2024-07-12 17:14:04.209725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.697 qpair failed and we were unable to recover it. 00:25:04.697 [2024-07-12 17:14:04.209849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.697 [2024-07-12 17:14:04.209875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.697 qpair failed and we were unable to recover it. 00:25:04.697 [2024-07-12 17:14:04.209972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.697 [2024-07-12 17:14:04.209997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.697 qpair failed and we were unable to recover it. 00:25:04.697 [2024-07-12 17:14:04.210091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.697 [2024-07-12 17:14:04.210117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.697 qpair failed and we were unable to recover it. 00:25:04.697 [2024-07-12 17:14:04.210236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.697 [2024-07-12 17:14:04.210262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.697 qpair failed and we were unable to recover it. 00:25:04.697 [2024-07-12 17:14:04.210366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.697 [2024-07-12 17:14:04.210391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.697 qpair failed and we were unable to recover it. 00:25:04.697 [2024-07-12 17:14:04.210510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.697 [2024-07-12 17:14:04.210535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.697 qpair failed and we were unable to recover it. 00:25:04.697 [2024-07-12 17:14:04.210650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.697 [2024-07-12 17:14:04.210675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.697 qpair failed and we were unable to recover it. 00:25:04.697 [2024-07-12 17:14:04.210804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.697 [2024-07-12 17:14:04.210830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.697 qpair failed and we were unable to recover it. 00:25:04.697 [2024-07-12 17:14:04.210942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.697 [2024-07-12 17:14:04.210968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.697 qpair failed and we were unable to recover it. 
00:25:04.697 [2024-07-12 17:14:04.211101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.697 [2024-07-12 17:14:04.211126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.697 qpair failed and we were unable to recover it. 00:25:04.697 [2024-07-12 17:14:04.211231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.697 [2024-07-12 17:14:04.211257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.697 qpair failed and we were unable to recover it. 00:25:04.697 [2024-07-12 17:14:04.211355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.697 [2024-07-12 17:14:04.211381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.697 qpair failed and we were unable to recover it. 00:25:04.697 [2024-07-12 17:14:04.211504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.697 [2024-07-12 17:14:04.211531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.697 qpair failed and we were unable to recover it. 00:25:04.697 [2024-07-12 17:14:04.211642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.697 [2024-07-12 17:14:04.211667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.697 qpair failed and we were unable to recover it. 00:25:04.697 [2024-07-12 17:14:04.211786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.697 [2024-07-12 17:14:04.211814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.697 qpair failed and we were unable to recover it. 00:25:04.697 [2024-07-12 17:14:04.211908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.697 [2024-07-12 17:14:04.211934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.697 qpair failed and we were unable to recover it. 00:25:04.697 [2024-07-12 17:14:04.212069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.697 [2024-07-12 17:14:04.212095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.697 qpair failed and we were unable to recover it. 00:25:04.697 [2024-07-12 17:14:04.212249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.697 [2024-07-12 17:14:04.212274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.697 qpair failed and we were unable to recover it. 00:25:04.697 [2024-07-12 17:14:04.212415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.697 [2024-07-12 17:14:04.212441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.697 qpair failed and we were unable to recover it. 
00:25:04.697 [2024-07-12 17:14:04.212530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.697 [2024-07-12 17:14:04.212556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.697 qpair failed and we were unable to recover it. 00:25:04.697 [2024-07-12 17:14:04.212704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.697 [2024-07-12 17:14:04.212749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.697 qpair failed and we were unable to recover it. 00:25:04.697 [2024-07-12 17:14:04.212856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.697 [2024-07-12 17:14:04.212882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.697 qpair failed and we were unable to recover it. 00:25:04.697 [2024-07-12 17:14:04.213003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.697 [2024-07-12 17:14:04.213044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.697 qpair failed and we were unable to recover it. 00:25:04.697 [2024-07-12 17:14:04.213698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.697 [2024-07-12 17:14:04.213752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.697 qpair failed and we were unable to recover it. 00:25:04.697 [2024-07-12 17:14:04.213871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.697 [2024-07-12 17:14:04.213896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.697 qpair failed and we were unable to recover it. 00:25:04.697 [2024-07-12 17:14:04.214556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.697 [2024-07-12 17:14:04.214582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.697 qpair failed and we were unable to recover it. 00:25:04.697 [2024-07-12 17:14:04.214728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.697 [2024-07-12 17:14:04.214759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.697 qpair failed and we were unable to recover it. 00:25:04.697 [2024-07-12 17:14:04.214876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.698 [2024-07-12 17:14:04.214901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.698 qpair failed and we were unable to recover it. 00:25:04.698 [2024-07-12 17:14:04.215009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.698 [2024-07-12 17:14:04.215049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.698 qpair failed and we were unable to recover it. 
00:25:04.698 [2024-07-12 17:14:04.215181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.698 [2024-07-12 17:14:04.215206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.698 qpair failed and we were unable to recover it. 00:25:04.698 [2024-07-12 17:14:04.215334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.698 [2024-07-12 17:14:04.215359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.698 qpair failed and we were unable to recover it. 00:25:04.698 [2024-07-12 17:14:04.215480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.698 [2024-07-12 17:14:04.215505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.698 qpair failed and we were unable to recover it. 00:25:04.698 [2024-07-12 17:14:04.215627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.698 [2024-07-12 17:14:04.215653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.698 qpair failed and we were unable to recover it. 00:25:04.698 [2024-07-12 17:14:04.215775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.698 [2024-07-12 17:14:04.215802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.698 qpair failed and we were unable to recover it. 00:25:04.698 [2024-07-12 17:14:04.215904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.698 [2024-07-12 17:14:04.215930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.698 qpair failed and we were unable to recover it. 00:25:04.698 [2024-07-12 17:14:04.216046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.698 [2024-07-12 17:14:04.216071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.698 qpair failed and we were unable to recover it. 00:25:04.698 [2024-07-12 17:14:04.216198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.698 [2024-07-12 17:14:04.216223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.698 qpair failed and we were unable to recover it. 00:25:04.698 [2024-07-12 17:14:04.216356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.698 [2024-07-12 17:14:04.216381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.698 qpair failed and we were unable to recover it. 00:25:04.698 [2024-07-12 17:14:04.216495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.698 [2024-07-12 17:14:04.216520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.698 qpair failed and we were unable to recover it. 
00:25:04.698 [2024-07-12 17:14:04.216619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.698 [2024-07-12 17:14:04.216645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.698 qpair failed and we were unable to recover it. 00:25:04.698 [2024-07-12 17:14:04.216810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.698 [2024-07-12 17:14:04.216836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.698 qpair failed and we were unable to recover it. 00:25:04.698 [2024-07-12 17:14:04.216952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.698 [2024-07-12 17:14:04.216978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.698 qpair failed and we were unable to recover it. 00:25:04.698 [2024-07-12 17:14:04.217092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.698 [2024-07-12 17:14:04.217117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.698 qpair failed and we were unable to recover it. 00:25:04.698 [2024-07-12 17:14:04.217241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.698 [2024-07-12 17:14:04.217266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.698 qpair failed and we were unable to recover it. 00:25:04.698 [2024-07-12 17:14:04.217383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.698 [2024-07-12 17:14:04.217409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.698 qpair failed and we were unable to recover it. 00:25:04.698 [2024-07-12 17:14:04.217525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.698 [2024-07-12 17:14:04.217552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.698 qpair failed and we were unable to recover it. 00:25:04.698 [2024-07-12 17:14:04.217688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.698 [2024-07-12 17:14:04.217714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.698 qpair failed and we were unable to recover it. 00:25:04.698 [2024-07-12 17:14:04.217823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.698 [2024-07-12 17:14:04.217850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.698 qpair failed and we were unable to recover it. 00:25:04.698 [2024-07-12 17:14:04.217947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.698 [2024-07-12 17:14:04.217973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.698 qpair failed and we were unable to recover it. 
00:25:04.698 EAL: No free 2048 kB hugepages reported on node 1
00:25:04.704 [2024-07-12 17:14:04.249831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.704 [2024-07-12 17:14:04.249857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420
00:25:04.704 qpair failed and we were unable to recover it.
00:25:04.704 [2024-07-12 17:14:04.250007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.704 [2024-07-12 17:14:04.250051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.704 qpair failed and we were unable to recover it. 00:25:04.704 [2024-07-12 17:14:04.250223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.704 [2024-07-12 17:14:04.250247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.704 qpair failed and we were unable to recover it. 00:25:04.704 [2024-07-12 17:14:04.250398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.704 [2024-07-12 17:14:04.250423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.704 qpair failed and we were unable to recover it. 00:25:04.704 [2024-07-12 17:14:04.250564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.704 [2024-07-12 17:14:04.250604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.704 qpair failed and we were unable to recover it. 00:25:04.704 [2024-07-12 17:14:04.250756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.704 [2024-07-12 17:14:04.250783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.704 qpair failed and we were unable to recover it. 00:25:04.704 [2024-07-12 17:14:04.250890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.704 [2024-07-12 17:14:04.250915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.704 qpair failed and we were unable to recover it. 00:25:04.704 [2024-07-12 17:14:04.251059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.704 [2024-07-12 17:14:04.251084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.704 qpair failed and we were unable to recover it. 00:25:04.704 [2024-07-12 17:14:04.251263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.704 [2024-07-12 17:14:04.251288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.704 qpair failed and we were unable to recover it. 00:25:04.704 [2024-07-12 17:14:04.251463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.704 [2024-07-12 17:14:04.251488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.704 qpair failed and we were unable to recover it. 00:25:04.704 [2024-07-12 17:14:04.251657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.704 [2024-07-12 17:14:04.251682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.704 qpair failed and we were unable to recover it. 
00:25:04.704 [2024-07-12 17:14:04.251832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.704 [2024-07-12 17:14:04.251859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.704 qpair failed and we were unable to recover it. 00:25:04.704 [2024-07-12 17:14:04.251987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.704 [2024-07-12 17:14:04.252013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.704 qpair failed and we were unable to recover it. 00:25:04.704 [2024-07-12 17:14:04.252146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.704 [2024-07-12 17:14:04.252185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.704 qpair failed and we were unable to recover it. 00:25:04.704 [2024-07-12 17:14:04.252354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.704 [2024-07-12 17:14:04.252378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.704 qpair failed and we were unable to recover it. 00:25:04.704 [2024-07-12 17:14:04.252515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.704 [2024-07-12 17:14:04.252554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.704 qpair failed and we were unable to recover it. 00:25:04.704 [2024-07-12 17:14:04.252707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.704 [2024-07-12 17:14:04.252771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.704 qpair failed and we were unable to recover it. 00:25:04.704 [2024-07-12 17:14:04.252894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.704 [2024-07-12 17:14:04.252921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.704 qpair failed and we were unable to recover it. 00:25:04.704 [2024-07-12 17:14:04.253052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.704 [2024-07-12 17:14:04.253078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.704 qpair failed and we were unable to recover it. 00:25:04.704 [2024-07-12 17:14:04.253209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.704 [2024-07-12 17:14:04.253249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.704 qpair failed and we were unable to recover it. 00:25:04.704 [2024-07-12 17:14:04.253351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.704 [2024-07-12 17:14:04.253377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.704 qpair failed and we were unable to recover it. 
00:25:04.704 [2024-07-12 17:14:04.253530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.704 [2024-07-12 17:14:04.253555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420
00:25:04.704 qpair failed and we were unable to recover it.
00:25:04.704 [2024-07-12 17:14:04.253699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.704 [2024-07-12 17:14:04.253745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420
00:25:04.704 qpair failed and we were unable to recover it.
00:25:04.704 [2024-07-12 17:14:04.253885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.704 [2024-07-12 17:14:04.253926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420
00:25:04.704 qpair failed and we were unable to recover it.
00:25:04.704 [2024-07-12 17:14:04.254071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.704 [2024-07-12 17:14:04.254096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420
00:25:04.704 qpair failed and we were unable to recover it.
00:25:04.705 [2024-07-12 17:14:04.254218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.705 [2024-07-12 17:14:04.254246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420
00:25:04.705 qpair failed and we were unable to recover it.
00:25:04.705 [2024-07-12 17:14:04.254397] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:25:04.705 [2024-07-12 17:14:04.254410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.705 [2024-07-12 17:14:04.254448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420
00:25:04.705 qpair failed and we were unable to recover it.
00:25:04.705 [2024-07-12 17:14:04.254597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.705 [2024-07-12 17:14:04.254621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420
00:25:04.705 qpair failed and we were unable to recover it.
00:25:04.705 [2024-07-12 17:14:04.254782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.705 [2024-07-12 17:14:04.254809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420
00:25:04.705 qpair failed and we were unable to recover it.
00:25:04.705 [2024-07-12 17:14:04.254973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.705 [2024-07-12 17:14:04.254999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420
00:25:04.705 qpair failed and we were unable to recover it.
00:25:04.705 [2024-07-12 17:14:04.255131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.705 [2024-07-12 17:14:04.255156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.705 qpair failed and we were unable to recover it. 00:25:04.705 [2024-07-12 17:14:04.255294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.705 [2024-07-12 17:14:04.255319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.705 qpair failed and we were unable to recover it. 00:25:04.705 [2024-07-12 17:14:04.255459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.705 [2024-07-12 17:14:04.255484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.705 qpair failed and we were unable to recover it. 00:25:04.705 [2024-07-12 17:14:04.255659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.705 [2024-07-12 17:14:04.255684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.705 qpair failed and we were unable to recover it. 00:25:04.705 [2024-07-12 17:14:04.255782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.705 [2024-07-12 17:14:04.255808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.705 qpair failed and we were unable to recover it. 00:25:04.705 [2024-07-12 17:14:04.255981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.705 [2024-07-12 17:14:04.256007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.705 qpair failed and we were unable to recover it. 00:25:04.705 [2024-07-12 17:14:04.256151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.705 [2024-07-12 17:14:04.256176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.705 qpair failed and we were unable to recover it. 00:25:04.705 [2024-07-12 17:14:04.256308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.705 [2024-07-12 17:14:04.256348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.705 qpair failed and we were unable to recover it. 00:25:04.705 [2024-07-12 17:14:04.256502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.705 [2024-07-12 17:14:04.256531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.705 qpair failed and we were unable to recover it. 00:25:04.705 [2024-07-12 17:14:04.256656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.705 [2024-07-12 17:14:04.256680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.705 qpair failed and we were unable to recover it. 
00:25:04.705 [2024-07-12 17:14:04.256800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.705 [2024-07-12 17:14:04.256827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.705 qpair failed and we were unable to recover it. 00:25:04.705 [2024-07-12 17:14:04.256953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.705 [2024-07-12 17:14:04.256980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.705 qpair failed and we were unable to recover it. 00:25:04.705 [2024-07-12 17:14:04.257128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.705 [2024-07-12 17:14:04.257168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.705 qpair failed and we were unable to recover it. 00:25:04.705 [2024-07-12 17:14:04.257284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.705 [2024-07-12 17:14:04.257323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.705 qpair failed and we were unable to recover it. 00:25:04.705 [2024-07-12 17:14:04.257466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.705 [2024-07-12 17:14:04.257492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.705 qpair failed and we were unable to recover it. 00:25:04.705 [2024-07-12 17:14:04.257581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.705 [2024-07-12 17:14:04.257606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.705 qpair failed and we were unable to recover it. 00:25:04.705 [2024-07-12 17:14:04.257727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.705 [2024-07-12 17:14:04.257757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.705 qpair failed and we were unable to recover it. 00:25:04.705 [2024-07-12 17:14:04.257875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.705 [2024-07-12 17:14:04.257901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.705 qpair failed and we were unable to recover it. 00:25:04.705 [2024-07-12 17:14:04.258030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.705 [2024-07-12 17:14:04.258055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.705 qpair failed and we were unable to recover it. 00:25:04.705 [2024-07-12 17:14:04.258231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.705 [2024-07-12 17:14:04.258255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.705 qpair failed and we were unable to recover it. 
00:25:04.705 [2024-07-12 17:14:04.258389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.705 [2024-07-12 17:14:04.258414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.705 qpair failed and we were unable to recover it. 00:25:04.705 [2024-07-12 17:14:04.258526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.705 [2024-07-12 17:14:04.258552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.705 qpair failed and we were unable to recover it. 00:25:04.705 [2024-07-12 17:14:04.258706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.705 [2024-07-12 17:14:04.258754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.705 qpair failed and we were unable to recover it. 00:25:04.705 [2024-07-12 17:14:04.258875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.705 [2024-07-12 17:14:04.258901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.705 qpair failed and we were unable to recover it. 00:25:04.705 [2024-07-12 17:14:04.259027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.705 [2024-07-12 17:14:04.259053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.705 qpair failed and we were unable to recover it. 00:25:04.705 [2024-07-12 17:14:04.259216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.705 [2024-07-12 17:14:04.259254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.705 qpair failed and we were unable to recover it. 00:25:04.705 [2024-07-12 17:14:04.259434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.705 [2024-07-12 17:14:04.259459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.705 qpair failed and we were unable to recover it. 00:25:04.705 [2024-07-12 17:14:04.259674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.705 [2024-07-12 17:14:04.259698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.705 qpair failed and we were unable to recover it. 00:25:04.706 [2024-07-12 17:14:04.259812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.706 [2024-07-12 17:14:04.259843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.706 qpair failed and we were unable to recover it. 00:25:04.706 [2024-07-12 17:14:04.259984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.706 [2024-07-12 17:14:04.260010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.706 qpair failed and we were unable to recover it. 
00:25:04.706 [2024-07-12 17:14:04.260149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.706 [2024-07-12 17:14:04.260174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.706 qpair failed and we were unable to recover it. 00:25:04.706 [2024-07-12 17:14:04.260347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.706 [2024-07-12 17:14:04.260371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.706 qpair failed and we were unable to recover it. 00:25:04.706 [2024-07-12 17:14:04.260511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.706 [2024-07-12 17:14:04.260536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.706 qpair failed and we were unable to recover it. 00:25:04.706 [2024-07-12 17:14:04.260678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.706 [2024-07-12 17:14:04.260703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.706 qpair failed and we were unable to recover it. 00:25:04.706 [2024-07-12 17:14:04.260874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.706 [2024-07-12 17:14:04.260899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.706 qpair failed and we were unable to recover it. 00:25:04.706 [2024-07-12 17:14:04.261055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.706 [2024-07-12 17:14:04.261096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.706 qpair failed and we were unable to recover it. 00:25:04.706 [2024-07-12 17:14:04.261253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.706 [2024-07-12 17:14:04.261278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.706 qpair failed and we were unable to recover it. 00:25:04.706 [2024-07-12 17:14:04.261378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.706 [2024-07-12 17:14:04.261403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.706 qpair failed and we were unable to recover it. 00:25:04.706 [2024-07-12 17:14:04.261570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.706 [2024-07-12 17:14:04.261595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.706 qpair failed and we were unable to recover it. 00:25:04.706 [2024-07-12 17:14:04.261760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.706 [2024-07-12 17:14:04.261787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.706 qpair failed and we were unable to recover it. 
00:25:04.706 [2024-07-12 17:14:04.261923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.706 [2024-07-12 17:14:04.261949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.706 qpair failed and we were unable to recover it. 00:25:04.706 [2024-07-12 17:14:04.262074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.706 [2024-07-12 17:14:04.262099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.706 qpair failed and we were unable to recover it. 00:25:04.706 [2024-07-12 17:14:04.262271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.706 [2024-07-12 17:14:04.262310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.706 qpair failed and we were unable to recover it. 00:25:04.706 [2024-07-12 17:14:04.262468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.706 [2024-07-12 17:14:04.262492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.706 qpair failed and we were unable to recover it. 00:25:04.706 [2024-07-12 17:14:04.262634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.706 [2024-07-12 17:14:04.262674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.706 qpair failed and we were unable to recover it. 00:25:04.706 [2024-07-12 17:14:04.262813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.706 [2024-07-12 17:14:04.262839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.706 qpair failed and we were unable to recover it. 00:25:04.706 [2024-07-12 17:14:04.262968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.706 [2024-07-12 17:14:04.262994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.706 qpair failed and we were unable to recover it. 00:25:04.706 [2024-07-12 17:14:04.263122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.706 [2024-07-12 17:14:04.263147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.706 qpair failed and we were unable to recover it. 00:25:04.706 [2024-07-12 17:14:04.263262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.706 [2024-07-12 17:14:04.263291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.706 qpair failed and we were unable to recover it. 00:25:04.706 [2024-07-12 17:14:04.263407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.706 [2024-07-12 17:14:04.263432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.706 qpair failed and we were unable to recover it. 
00:25:04.706 [2024-07-12 17:14:04.263578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.706 [2024-07-12 17:14:04.263603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.706 qpair failed and we were unable to recover it. 00:25:04.706 [2024-07-12 17:14:04.263775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.706 [2024-07-12 17:14:04.263801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.706 qpair failed and we were unable to recover it. 00:25:04.706 [2024-07-12 17:14:04.263942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.706 [2024-07-12 17:14:04.263968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.706 qpair failed and we were unable to recover it. 00:25:04.706 [2024-07-12 17:14:04.264108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.706 [2024-07-12 17:14:04.264133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.706 qpair failed and we were unable to recover it. 00:25:04.706 [2024-07-12 17:14:04.264294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.706 [2024-07-12 17:14:04.264333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.706 qpair failed and we were unable to recover it. 00:25:04.706 [2024-07-12 17:14:04.264499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.706 [2024-07-12 17:14:04.264539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.706 qpair failed and we were unable to recover it. 00:25:04.706 [2024-07-12 17:14:04.264667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.706 [2024-07-12 17:14:04.264706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.706 qpair failed and we were unable to recover it. 00:25:04.706 [2024-07-12 17:14:04.264852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.706 [2024-07-12 17:14:04.264878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.706 qpair failed and we were unable to recover it. 00:25:04.706 [2024-07-12 17:14:04.265002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.706 [2024-07-12 17:14:04.265027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.706 qpair failed and we were unable to recover it. 00:25:04.706 [2024-07-12 17:14:04.265167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.706 [2024-07-12 17:14:04.265207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.706 qpair failed and we were unable to recover it. 
00:25:04.706 [2024-07-12 17:14:04.265352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.706 [2024-07-12 17:14:04.265392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.706 qpair failed and we were unable to recover it. 00:25:04.706 [2024-07-12 17:14:04.265580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.706 [2024-07-12 17:14:04.265605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.706 qpair failed and we were unable to recover it. 00:25:04.706 [2024-07-12 17:14:04.265793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.706 [2024-07-12 17:14:04.265819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.706 qpair failed and we were unable to recover it. 00:25:04.706 [2024-07-12 17:14:04.265977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.706 [2024-07-12 17:14:04.266003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.706 qpair failed and we were unable to recover it. 00:25:04.706 [2024-07-12 17:14:04.266177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.706 [2024-07-12 17:14:04.266201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.706 qpair failed and we were unable to recover it. 00:25:04.706 [2024-07-12 17:14:04.266353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.706 [2024-07-12 17:14:04.266378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.706 qpair failed and we were unable to recover it. 00:25:04.706 [2024-07-12 17:14:04.266564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.706 [2024-07-12 17:14:04.266588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.706 qpair failed and we were unable to recover it. 00:25:04.706 [2024-07-12 17:14:04.266704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.706 [2024-07-12 17:14:04.266750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.706 qpair failed and we were unable to recover it. 00:25:04.707 [2024-07-12 17:14:04.266886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.707 [2024-07-12 17:14:04.266912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.707 qpair failed and we were unable to recover it. 00:25:04.707 [2024-07-12 17:14:04.267074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.707 [2024-07-12 17:14:04.267099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.707 qpair failed and we were unable to recover it. 
00:25:04.707 [2024-07-12 17:14:04.267243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.707 [2024-07-12 17:14:04.267267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.707 qpair failed and we were unable to recover it. 00:25:04.707 [2024-07-12 17:14:04.267449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.707 [2024-07-12 17:14:04.267474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.707 qpair failed and we were unable to recover it. 00:25:04.707 [2024-07-12 17:14:04.267665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.707 [2024-07-12 17:14:04.267689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.707 qpair failed and we were unable to recover it. 00:25:04.707 [2024-07-12 17:14:04.267886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.707 [2024-07-12 17:14:04.267920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.707 qpair failed and we were unable to recover it. 00:25:04.707 [2024-07-12 17:14:04.268069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.707 [2024-07-12 17:14:04.268093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.707 qpair failed and we were unable to recover it. 00:25:04.707 [2024-07-12 17:14:04.268276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.707 [2024-07-12 17:14:04.268301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.707 qpair failed and we were unable to recover it. 00:25:04.707 [2024-07-12 17:14:04.268461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.707 [2024-07-12 17:14:04.268486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.707 qpair failed and we were unable to recover it. 00:25:04.707 [2024-07-12 17:14:04.268629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.707 [2024-07-12 17:14:04.268668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.707 qpair failed and we were unable to recover it. 00:25:04.707 [2024-07-12 17:14:04.268842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.707 [2024-07-12 17:14:04.268868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.707 qpair failed and we were unable to recover it. 00:25:04.707 [2024-07-12 17:14:04.269005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.707 [2024-07-12 17:14:04.269031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.707 qpair failed and we were unable to recover it. 
00:25:04.707 [2024-07-12 17:14:04.269236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.707 [2024-07-12 17:14:04.269261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.707 qpair failed and we were unable to recover it. 00:25:04.707 [2024-07-12 17:14:04.269437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.707 [2024-07-12 17:14:04.269461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.707 qpair failed and we were unable to recover it. 00:25:04.707 [2024-07-12 17:14:04.269620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.707 [2024-07-12 17:14:04.269644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.707 qpair failed and we were unable to recover it. 00:25:04.707 [2024-07-12 17:14:04.269769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.707 [2024-07-12 17:14:04.269794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.707 qpair failed and we were unable to recover it. 00:25:04.707 [2024-07-12 17:14:04.269940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.707 [2024-07-12 17:14:04.269965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.707 qpair failed and we were unable to recover it. 00:25:04.707 [2024-07-12 17:14:04.270151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.707 [2024-07-12 17:14:04.270176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.707 qpair failed and we were unable to recover it. 00:25:04.707 [2024-07-12 17:14:04.270374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.707 [2024-07-12 17:14:04.270398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.707 qpair failed and we were unable to recover it. 00:25:04.707 [2024-07-12 17:14:04.270550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.707 [2024-07-12 17:14:04.270585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.707 qpair failed and we were unable to recover it. 00:25:04.707 [2024-07-12 17:14:04.270783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.707 [2024-07-12 17:14:04.270827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.707 qpair failed and we were unable to recover it. 00:25:04.707 [2024-07-12 17:14:04.270973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.707 [2024-07-12 17:14:04.270998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.707 qpair failed and we were unable to recover it. 
00:25:04.707 [2024-07-12 17:14:04.271129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.707 [2024-07-12 17:14:04.271154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.707 qpair failed and we were unable to recover it. 00:25:04.707 [2024-07-12 17:14:04.271324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.707 [2024-07-12 17:14:04.271348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.707 qpair failed and we were unable to recover it. 00:25:04.707 [2024-07-12 17:14:04.271506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.707 [2024-07-12 17:14:04.271541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.707 qpair failed and we were unable to recover it. 00:25:04.707 [2024-07-12 17:14:04.271723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.707 [2024-07-12 17:14:04.271768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.707 qpair failed and we were unable to recover it. 00:25:04.707 [2024-07-12 17:14:04.271910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.707 [2024-07-12 17:14:04.271935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.707 qpair failed and we were unable to recover it. 00:25:04.707 [2024-07-12 17:14:04.272098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.707 [2024-07-12 17:14:04.272122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.707 qpair failed and we were unable to recover it. 00:25:04.707 [2024-07-12 17:14:04.272278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.707 [2024-07-12 17:14:04.272302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.707 qpair failed and we were unable to recover it. 00:25:04.707 [2024-07-12 17:14:04.272483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.707 [2024-07-12 17:14:04.272507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.707 qpair failed and we were unable to recover it. 00:25:04.707 [2024-07-12 17:14:04.272685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.707 [2024-07-12 17:14:04.272709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.707 qpair failed and we were unable to recover it. 00:25:04.707 [2024-07-12 17:14:04.272873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.707 [2024-07-12 17:14:04.272907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.707 qpair failed and we were unable to recover it. 
00:25:04.707 [2024-07-12 17:14:04.273049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.707 [2024-07-12 17:14:04.273074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.707 qpair failed and we were unable to recover it. 00:25:04.707 [2024-07-12 17:14:04.273302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.707 [2024-07-12 17:14:04.273326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.707 qpair failed and we were unable to recover it. 00:25:04.707 [2024-07-12 17:14:04.273450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.707 [2024-07-12 17:14:04.273490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.707 qpair failed and we were unable to recover it. 00:25:04.707 [2024-07-12 17:14:04.273628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.707 [2024-07-12 17:14:04.273653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.707 qpair failed and we were unable to recover it. 00:25:04.707 [2024-07-12 17:14:04.273849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.707 [2024-07-12 17:14:04.273876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.707 qpair failed and we were unable to recover it. 00:25:04.707 [2024-07-12 17:14:04.274065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.707 [2024-07-12 17:14:04.274090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.707 qpair failed and we were unable to recover it. 00:25:04.707 [2024-07-12 17:14:04.274293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.707 [2024-07-12 17:14:04.274317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.707 qpair failed and we were unable to recover it. 00:25:04.707 [2024-07-12 17:14:04.274448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.707 [2024-07-12 17:14:04.274472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.707 qpair failed and we were unable to recover it. 00:25:04.707 [2024-07-12 17:14:04.274615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.708 [2024-07-12 17:14:04.274640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.708 qpair failed and we were unable to recover it. 00:25:04.708 [2024-07-12 17:14:04.274788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.708 [2024-07-12 17:14:04.274829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.708 qpair failed and we were unable to recover it. 
00:25:04.708 [2024-07-12 17:14:04.274957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.708 [2024-07-12 17:14:04.274982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.708 qpair failed and we were unable to recover it. 00:25:04.708 [2024-07-12 17:14:04.275097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.708 [2024-07-12 17:14:04.275123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.708 qpair failed and we were unable to recover it. 00:25:04.708 [2024-07-12 17:14:04.275305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.708 [2024-07-12 17:14:04.275344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.708 qpair failed and we were unable to recover it. 00:25:04.708 [2024-07-12 17:14:04.275530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.708 [2024-07-12 17:14:04.275555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.708 qpair failed and we were unable to recover it. 00:25:04.708 [2024-07-12 17:14:04.275674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.708 [2024-07-12 17:14:04.275714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.708 qpair failed and we were unable to recover it. 00:25:04.708 [2024-07-12 17:14:04.275895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.708 [2024-07-12 17:14:04.275922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.708 qpair failed and we were unable to recover it. 00:25:04.708 [2024-07-12 17:14:04.276056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.708 [2024-07-12 17:14:04.276096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.708 qpair failed and we were unable to recover it. 00:25:04.708 [2024-07-12 17:14:04.276280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.708 [2024-07-12 17:14:04.276304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.708 qpair failed and we were unable to recover it. 00:25:04.708 [2024-07-12 17:14:04.276459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.708 [2024-07-12 17:14:04.276488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.708 qpair failed and we were unable to recover it. 00:25:04.708 [2024-07-12 17:14:04.276659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.708 [2024-07-12 17:14:04.276684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.708 qpair failed and we were unable to recover it. 
00:25:04.708 [2024-07-12 17:14:04.276855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.708 [2024-07-12 17:14:04.276881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.708 qpair failed and we were unable to recover it. 00:25:04.708 [2024-07-12 17:14:04.277024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.708 [2024-07-12 17:14:04.277050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.708 qpair failed and we were unable to recover it. 00:25:04.708 [2024-07-12 17:14:04.277236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.708 [2024-07-12 17:14:04.277260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.708 qpair failed and we were unable to recover it. 00:25:04.708 [2024-07-12 17:14:04.277422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.708 [2024-07-12 17:14:04.277447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.708 qpair failed and we were unable to recover it. 00:25:04.708 [2024-07-12 17:14:04.277618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.708 [2024-07-12 17:14:04.277643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.708 qpair failed and we were unable to recover it. 00:25:04.708 [2024-07-12 17:14:04.277790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.708 [2024-07-12 17:14:04.277837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.708 qpair failed and we were unable to recover it. 00:25:04.708 [2024-07-12 17:14:04.277974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.708 [2024-07-12 17:14:04.278013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.708 qpair failed and we were unable to recover it. 00:25:04.708 [2024-07-12 17:14:04.278164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.708 [2024-07-12 17:14:04.278189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.708 qpair failed and we were unable to recover it. 00:25:04.708 [2024-07-12 17:14:04.278337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.708 [2024-07-12 17:14:04.278366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.708 qpair failed and we were unable to recover it. 00:25:04.708 [2024-07-12 17:14:04.278550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.708 [2024-07-12 17:14:04.278576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.708 qpair failed and we were unable to recover it. 
00:25:04.708 [2024-07-12 17:14:04.278735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.708 [2024-07-12 17:14:04.278765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.708 qpair failed and we were unable to recover it. 00:25:04.708 [2024-07-12 17:14:04.278897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.708 [2024-07-12 17:14:04.278923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.708 qpair failed and we were unable to recover it. 00:25:04.708 [2024-07-12 17:14:04.279096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.708 [2024-07-12 17:14:04.279122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.708 qpair failed and we were unable to recover it. 00:25:04.708 [2024-07-12 17:14:04.279311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.708 [2024-07-12 17:14:04.279336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.708 qpair failed and we were unable to recover it. 00:25:04.708 [2024-07-12 17:14:04.279480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.708 [2024-07-12 17:14:04.279505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.708 qpair failed and we were unable to recover it. 00:25:04.708 [2024-07-12 17:14:04.279661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.708 [2024-07-12 17:14:04.279701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.708 qpair failed and we were unable to recover it. 00:25:04.708 [2024-07-12 17:14:04.279908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.708 [2024-07-12 17:14:04.279934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.708 qpair failed and we were unable to recover it. 00:25:04.708 [2024-07-12 17:14:04.280102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.708 [2024-07-12 17:14:04.280127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.708 qpair failed and we were unable to recover it. 00:25:04.708 [2024-07-12 17:14:04.280286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.708 [2024-07-12 17:14:04.280311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.708 qpair failed and we were unable to recover it. 00:25:04.708 [2024-07-12 17:14:04.280445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.708 [2024-07-12 17:14:04.280486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.708 qpair failed and we were unable to recover it. 
00:25:04.708 [2024-07-12 17:14:04.280672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.708 [2024-07-12 17:14:04.280698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.708 qpair failed and we were unable to recover it. 00:25:04.708 [2024-07-12 17:14:04.280837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.708 [2024-07-12 17:14:04.280864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.708 qpair failed and we were unable to recover it. 00:25:04.708 [2024-07-12 17:14:04.281042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.708 [2024-07-12 17:14:04.281067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.708 qpair failed and we were unable to recover it. 00:25:04.708 [2024-07-12 17:14:04.281228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.708 [2024-07-12 17:14:04.281256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.708 qpair failed and we were unable to recover it. 00:25:04.708 [2024-07-12 17:14:04.281391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.708 [2024-07-12 17:14:04.281440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.708 qpair failed and we were unable to recover it. 00:25:04.708 [2024-07-12 17:14:04.281552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.708 [2024-07-12 17:14:04.281577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.708 qpair failed and we were unable to recover it. 00:25:04.708 [2024-07-12 17:14:04.281734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.708 [2024-07-12 17:14:04.281787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.708 qpair failed and we were unable to recover it. 00:25:04.708 [2024-07-12 17:14:04.281948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.708 [2024-07-12 17:14:04.281974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.708 qpair failed and we were unable to recover it. 00:25:04.708 [2024-07-12 17:14:04.282115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.708 [2024-07-12 17:14:04.282140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.709 qpair failed and we were unable to recover it. 00:25:04.709 [2024-07-12 17:14:04.282261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.709 [2024-07-12 17:14:04.282285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.709 qpair failed and we were unable to recover it. 
00:25:04.709 [2024-07-12 17:14:04.282470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.709 [2024-07-12 17:14:04.282503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.709 qpair failed and we were unable to recover it. 00:25:04.709 [2024-07-12 17:14:04.282658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.709 [2024-07-12 17:14:04.282682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.709 qpair failed and we were unable to recover it. 00:25:04.709 [2024-07-12 17:14:04.282888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.709 [2024-07-12 17:14:04.282914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.709 qpair failed and we were unable to recover it. 00:25:04.709 [2024-07-12 17:14:04.283046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.709 [2024-07-12 17:14:04.283087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.709 qpair failed and we were unable to recover it. 00:25:04.709 [2024-07-12 17:14:04.283223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.709 [2024-07-12 17:14:04.283247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.709 qpair failed and we were unable to recover it. 00:25:04.709 [2024-07-12 17:14:04.283365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.709 [2024-07-12 17:14:04.283390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.709 qpair failed and we were unable to recover it. 00:25:04.709 [2024-07-12 17:14:04.283563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.709 [2024-07-12 17:14:04.283588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.709 qpair failed and we were unable to recover it. 00:25:04.709 [2024-07-12 17:14:04.283701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.709 [2024-07-12 17:14:04.283747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.709 qpair failed and we were unable to recover it. 00:25:04.709 [2024-07-12 17:14:04.283900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.709 [2024-07-12 17:14:04.283926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.709 qpair failed and we were unable to recover it. 00:25:04.709 [2024-07-12 17:14:04.284038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.709 [2024-07-12 17:14:04.284063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.709 qpair failed and we were unable to recover it. 
00:25:04.709 [2024-07-12 17:14:04.284252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.709 [2024-07-12 17:14:04.284276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.709 qpair failed and we were unable to recover it. 00:25:04.709 [2024-07-12 17:14:04.284431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.709 [2024-07-12 17:14:04.284455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.709 qpair failed and we were unable to recover it. 00:25:04.709 [2024-07-12 17:14:04.284606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.709 [2024-07-12 17:14:04.284631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.709 qpair failed and we were unable to recover it. 00:25:04.709 [2024-07-12 17:14:04.284812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.709 [2024-07-12 17:14:04.284839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.709 qpair failed and we were unable to recover it. 00:25:04.709 [2024-07-12 17:14:04.284936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.709 [2024-07-12 17:14:04.284963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.709 qpair failed and we were unable to recover it. 00:25:04.709 [2024-07-12 17:14:04.285122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.709 [2024-07-12 17:14:04.285146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.709 qpair failed and we were unable to recover it. 00:25:04.709 [2024-07-12 17:14:04.285292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.709 [2024-07-12 17:14:04.285316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.709 qpair failed and we were unable to recover it. 00:25:04.709 [2024-07-12 17:14:04.285496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.709 [2024-07-12 17:14:04.285520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.709 qpair failed and we were unable to recover it. 00:25:04.709 [2024-07-12 17:14:04.285683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.709 [2024-07-12 17:14:04.285712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.709 qpair failed and we were unable to recover it. 00:25:04.709 [2024-07-12 17:14:04.285864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.709 [2024-07-12 17:14:04.285890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.709 qpair failed and we were unable to recover it. 
00:25:04.709 [2024-07-12 17:14:04.286022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.709 [2024-07-12 17:14:04.286062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.709 qpair failed and we were unable to recover it. 00:25:04.709 [2024-07-12 17:14:04.286231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.709 [2024-07-12 17:14:04.286255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.709 qpair failed and we were unable to recover it. 00:25:04.709 [2024-07-12 17:14:04.286394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.709 [2024-07-12 17:14:04.286433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.709 qpair failed and we were unable to recover it. 00:25:04.709 [2024-07-12 17:14:04.286555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.709 [2024-07-12 17:14:04.286579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.709 qpair failed and we were unable to recover it. 00:25:04.709 [2024-07-12 17:14:04.286718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.709 [2024-07-12 17:14:04.286749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.709 qpair failed and we were unable to recover it. 00:25:04.709 [2024-07-12 17:14:04.286851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.709 [2024-07-12 17:14:04.286877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.709 qpair failed and we were unable to recover it. 00:25:04.709 [2024-07-12 17:14:04.287000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.709 [2024-07-12 17:14:04.287025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.709 qpair failed and we were unable to recover it. 00:25:04.709 [2024-07-12 17:14:04.287189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.709 [2024-07-12 17:14:04.287213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.709 qpair failed and we were unable to recover it. 00:25:04.709 [2024-07-12 17:14:04.287356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.709 [2024-07-12 17:14:04.287380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.709 qpair failed and we were unable to recover it. 00:25:04.709 [2024-07-12 17:14:04.287518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.709 [2024-07-12 17:14:04.287558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.709 qpair failed and we were unable to recover it. 
00:25:04.709 [2024-07-12 17:14:04.287705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.709 [2024-07-12 17:14:04.287749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.709 qpair failed and we were unable to recover it. 00:25:04.709 [2024-07-12 17:14:04.287885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.709 [2024-07-12 17:14:04.287910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.709 qpair failed and we were unable to recover it. 00:25:04.709 [2024-07-12 17:14:04.288037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.709 [2024-07-12 17:14:04.288079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.709 qpair failed and we were unable to recover it. 00:25:04.709 [2024-07-12 17:14:04.288204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.710 [2024-07-12 17:14:04.288242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.710 qpair failed and we were unable to recover it. 00:25:04.710 [2024-07-12 17:14:04.288370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.710 [2024-07-12 17:14:04.288395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.710 qpair failed and we were unable to recover it. 00:25:04.710 [2024-07-12 17:14:04.288527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.710 [2024-07-12 17:14:04.288551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.710 qpair failed and we were unable to recover it. 00:25:04.710 [2024-07-12 17:14:04.288730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.710 [2024-07-12 17:14:04.288760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.710 qpair failed and we were unable to recover it. 00:25:04.710 [2024-07-12 17:14:04.288929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.710 [2024-07-12 17:14:04.288955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.710 qpair failed and we were unable to recover it. 00:25:04.710 [2024-07-12 17:14:04.289083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.710 [2024-07-12 17:14:04.289123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.710 qpair failed and we were unable to recover it. 00:25:04.710 [2024-07-12 17:14:04.289265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.710 [2024-07-12 17:14:04.289289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.710 qpair failed and we were unable to recover it. 
00:25:04.710 [2024-07-12 17:14:04.289459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.710 [2024-07-12 17:14:04.289484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.710 qpair failed and we were unable to recover it. 00:25:04.710 [2024-07-12 17:14:04.289648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.710 [2024-07-12 17:14:04.289673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.710 qpair failed and we were unable to recover it. 00:25:04.710 [2024-07-12 17:14:04.289817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.710 [2024-07-12 17:14:04.289845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.710 qpair failed and we were unable to recover it. 00:25:04.710 [2024-07-12 17:14:04.289962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.710 [2024-07-12 17:14:04.289988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.710 qpair failed and we were unable to recover it. 00:25:04.710 [2024-07-12 17:14:04.290096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.710 [2024-07-12 17:14:04.290121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.710 qpair failed and we were unable to recover it. 00:25:04.710 [2024-07-12 17:14:04.290269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.710 [2024-07-12 17:14:04.290310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.710 qpair failed and we were unable to recover it. 00:25:04.710 [2024-07-12 17:14:04.290437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.710 [2024-07-12 17:14:04.290462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.710 qpair failed and we were unable to recover it. 00:25:04.710 [2024-07-12 17:14:04.290583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.710 [2024-07-12 17:14:04.290608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.710 qpair failed and we were unable to recover it. 00:25:04.710 [2024-07-12 17:14:04.290757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.710 [2024-07-12 17:14:04.290783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.710 qpair failed and we were unable to recover it. 00:25:04.710 [2024-07-12 17:14:04.290936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.710 [2024-07-12 17:14:04.290962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.710 qpair failed and we were unable to recover it. 
00:25:04.710 [2024-07-12 17:14:04.291073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.710 [2024-07-12 17:14:04.291098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.710 qpair failed and we were unable to recover it. 00:25:04.710 [2024-07-12 17:14:04.291238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.710 [2024-07-12 17:14:04.291262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.710 qpair failed and we were unable to recover it. 00:25:04.710 [2024-07-12 17:14:04.291401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.710 [2024-07-12 17:14:04.291441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.710 qpair failed and we were unable to recover it. 00:25:04.710 [2024-07-12 17:14:04.291620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.710 [2024-07-12 17:14:04.291645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.710 qpair failed and we were unable to recover it. 00:25:04.710 [2024-07-12 17:14:04.291781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.710 [2024-07-12 17:14:04.291807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.710 qpair failed and we were unable to recover it. 00:25:04.710 [2024-07-12 17:14:04.291945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.710 [2024-07-12 17:14:04.291971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.710 qpair failed and we were unable to recover it. 00:25:04.710 [2024-07-12 17:14:04.292127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.710 [2024-07-12 17:14:04.292152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.710 qpair failed and we were unable to recover it. 00:25:04.710 [2024-07-12 17:14:04.292302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.710 [2024-07-12 17:14:04.292326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.710 qpair failed and we were unable to recover it. 00:25:04.710 [2024-07-12 17:14:04.292465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.710 [2024-07-12 17:14:04.292509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.710 qpair failed and we were unable to recover it. 00:25:04.710 [2024-07-12 17:14:04.292657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.710 [2024-07-12 17:14:04.292681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.710 qpair failed and we were unable to recover it. 
00:25:04.710 [2024-07-12 17:14:04.292809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.710 [2024-07-12 17:14:04.292836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.710 qpair failed and we were unable to recover it. 00:25:04.710 [2024-07-12 17:14:04.292961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.710 [2024-07-12 17:14:04.292986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.710 qpair failed and we were unable to recover it. 00:25:04.710 [2024-07-12 17:14:04.293136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.710 [2024-07-12 17:14:04.293176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.710 qpair failed and we were unable to recover it. 00:25:04.710 [2024-07-12 17:14:04.293325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.710 [2024-07-12 17:14:04.293349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.710 qpair failed and we were unable to recover it. 00:25:04.710 [2024-07-12 17:14:04.293492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.710 [2024-07-12 17:14:04.293516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.710 qpair failed and we were unable to recover it. 00:25:04.710 [2024-07-12 17:14:04.293646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.710 [2024-07-12 17:14:04.293671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.710 qpair failed and we were unable to recover it. 00:25:04.710 [2024-07-12 17:14:04.293839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.710 [2024-07-12 17:14:04.293866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.710 qpair failed and we were unable to recover it. 00:25:04.710 [2024-07-12 17:14:04.293998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.710 [2024-07-12 17:14:04.294024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.710 qpair failed and we were unable to recover it. 00:25:04.710 [2024-07-12 17:14:04.294180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.710 [2024-07-12 17:14:04.294204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.710 qpair failed and we were unable to recover it. 00:25:04.710 [2024-07-12 17:14:04.294384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.710 [2024-07-12 17:14:04.294408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.710 qpair failed and we were unable to recover it. 
00:25:04.710 [2024-07-12 17:14:04.294582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.710 [2024-07-12 17:14:04.294606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.710 qpair failed and we were unable to recover it. 00:25:04.710 [2024-07-12 17:14:04.294756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.710 [2024-07-12 17:14:04.294796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.710 qpair failed and we were unable to recover it. 00:25:04.710 [2024-07-12 17:14:04.294934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.710 [2024-07-12 17:14:04.294959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.710 qpair failed and we were unable to recover it. 00:25:04.710 [2024-07-12 17:14:04.295122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.710 [2024-07-12 17:14:04.295161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.711 qpair failed and we were unable to recover it. 00:25:04.711 [2024-07-12 17:14:04.295300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.711 [2024-07-12 17:14:04.295325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.711 qpair failed and we were unable to recover it. 00:25:04.711 [2024-07-12 17:14:04.295500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.711 [2024-07-12 17:14:04.295524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.711 qpair failed and we were unable to recover it. 00:25:04.711 [2024-07-12 17:14:04.295644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.711 [2024-07-12 17:14:04.295668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.711 qpair failed and we were unable to recover it. 00:25:04.711 [2024-07-12 17:14:04.295857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.711 [2024-07-12 17:14:04.295883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.711 qpair failed and we were unable to recover it. 00:25:04.711 [2024-07-12 17:14:04.295995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.711 [2024-07-12 17:14:04.296036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.711 qpair failed and we were unable to recover it. 00:25:04.711 [2024-07-12 17:14:04.296182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.711 [2024-07-12 17:14:04.296212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.711 qpair failed and we were unable to recover it. 
00:25:04.711 [2024-07-12 17:14:04.296371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.711 [2024-07-12 17:14:04.296395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.711 qpair failed and we were unable to recover it. 00:25:04.711 [2024-07-12 17:14:04.296587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.711 [2024-07-12 17:14:04.296612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.711 qpair failed and we were unable to recover it. 00:25:04.711 [2024-07-12 17:14:04.296745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.711 [2024-07-12 17:14:04.296780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.711 qpair failed and we were unable to recover it. 00:25:04.711 [2024-07-12 17:14:04.296902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.711 [2024-07-12 17:14:04.296929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.711 qpair failed and we were unable to recover it. 00:25:04.711 [2024-07-12 17:14:04.297100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.711 [2024-07-12 17:14:04.297139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.711 qpair failed and we were unable to recover it. 00:25:04.711 [2024-07-12 17:14:04.297304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.711 [2024-07-12 17:14:04.297328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.711 qpair failed and we were unable to recover it. 00:25:04.711 [2024-07-12 17:14:04.297468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.711 [2024-07-12 17:14:04.297508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.711 qpair failed and we were unable to recover it. 00:25:04.711 [2024-07-12 17:14:04.297660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.711 [2024-07-12 17:14:04.297683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.711 qpair failed and we were unable to recover it. 00:25:04.711 [2024-07-12 17:14:04.297859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.711 [2024-07-12 17:14:04.297887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.711 qpair failed and we were unable to recover it. 00:25:04.711 [2024-07-12 17:14:04.298004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.711 [2024-07-12 17:14:04.298031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.711 qpair failed and we were unable to recover it. 
00:25:04.711 [2024-07-12 17:14:04.298157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.711 [2024-07-12 17:14:04.298181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.711 qpair failed and we were unable to recover it. 00:25:04.711 [2024-07-12 17:14:04.298365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.711 [2024-07-12 17:14:04.298389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.711 qpair failed and we were unable to recover it. 00:25:04.711 [2024-07-12 17:14:04.298530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.711 [2024-07-12 17:14:04.298555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.711 qpair failed and we were unable to recover it. 00:25:04.711 [2024-07-12 17:14:04.298696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.711 [2024-07-12 17:14:04.298721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.711 qpair failed and we were unable to recover it. 00:25:04.711 [2024-07-12 17:14:04.298911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.711 [2024-07-12 17:14:04.298936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.711 qpair failed and we were unable to recover it. 00:25:04.711 [2024-07-12 17:14:04.299076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.711 [2024-07-12 17:14:04.299115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.711 qpair failed and we were unable to recover it. 00:25:04.711 [2024-07-12 17:14:04.299224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.711 [2024-07-12 17:14:04.299264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.711 qpair failed and we were unable to recover it. 00:25:04.711 [2024-07-12 17:14:04.299364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.711 [2024-07-12 17:14:04.299389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.711 qpair failed and we were unable to recover it. 00:25:04.711 [2024-07-12 17:14:04.299550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.711 [2024-07-12 17:14:04.299579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.711 qpair failed and we were unable to recover it. 00:25:04.711 [2024-07-12 17:14:04.299731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.711 [2024-07-12 17:14:04.299764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.711 qpair failed and we were unable to recover it. 
00:25:04.711 [2024-07-12 17:14:04.299858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.711 [2024-07-12 17:14:04.299884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.711 qpair failed and we were unable to recover it. 00:25:04.711 [2024-07-12 17:14:04.300040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.711 [2024-07-12 17:14:04.300065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.711 qpair failed and we were unable to recover it. 00:25:04.711 [2024-07-12 17:14:04.300225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.711 [2024-07-12 17:14:04.300250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.711 qpair failed and we were unable to recover it. 00:25:04.711 [2024-07-12 17:14:04.300371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.711 [2024-07-12 17:14:04.300395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.711 qpair failed and we were unable to recover it. 00:25:04.711 [2024-07-12 17:14:04.300504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.711 [2024-07-12 17:14:04.300529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.711 qpair failed and we were unable to recover it. 00:25:04.711 [2024-07-12 17:14:04.300668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.711 [2024-07-12 17:14:04.300693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.711 qpair failed and we were unable to recover it. 00:25:04.711 [2024-07-12 17:14:04.300818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.711 [2024-07-12 17:14:04.300844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.711 qpair failed and we were unable to recover it. 00:25:04.711 [2024-07-12 17:14:04.300972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.711 [2024-07-12 17:14:04.300998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.711 qpair failed and we were unable to recover it. 00:25:04.711 [2024-07-12 17:14:04.301174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.711 [2024-07-12 17:14:04.301198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.711 qpair failed and we were unable to recover it. 00:25:04.711 [2024-07-12 17:14:04.301344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.711 [2024-07-12 17:14:04.301368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.711 qpair failed and we were unable to recover it. 
00:25:04.711 [2024-07-12 17:14:04.301548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.711 [2024-07-12 17:14:04.301573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.711 qpair failed and we were unable to recover it. 00:25:04.711 [2024-07-12 17:14:04.301732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.711 [2024-07-12 17:14:04.301763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.711 qpair failed and we were unable to recover it. 00:25:04.711 [2024-07-12 17:14:04.301941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.711 [2024-07-12 17:14:04.301967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.711 qpair failed and we were unable to recover it. 00:25:04.711 [2024-07-12 17:14:04.302066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.711 [2024-07-12 17:14:04.302092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.711 qpair failed and we were unable to recover it. 00:25:04.712 [2024-07-12 17:14:04.302227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.712 [2024-07-12 17:14:04.302252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.712 qpair failed and we were unable to recover it. 00:25:04.712 [2024-07-12 17:14:04.302354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.712 [2024-07-12 17:14:04.302379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.712 qpair failed and we were unable to recover it. 00:25:04.712 [2024-07-12 17:14:04.302517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.712 [2024-07-12 17:14:04.302541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.712 qpair failed and we were unable to recover it. 00:25:04.712 [2024-07-12 17:14:04.302703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.712 [2024-07-12 17:14:04.302750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.712 qpair failed and we were unable to recover it. 00:25:04.712 [2024-07-12 17:14:04.302866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.712 [2024-07-12 17:14:04.302892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.712 qpair failed and we were unable to recover it. 00:25:04.712 [2024-07-12 17:14:04.303027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.712 [2024-07-12 17:14:04.303052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.712 qpair failed and we were unable to recover it. 
00:25:04.712 [2024-07-12 17:14:04.303225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.712 [2024-07-12 17:14:04.303249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.712 qpair failed and we were unable to recover it. 00:25:04.712 [2024-07-12 17:14:04.303347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.712 [2024-07-12 17:14:04.303372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.712 qpair failed and we were unable to recover it. 00:25:04.712 [2024-07-12 17:14:04.303484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.712 [2024-07-12 17:14:04.303509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.712 qpair failed and we were unable to recover it. 00:25:04.712 [2024-07-12 17:14:04.303638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.712 [2024-07-12 17:14:04.303662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.712 qpair failed and we were unable to recover it. 00:25:04.712 [2024-07-12 17:14:04.303810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.712 [2024-07-12 17:14:04.303851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.712 qpair failed and we were unable to recover it. 00:25:04.712 [2024-07-12 17:14:04.303995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.712 [2024-07-12 17:14:04.304036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.712 qpair failed and we were unable to recover it. 00:25:04.712 [2024-07-12 17:14:04.304194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.712 [2024-07-12 17:14:04.304233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.712 qpair failed and we were unable to recover it. 00:25:04.712 [2024-07-12 17:14:04.304351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.712 [2024-07-12 17:14:04.304375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.712 qpair failed and we were unable to recover it. 00:25:04.712 [2024-07-12 17:14:04.304518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.712 [2024-07-12 17:14:04.304556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.712 qpair failed and we were unable to recover it. 00:25:04.712 [2024-07-12 17:14:04.304691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.712 [2024-07-12 17:14:04.304729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.712 qpair failed and we were unable to recover it. 
00:25:04.712 [2024-07-12 17:14:04.304875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.712 [2024-07-12 17:14:04.304901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.712 qpair failed and we were unable to recover it. 00:25:04.712 [2024-07-12 17:14:04.305019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.712 [2024-07-12 17:14:04.305044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.712 qpair failed and we were unable to recover it. 00:25:04.712 [2024-07-12 17:14:04.305219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.712 [2024-07-12 17:14:04.305243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.712 qpair failed and we were unable to recover it. 00:25:04.712 [2024-07-12 17:14:04.305419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.712 [2024-07-12 17:14:04.305443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.712 qpair failed and we were unable to recover it. 00:25:04.712 [2024-07-12 17:14:04.305608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.712 [2024-07-12 17:14:04.305632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.712 qpair failed and we were unable to recover it. 00:25:04.712 [2024-07-12 17:14:04.305761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.712 [2024-07-12 17:14:04.305788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.712 qpair failed and we were unable to recover it. 00:25:04.712 [2024-07-12 17:14:04.305877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.712 [2024-07-12 17:14:04.305902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.712 qpair failed and we were unable to recover it. 00:25:04.712 [2024-07-12 17:14:04.305997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.712 [2024-07-12 17:14:04.306023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.712 qpair failed and we were unable to recover it. 00:25:04.712 [2024-07-12 17:14:04.306152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.712 [2024-07-12 17:14:04.306181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.712 qpair failed and we were unable to recover it. 00:25:04.712 [2024-07-12 17:14:04.306317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.712 [2024-07-12 17:14:04.306341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.712 qpair failed and we were unable to recover it. 
00:25:04.712 [2024-07-12 17:14:04.306518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.712 [2024-07-12 17:14:04.306542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.712 qpair failed and we were unable to recover it. 00:25:04.712 [2024-07-12 17:14:04.306693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.712 [2024-07-12 17:14:04.306717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.712 qpair failed and we were unable to recover it. 00:25:04.712 [2024-07-12 17:14:04.306841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.712 [2024-07-12 17:14:04.306867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.712 qpair failed and we were unable to recover it. 00:25:04.712 [2024-07-12 17:14:04.306990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.712 [2024-07-12 17:14:04.307016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.712 qpair failed and we were unable to recover it. 00:25:04.712 [2024-07-12 17:14:04.307133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.712 [2024-07-12 17:14:04.307159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.712 qpair failed and we were unable to recover it. 00:25:04.712 [2024-07-12 17:14:04.307342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.712 [2024-07-12 17:14:04.307367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.712 qpair failed and we were unable to recover it. 00:25:04.712 [2024-07-12 17:14:04.307528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.712 [2024-07-12 17:14:04.307569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.712 qpair failed and we were unable to recover it. 00:25:04.712 [2024-07-12 17:14:04.307695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.712 [2024-07-12 17:14:04.307745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.712 qpair failed and we were unable to recover it. 00:25:04.712 [2024-07-12 17:14:04.307891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.712 [2024-07-12 17:14:04.307918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.712 qpair failed and we were unable to recover it. 00:25:04.712 [2024-07-12 17:14:04.308054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.712 [2024-07-12 17:14:04.308080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.712 qpair failed and we were unable to recover it. 
00:25:04.712 [2024-07-12 17:14:04.308267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.712 [2024-07-12 17:14:04.308291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.712 qpair failed and we were unable to recover it. 00:25:04.712 [2024-07-12 17:14:04.308413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.712 [2024-07-12 17:14:04.308438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.712 qpair failed and we were unable to recover it. 00:25:04.712 [2024-07-12 17:14:04.308598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.712 [2024-07-12 17:14:04.308622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.712 qpair failed and we were unable to recover it. 00:25:04.712 [2024-07-12 17:14:04.308776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.712 [2024-07-12 17:14:04.308803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.712 qpair failed and we were unable to recover it. 00:25:04.712 [2024-07-12 17:14:04.308953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.713 [2024-07-12 17:14:04.308979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.713 qpair failed and we were unable to recover it. 00:25:04.713 [2024-07-12 17:14:04.309104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.713 [2024-07-12 17:14:04.309127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.713 qpair failed and we were unable to recover it. 00:25:04.713 [2024-07-12 17:14:04.309300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.713 [2024-07-12 17:14:04.309324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.713 qpair failed and we were unable to recover it. 00:25:04.713 [2024-07-12 17:14:04.309492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.713 [2024-07-12 17:14:04.309516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.713 qpair failed and we were unable to recover it. 00:25:04.713 [2024-07-12 17:14:04.309648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.713 [2024-07-12 17:14:04.309687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.713 qpair failed and we were unable to recover it. 00:25:04.713 [2024-07-12 17:14:04.309805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.713 [2024-07-12 17:14:04.309832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.713 qpair failed and we were unable to recover it. 
00:25:04.713 [2024-07-12 17:14:04.309952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.713 [2024-07-12 17:14:04.309991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.713 qpair failed and we were unable to recover it. 00:25:04.713 [2024-07-12 17:14:04.310087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.713 [2024-07-12 17:14:04.310112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.713 qpair failed and we were unable to recover it. 00:25:04.713 [2024-07-12 17:14:04.310255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.713 [2024-07-12 17:14:04.310280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.713 qpair failed and we were unable to recover it. 00:25:04.713 [2024-07-12 17:14:04.310418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.713 [2024-07-12 17:14:04.310442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.713 qpair failed and we were unable to recover it. 00:25:04.713 [2024-07-12 17:14:04.310613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.713 [2024-07-12 17:14:04.310637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.713 qpair failed and we were unable to recover it. 00:25:04.713 [2024-07-12 17:14:04.310790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.713 [2024-07-12 17:14:04.310817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.713 qpair failed and we were unable to recover it. 00:25:04.713 [2024-07-12 17:14:04.310938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.713 [2024-07-12 17:14:04.310964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.713 qpair failed and we were unable to recover it. 00:25:04.713 [2024-07-12 17:14:04.311087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.713 [2024-07-12 17:14:04.311111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.713 qpair failed and we were unable to recover it. 00:25:04.713 [2024-07-12 17:14:04.311242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.713 [2024-07-12 17:14:04.311281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.713 qpair failed and we were unable to recover it. 00:25:04.713 [2024-07-12 17:14:04.311414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.713 [2024-07-12 17:14:04.311440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.713 qpair failed and we were unable to recover it. 
00:25:04.713 [2024-07-12 17:14:04.311601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.713 [2024-07-12 17:14:04.311626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.713 qpair failed and we were unable to recover it. 00:25:04.713 [2024-07-12 17:14:04.311749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.713 [2024-07-12 17:14:04.311775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.713 qpair failed and we were unable to recover it. 00:25:04.713 [2024-07-12 17:14:04.311900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.713 [2024-07-12 17:14:04.311926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.713 qpair failed and we were unable to recover it. 00:25:04.713 [2024-07-12 17:14:04.312060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.713 [2024-07-12 17:14:04.312101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.713 qpair failed and we were unable to recover it. 00:25:04.713 [2024-07-12 17:14:04.312270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.713 [2024-07-12 17:14:04.312294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.713 qpair failed and we were unable to recover it. 00:25:04.713 [2024-07-12 17:14:04.312442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.713 [2024-07-12 17:14:04.312466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.713 qpair failed and we were unable to recover it. 00:25:04.713 [2024-07-12 17:14:04.312609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.713 [2024-07-12 17:14:04.312633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.713 qpair failed and we were unable to recover it. 00:25:04.713 [2024-07-12 17:14:04.312797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.713 [2024-07-12 17:14:04.312824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.713 qpair failed and we were unable to recover it. 00:25:04.713 [2024-07-12 17:14:04.312925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.713 [2024-07-12 17:14:04.312951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.713 qpair failed and we were unable to recover it. 00:25:04.713 [2024-07-12 17:14:04.313077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.713 [2024-07-12 17:14:04.313116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.713 qpair failed and we were unable to recover it. 
00:25:04.713 [2024-07-12 17:14:04.313289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.713 [2024-07-12 17:14:04.313313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.713 qpair failed and we were unable to recover it. 00:25:04.713 [2024-07-12 17:14:04.313491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.713 [2024-07-12 17:14:04.313515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.713 qpair failed and we were unable to recover it. 00:25:04.713 [2024-07-12 17:14:04.313667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.713 [2024-07-12 17:14:04.313692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.713 qpair failed and we were unable to recover it. 00:25:04.713 [2024-07-12 17:14:04.313859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.713 [2024-07-12 17:14:04.313886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.713 qpair failed and we were unable to recover it. 00:25:04.713 [2024-07-12 17:14:04.314051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.713 [2024-07-12 17:14:04.314075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.713 qpair failed and we were unable to recover it. 00:25:04.713 [2024-07-12 17:14:04.314224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.713 [2024-07-12 17:14:04.314249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.713 qpair failed and we were unable to recover it. 00:25:04.713 [2024-07-12 17:14:04.314378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.713 [2024-07-12 17:14:04.314403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.713 qpair failed and we were unable to recover it. 00:25:04.713 [2024-07-12 17:14:04.314568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.713 [2024-07-12 17:14:04.314592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.713 qpair failed and we were unable to recover it. 00:25:04.713 [2024-07-12 17:14:04.314736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.713 [2024-07-12 17:14:04.314766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.713 qpair failed and we were unable to recover it. 00:25:04.713 [2024-07-12 17:14:04.314933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.713 [2024-07-12 17:14:04.314959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.713 qpair failed and we were unable to recover it. 
00:25:04.713 [2024-07-12 17:14:04.315097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.713 [2024-07-12 17:14:04.315122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.713 qpair failed and we were unable to recover it. 00:25:04.713 [2024-07-12 17:14:04.315234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.714 [2024-07-12 17:14:04.315258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.714 qpair failed and we were unable to recover it. 00:25:04.714 [2024-07-12 17:14:04.315439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.714 [2024-07-12 17:14:04.315464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.714 qpair failed and we were unable to recover it. 00:25:04.714 [2024-07-12 17:14:04.315638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.714 [2024-07-12 17:14:04.315677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.714 qpair failed and we were unable to recover it. 00:25:04.714 [2024-07-12 17:14:04.315828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.714 [2024-07-12 17:14:04.315854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.714 qpair failed and we were unable to recover it. 00:25:04.714 [2024-07-12 17:14:04.315991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.714 [2024-07-12 17:14:04.316017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.714 qpair failed and we were unable to recover it. 00:25:04.714 [2024-07-12 17:14:04.316186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.714 [2024-07-12 17:14:04.316210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.714 qpair failed and we were unable to recover it. 00:25:04.714 [2024-07-12 17:14:04.316347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.714 [2024-07-12 17:14:04.316386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.714 qpair failed and we were unable to recover it. 00:25:04.714 [2024-07-12 17:14:04.316520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.714 [2024-07-12 17:14:04.316561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.714 qpair failed and we were unable to recover it. 00:25:04.714 [2024-07-12 17:14:04.316705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.714 [2024-07-12 17:14:04.316751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.714 qpair failed and we were unable to recover it. 
00:25:04.714 [2024-07-12 17:14:04.316905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.714 [2024-07-12 17:14:04.316931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.714 qpair failed and we were unable to recover it. 00:25:04.714 [2024-07-12 17:14:04.317086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.714 [2024-07-12 17:14:04.317126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.714 qpair failed and we were unable to recover it. 00:25:04.714 [2024-07-12 17:14:04.317272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.714 [2024-07-12 17:14:04.317296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.714 qpair failed and we were unable to recover it. 00:25:04.714 [2024-07-12 17:14:04.317408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.714 [2024-07-12 17:14:04.317433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.714 qpair failed and we were unable to recover it. 00:25:04.714 [2024-07-12 17:14:04.317566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.714 [2024-07-12 17:14:04.317605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.714 qpair failed and we were unable to recover it. 00:25:04.714 [2024-07-12 17:14:04.317754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.714 [2024-07-12 17:14:04.317785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.714 qpair failed and we were unable to recover it. 00:25:04.714 [2024-07-12 17:14:04.317946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.714 [2024-07-12 17:14:04.317972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.714 qpair failed and we were unable to recover it. 00:25:04.714 [2024-07-12 17:14:04.318097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.714 [2024-07-12 17:14:04.318121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.714 qpair failed and we were unable to recover it. 00:25:04.714 [2024-07-12 17:14:04.318279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.714 [2024-07-12 17:14:04.318303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.714 qpair failed and we were unable to recover it. 00:25:04.714 [2024-07-12 17:14:04.318437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.714 [2024-07-12 17:14:04.318477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.714 qpair failed and we were unable to recover it. 
00:25:04.714 [2024-07-12 17:14:04.318607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.714 [2024-07-12 17:14:04.318631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.714 qpair failed and we were unable to recover it. 00:25:04.714 [2024-07-12 17:14:04.318775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.714 [2024-07-12 17:14:04.318802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.714 qpair failed and we were unable to recover it. 00:25:04.714 [2024-07-12 17:14:04.318893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.714 [2024-07-12 17:14:04.318919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.714 qpair failed and we were unable to recover it. 00:25:04.714 [2024-07-12 17:14:04.319013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.714 [2024-07-12 17:14:04.319054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.714 qpair failed and we were unable to recover it. 00:25:04.714 [2024-07-12 17:14:04.319196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.714 [2024-07-12 17:14:04.319236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.714 qpair failed and we were unable to recover it. 00:25:04.714 [2024-07-12 17:14:04.319361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.714 [2024-07-12 17:14:04.319385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.714 qpair failed and we were unable to recover it. 00:25:04.714 [2024-07-12 17:14:04.319554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.714 [2024-07-12 17:14:04.319578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.714 qpair failed and we were unable to recover it. 00:25:04.714 [2024-07-12 17:14:04.319747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.714 [2024-07-12 17:14:04.319772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.714 qpair failed and we were unable to recover it. 00:25:04.714 [2024-07-12 17:14:04.319934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.714 [2024-07-12 17:14:04.319959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.714 qpair failed and we were unable to recover it. 00:25:04.714 [2024-07-12 17:14:04.320051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.714 [2024-07-12 17:14:04.320077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.714 qpair failed and we were unable to recover it. 
00:25:04.714 [2024-07-12 17:14:04.320216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.714 [2024-07-12 17:14:04.320241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.714 qpair failed and we were unable to recover it. 00:25:04.714 [2024-07-12 17:14:04.320353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.714 [2024-07-12 17:14:04.320378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.714 qpair failed and we were unable to recover it. 00:25:04.714 [2024-07-12 17:14:04.320549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.714 [2024-07-12 17:14:04.320574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.714 qpair failed and we were unable to recover it. 00:25:04.714 [2024-07-12 17:14:04.320691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.714 [2024-07-12 17:14:04.320716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.714 qpair failed and we were unable to recover it. 00:25:04.714 [2024-07-12 17:14:04.320860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.714 [2024-07-12 17:14:04.320886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.714 qpair failed and we were unable to recover it. 00:25:04.714 [2024-07-12 17:14:04.321038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.714 [2024-07-12 17:14:04.321076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.714 qpair failed and we were unable to recover it. 00:25:04.714 [2024-07-12 17:14:04.321181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.714 [2024-07-12 17:14:04.321206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.714 qpair failed and we were unable to recover it. 00:25:04.714 [2024-07-12 17:14:04.321359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.714 [2024-07-12 17:14:04.321383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.714 qpair failed and we were unable to recover it. 00:25:04.714 [2024-07-12 17:14:04.321536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.714 [2024-07-12 17:14:04.321560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.714 qpair failed and we were unable to recover it. 00:25:04.714 [2024-07-12 17:14:04.321741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.714 [2024-07-12 17:14:04.321765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.714 qpair failed and we were unable to recover it. 
00:25:04.714 [2024-07-12 17:14:04.321908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.714 [2024-07-12 17:14:04.321934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.714 qpair failed and we were unable to recover it. 00:25:04.714 [2024-07-12 17:14:04.322053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.714 [2024-07-12 17:14:04.322079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.714 qpair failed and we were unable to recover it. 00:25:04.714 [2024-07-12 17:14:04.322246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.714 [2024-07-12 17:14:04.322270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.714 qpair failed and we were unable to recover it. 00:25:04.714 [2024-07-12 17:14:04.322407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.714 [2024-07-12 17:14:04.322433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.714 qpair failed and we were unable to recover it. 00:25:04.714 [2024-07-12 17:14:04.322577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.714 [2024-07-12 17:14:04.322616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.715 qpair failed and we were unable to recover it. 00:25:04.715 [2024-07-12 17:14:04.322743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.715 [2024-07-12 17:14:04.322783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.715 qpair failed and we were unable to recover it. 00:25:04.715 [2024-07-12 17:14:04.322908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.715 [2024-07-12 17:14:04.322947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.715 qpair failed and we were unable to recover it. 00:25:04.715 [2024-07-12 17:14:04.323055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.715 [2024-07-12 17:14:04.323081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.715 qpair failed and we were unable to recover it. 00:25:04.715 [2024-07-12 17:14:04.323243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.715 [2024-07-12 17:14:04.323268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.715 qpair failed and we were unable to recover it. 00:25:04.715 [2024-07-12 17:14:04.323402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.715 [2024-07-12 17:14:04.323427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.715 qpair failed and we were unable to recover it. 
00:25:04.715 [2024-07-12 17:14:04.323608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.715 [2024-07-12 17:14:04.323632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.715 qpair failed and we were unable to recover it. 00:25:04.715 [2024-07-12 17:14:04.323772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.715 [2024-07-12 17:14:04.323798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.715 qpair failed and we were unable to recover it. 00:25:04.715 [2024-07-12 17:14:04.323909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.715 [2024-07-12 17:14:04.323935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.715 qpair failed and we were unable to recover it. 00:25:04.715 [2024-07-12 17:14:04.324062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.715 [2024-07-12 17:14:04.324087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.715 qpair failed and we were unable to recover it. 00:25:04.715 [2024-07-12 17:14:04.324227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.715 [2024-07-12 17:14:04.324266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.715 qpair failed and we were unable to recover it. 00:25:04.715 [2024-07-12 17:14:04.324402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.715 [2024-07-12 17:14:04.324447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.715 qpair failed and we were unable to recover it. 00:25:04.715 [2024-07-12 17:14:04.324591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.715 [2024-07-12 17:14:04.324630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.715 qpair failed and we were unable to recover it. 00:25:04.715 [2024-07-12 17:14:04.324762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.715 [2024-07-12 17:14:04.324789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.715 qpair failed and we were unable to recover it. 00:25:04.715 [2024-07-12 17:14:04.324941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.715 [2024-07-12 17:14:04.324967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.715 qpair failed and we were unable to recover it. 00:25:04.715 [2024-07-12 17:14:04.325093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.715 [2024-07-12 17:14:04.325133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.715 qpair failed and we were unable to recover it. 
00:25:04.715 [2024-07-12 17:14:04.325302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.715 [2024-07-12 17:14:04.325326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.715 qpair failed and we were unable to recover it. 00:25:04.715 [2024-07-12 17:14:04.325416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.715 [2024-07-12 17:14:04.325441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.715 qpair failed and we were unable to recover it. 00:25:04.715 [2024-07-12 17:14:04.325583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.715 [2024-07-12 17:14:04.325609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.715 qpair failed and we were unable to recover it. 00:25:04.715 [2024-07-12 17:14:04.325784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.715 [2024-07-12 17:14:04.325810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.715 qpair failed and we were unable to recover it. 00:25:04.715 [2024-07-12 17:14:04.325923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.715 [2024-07-12 17:14:04.325949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.715 qpair failed and we were unable to recover it. 00:25:04.715 [2024-07-12 17:14:04.326048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.715 [2024-07-12 17:14:04.326073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.715 qpair failed and we were unable to recover it. 00:25:04.715 [2024-07-12 17:14:04.326184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.715 [2024-07-12 17:14:04.326209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.715 qpair failed and we were unable to recover it. 00:25:04.715 [2024-07-12 17:14:04.326306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.715 [2024-07-12 17:14:04.326331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.715 qpair failed and we were unable to recover it. 00:25:04.715 [2024-07-12 17:14:04.326462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.715 [2024-07-12 17:14:04.326487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.715 qpair failed and we were unable to recover it. 00:25:04.715 [2024-07-12 17:14:04.326626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.715 [2024-07-12 17:14:04.326650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.715 qpair failed and we were unable to recover it. 
00:25:04.715 [2024-07-12 17:14:04.326811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.715 [2024-07-12 17:14:04.326837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.715 qpair failed and we were unable to recover it. 00:25:04.715 [2024-07-12 17:14:04.326972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.715 [2024-07-12 17:14:04.326998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.715 qpair failed and we were unable to recover it. 00:25:04.715 [2024-07-12 17:14:04.327139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.715 [2024-07-12 17:14:04.327164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.715 qpair failed and we were unable to recover it. 00:25:04.715 [2024-07-12 17:14:04.327308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.715 [2024-07-12 17:14:04.327332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.715 qpair failed and we were unable to recover it. 00:25:04.715 [2024-07-12 17:14:04.327500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.715 [2024-07-12 17:14:04.327525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.715 qpair failed and we were unable to recover it. 00:25:04.715 [2024-07-12 17:14:04.327637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.715 [2024-07-12 17:14:04.327662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.715 qpair failed and we were unable to recover it. 00:25:04.715 [2024-07-12 17:14:04.327824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.715 [2024-07-12 17:14:04.327851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.715 qpair failed and we were unable to recover it. 00:25:04.715 [2024-07-12 17:14:04.328014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.715 [2024-07-12 17:14:04.328039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.715 qpair failed and we were unable to recover it. 00:25:04.715 [2024-07-12 17:14:04.328128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.715 [2024-07-12 17:14:04.328153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.715 qpair failed and we were unable to recover it. 00:25:04.715 [2024-07-12 17:14:04.328302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.715 [2024-07-12 17:14:04.328328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.715 qpair failed and we were unable to recover it. 
00:25:04.715 [2024-07-12 17:14:04.328460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.715 [2024-07-12 17:14:04.328486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.715 qpair failed and we were unable to recover it. 00:25:04.715 [2024-07-12 17:14:04.328623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.715 [2024-07-12 17:14:04.328647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.715 qpair failed and we were unable to recover it. 00:25:04.715 [2024-07-12 17:14:04.328797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.715 [2024-07-12 17:14:04.328838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.715 qpair failed and we were unable to recover it. 00:25:04.715 [2024-07-12 17:14:04.328946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.715 [2024-07-12 17:14:04.328972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.715 qpair failed and we were unable to recover it. 00:25:04.715 [2024-07-12 17:14:04.329102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.715 [2024-07-12 17:14:04.329126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.715 qpair failed and we were unable to recover it. 00:25:04.715 [2024-07-12 17:14:04.329247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.715 [2024-07-12 17:14:04.329271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.715 qpair failed and we were unable to recover it. 00:25:04.715 [2024-07-12 17:14:04.329448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.715 [2024-07-12 17:14:04.329472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.715 qpair failed and we were unable to recover it. 00:25:04.715 [2024-07-12 17:14:04.329619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.715 [2024-07-12 17:14:04.329644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.715 qpair failed and we were unable to recover it. 00:25:04.715 [2024-07-12 17:14:04.329764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.715 [2024-07-12 17:14:04.329790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.715 qpair failed and we were unable to recover it. 00:25:04.716 [2024-07-12 17:14:04.329927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.716 [2024-07-12 17:14:04.329953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.716 qpair failed and we were unable to recover it. 
00:25:04.716 [2024-07-12 17:14:04.330079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.716 [2024-07-12 17:14:04.330119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.716 qpair failed and we were unable to recover it. 00:25:04.716 [2024-07-12 17:14:04.330288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.716 [2024-07-12 17:14:04.330313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.716 qpair failed and we were unable to recover it. 00:25:04.716 [2024-07-12 17:14:04.330460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.716 [2024-07-12 17:14:04.330500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.716 qpair failed and we were unable to recover it. 00:25:04.716 [2024-07-12 17:14:04.330649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.716 [2024-07-12 17:14:04.330674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.716 qpair failed and we were unable to recover it. 00:25:04.716 [2024-07-12 17:14:04.330783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.716 [2024-07-12 17:14:04.330822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.716 qpair failed and we were unable to recover it. 00:25:04.716 [2024-07-12 17:14:04.330958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.716 [2024-07-12 17:14:04.330987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.716 qpair failed and we were unable to recover it. 00:25:04.716 [2024-07-12 17:14:04.331102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.716 [2024-07-12 17:14:04.331128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.716 qpair failed and we were unable to recover it. 00:25:04.716 [2024-07-12 17:14:04.331295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.716 [2024-07-12 17:14:04.331319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.716 qpair failed and we were unable to recover it. 00:25:04.716 [2024-07-12 17:14:04.331464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.716 [2024-07-12 17:14:04.331488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.716 qpair failed and we were unable to recover it. 00:25:04.716 [2024-07-12 17:14:04.331621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.716 [2024-07-12 17:14:04.331646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.716 qpair failed and we were unable to recover it. 
00:25:04.716 [2024-07-12 17:14:04.331822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.716 [2024-07-12 17:14:04.331850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.716 qpair failed and we were unable to recover it. 00:25:04.716 [2024-07-12 17:14:04.331937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.716 [2024-07-12 17:14:04.331963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.716 qpair failed and we were unable to recover it. 00:25:04.716 [2024-07-12 17:14:04.332089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.716 [2024-07-12 17:14:04.332113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.716 qpair failed and we were unable to recover it. 00:25:04.716 [2024-07-12 17:14:04.332280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.716 [2024-07-12 17:14:04.332305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.716 qpair failed and we were unable to recover it. 00:25:04.716 [2024-07-12 17:14:04.332471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.716 [2024-07-12 17:14:04.332496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.716 qpair failed and we were unable to recover it. 00:25:04.716 [2024-07-12 17:14:04.332656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.716 [2024-07-12 17:14:04.332680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.716 qpair failed and we were unable to recover it. 00:25:04.716 [2024-07-12 17:14:04.332799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.716 [2024-07-12 17:14:04.332825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.716 qpair failed and we were unable to recover it. 00:25:04.716 [2024-07-12 17:14:04.332955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.716 [2024-07-12 17:14:04.332981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.716 qpair failed and we were unable to recover it. 00:25:04.716 [2024-07-12 17:14:04.333078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.716 [2024-07-12 17:14:04.333119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.716 qpair failed and we were unable to recover it. 00:25:04.716 [2024-07-12 17:14:04.333260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.716 [2024-07-12 17:14:04.333285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.716 qpair failed and we were unable to recover it. 
00:25:04.716 [2024-07-12 17:14:04.333457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.716 [2024-07-12 17:14:04.333482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.716 qpair failed and we were unable to recover it. 00:25:04.716 [2024-07-12 17:14:04.333639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.716 [2024-07-12 17:14:04.333664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.716 qpair failed and we were unable to recover it. 00:25:04.716 [2024-07-12 17:14:04.333777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.716 [2024-07-12 17:14:04.333804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.716 qpair failed and we were unable to recover it. 00:25:04.716 [2024-07-12 17:14:04.333965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.716 [2024-07-12 17:14:04.333990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.716 qpair failed and we were unable to recover it. 00:25:04.716 [2024-07-12 17:14:04.334088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.716 [2024-07-12 17:14:04.334127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.716 qpair failed and we were unable to recover it. 00:25:04.716 [2024-07-12 17:14:04.334275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.716 [2024-07-12 17:14:04.334314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.716 qpair failed and we were unable to recover it. 00:25:04.716 [2024-07-12 17:14:04.334449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.716 [2024-07-12 17:14:04.334475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.716 qpair failed and we were unable to recover it. 00:25:04.716 [2024-07-12 17:14:04.334582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.716 [2024-07-12 17:14:04.334606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.716 qpair failed and we were unable to recover it. 00:25:04.716 [2024-07-12 17:14:04.334776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.716 [2024-07-12 17:14:04.334815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.716 qpair failed and we were unable to recover it. 00:25:04.716 [2024-07-12 17:14:04.334923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.716 [2024-07-12 17:14:04.334947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.716 qpair failed and we were unable to recover it. 
00:25:04.716 [2024-07-12 17:14:04.335117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.716 [2024-07-12 17:14:04.335142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.716 qpair failed and we were unable to recover it. 00:25:04.716 [2024-07-12 17:14:04.335281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.716 [2024-07-12 17:14:04.335305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.716 qpair failed and we were unable to recover it. 00:25:04.716 [2024-07-12 17:14:04.335476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.716 [2024-07-12 17:14:04.335499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.716 qpair failed and we were unable to recover it. 00:25:04.716 [2024-07-12 17:14:04.335617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.716 [2024-07-12 17:14:04.335656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.716 qpair failed and we were unable to recover it. 00:25:04.716 [2024-07-12 17:14:04.335812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.716 [2024-07-12 17:14:04.335854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.716 qpair failed and we were unable to recover it. 00:25:04.716 [2024-07-12 17:14:04.335979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.716 [2024-07-12 17:14:04.336004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.716 qpair failed and we were unable to recover it. 00:25:04.716 [2024-07-12 17:14:04.336167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.716 [2024-07-12 17:14:04.336191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.716 qpair failed and we were unable to recover it. 00:25:04.716 [2024-07-12 17:14:04.336297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.716 [2024-07-12 17:14:04.336336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.716 qpair failed and we were unable to recover it. 00:25:04.716 [2024-07-12 17:14:04.336477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.716 [2024-07-12 17:14:04.336502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.716 qpair failed and we were unable to recover it. 00:25:04.716 [2024-07-12 17:14:04.336662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.716 [2024-07-12 17:14:04.336686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.716 qpair failed and we were unable to recover it. 
00:25:04.716 [2024-07-12 17:14:04.336837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.716 [2024-07-12 17:14:04.336863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.716 qpair failed and we were unable to recover it. 00:25:04.716 [2024-07-12 17:14:04.336953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.716 [2024-07-12 17:14:04.336978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.716 qpair failed and we were unable to recover it. 00:25:04.716 [2024-07-12 17:14:04.337117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.716 [2024-07-12 17:14:04.337142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.717 qpair failed and we were unable to recover it. 00:25:04.717 [2024-07-12 17:14:04.337247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.717 [2024-07-12 17:14:04.337271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.717 qpair failed and we were unable to recover it. 00:25:04.717 [2024-07-12 17:14:04.337432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.717 [2024-07-12 17:14:04.337471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.717 qpair failed and we were unable to recover it. 00:25:04.717 [2024-07-12 17:14:04.337621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.717 [2024-07-12 17:14:04.337649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.717 qpair failed and we were unable to recover it. 00:25:04.717 [2024-07-12 17:14:04.337802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.717 [2024-07-12 17:14:04.337843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.717 qpair failed and we were unable to recover it. 00:25:04.717 [2024-07-12 17:14:04.337971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.717 [2024-07-12 17:14:04.337997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.717 qpair failed and we were unable to recover it. 00:25:04.717 [2024-07-12 17:14:04.338118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.717 [2024-07-12 17:14:04.338158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.717 qpair failed and we were unable to recover it. 00:25:04.717 [2024-07-12 17:14:04.338290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.717 [2024-07-12 17:14:04.338329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.717 qpair failed and we were unable to recover it. 
00:25:04.717 [2024-07-12 17:14:04.338491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.717 [2024-07-12 17:14:04.338530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.717 qpair failed and we were unable to recover it. 00:25:04.717 [2024-07-12 17:14:04.338620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.717 [2024-07-12 17:14:04.338644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.717 qpair failed and we were unable to recover it. 00:25:04.717 [2024-07-12 17:14:04.338778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.717 [2024-07-12 17:14:04.338805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.717 qpair failed and we were unable to recover it. 00:25:04.717 [2024-07-12 17:14:04.338971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.717 [2024-07-12 17:14:04.338996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.717 qpair failed and we were unable to recover it. 00:25:04.717 [2024-07-12 17:14:04.339146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.717 [2024-07-12 17:14:04.339185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.717 qpair failed and we were unable to recover it. 00:25:04.717 [2024-07-12 17:14:04.339299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.717 [2024-07-12 17:14:04.339323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.717 qpair failed and we were unable to recover it. 00:25:04.717 [2024-07-12 17:14:04.339462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.717 [2024-07-12 17:14:04.339502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.717 qpair failed and we were unable to recover it. 00:25:04.717 [2024-07-12 17:14:04.339606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.717 [2024-07-12 17:14:04.339631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.717 qpair failed and we were unable to recover it. 00:25:04.717 [2024-07-12 17:14:04.339729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.717 [2024-07-12 17:14:04.339776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.717 qpair failed and we were unable to recover it. 00:25:04.717 [2024-07-12 17:14:04.339913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.717 [2024-07-12 17:14:04.339938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.717 qpair failed and we were unable to recover it. 
00:25:04.717 [2024-07-12 17:14:04.340036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.717 [2024-07-12 17:14:04.340061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.717 qpair failed and we were unable to recover it. 00:25:04.717 [2024-07-12 17:14:04.340233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.717 [2024-07-12 17:14:04.340272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.717 qpair failed and we were unable to recover it. 00:25:04.717 [2024-07-12 17:14:04.340403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.717 [2024-07-12 17:14:04.340428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.717 qpair failed and we were unable to recover it. 00:25:04.717 [2024-07-12 17:14:04.340590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.717 [2024-07-12 17:14:04.340614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.717 qpair failed and we were unable to recover it. 00:25:04.717 [2024-07-12 17:14:04.340795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.717 [2024-07-12 17:14:04.340821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.717 qpair failed and we were unable to recover it. 00:25:04.717 [2024-07-12 17:14:04.340985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.717 [2024-07-12 17:14:04.341027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.717 qpair failed and we were unable to recover it. 00:25:04.717 [2024-07-12 17:14:04.341150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.717 [2024-07-12 17:14:04.341189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.717 qpair failed and we were unable to recover it. 00:25:04.717 [2024-07-12 17:14:04.341337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.717 [2024-07-12 17:14:04.341362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.717 qpair failed and we were unable to recover it. 00:25:04.717 [2024-07-12 17:14:04.341508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.717 [2024-07-12 17:14:04.341532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.717 qpair failed and we were unable to recover it. 00:25:04.717 [2024-07-12 17:14:04.341672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.717 [2024-07-12 17:14:04.341711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.717 qpair failed and we were unable to recover it. 
00:25:04.717 [2024-07-12 17:14:04.341892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.717 [2024-07-12 17:14:04.341918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.717 qpair failed and we were unable to recover it. 00:25:04.717 [2024-07-12 17:14:04.342037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.717 [2024-07-12 17:14:04.342062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.717 qpair failed and we were unable to recover it. 00:25:04.717 [2024-07-12 17:14:04.342180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.717 [2024-07-12 17:14:04.342205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.717 qpair failed and we were unable to recover it. 00:25:04.717 [2024-07-12 17:14:04.342345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.717 [2024-07-12 17:14:04.342370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.717 qpair failed and we were unable to recover it. 00:25:04.717 [2024-07-12 17:14:04.342544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.717 [2024-07-12 17:14:04.342569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.717 qpair failed and we were unable to recover it. 00:25:04.717 [2024-07-12 17:14:04.342681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.717 [2024-07-12 17:14:04.342705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.717 qpair failed and we were unable to recover it. 00:25:04.717 [2024-07-12 17:14:04.342874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.717 [2024-07-12 17:14:04.342900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.717 qpair failed and we were unable to recover it. 00:25:04.717 [2024-07-12 17:14:04.343044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.717 [2024-07-12 17:14:04.343069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.717 qpair failed and we were unable to recover it. 00:25:04.717 [2024-07-12 17:14:04.343232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.717 [2024-07-12 17:14:04.343272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.717 qpair failed and we were unable to recover it. 00:25:04.717 [2024-07-12 17:14:04.343390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.717 [2024-07-12 17:14:04.343414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.717 qpair failed and we were unable to recover it. 
00:25:04.717 [2024-07-12 17:14:04.343590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.717 [2024-07-12 17:14:04.343614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.717 qpair failed and we were unable to recover it. 00:25:04.717 [2024-07-12 17:14:04.343759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.717 [2024-07-12 17:14:04.343785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.718 qpair failed and we were unable to recover it. 00:25:04.718 [2024-07-12 17:14:04.343909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.718 [2024-07-12 17:14:04.343934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.718 qpair failed and we were unable to recover it. 00:25:04.718 [2024-07-12 17:14:04.344056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.718 [2024-07-12 17:14:04.344080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.718 qpair failed and we were unable to recover it. 00:25:04.718 [2024-07-12 17:14:04.344216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.718 [2024-07-12 17:14:04.344255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.718 qpair failed and we were unable to recover it. 00:25:04.718 [2024-07-12 17:14:04.344405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.718 [2024-07-12 17:14:04.344449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.718 qpair failed and we were unable to recover it. 00:25:04.718 [2024-07-12 17:14:04.344561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.718 [2024-07-12 17:14:04.344586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.718 qpair failed and we were unable to recover it. 00:25:04.718 [2024-07-12 17:14:04.344695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.718 [2024-07-12 17:14:04.344719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.718 qpair failed and we were unable to recover it. 00:25:04.718 [2024-07-12 17:14:04.344868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.718 [2024-07-12 17:14:04.344894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.718 qpair failed and we were unable to recover it. 00:25:04.718 [2024-07-12 17:14:04.344986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.718 [2024-07-12 17:14:04.345012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.718 qpair failed and we were unable to recover it. 
00:25:04.718 [2024-07-12 17:14:04.345141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.718 [2024-07-12 17:14:04.345166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.718 qpair failed and we were unable to recover it. 00:25:04.718 [2024-07-12 17:14:04.345298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.718 [2024-07-12 17:14:04.345322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.718 qpair failed and we were unable to recover it. 00:25:04.718 [2024-07-12 17:14:04.345473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.718 [2024-07-12 17:14:04.345497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.718 qpair failed and we were unable to recover it. 00:25:04.718 [2024-07-12 17:14:04.345632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.718 [2024-07-12 17:14:04.345657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.718 qpair failed and we were unable to recover it. 00:25:04.718 [2024-07-12 17:14:04.345785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.718 [2024-07-12 17:14:04.345828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.718 qpair failed and we were unable to recover it. 00:25:04.718 [2024-07-12 17:14:04.345990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.718 [2024-07-12 17:14:04.346015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.718 qpair failed and we were unable to recover it. 00:25:04.718 [2024-07-12 17:14:04.346178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.718 [2024-07-12 17:14:04.346202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.718 qpair failed and we were unable to recover it. 00:25:04.718 [2024-07-12 17:14:04.346363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.718 [2024-07-12 17:14:04.346388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.718 qpair failed and we were unable to recover it. 00:25:04.718 [2024-07-12 17:14:04.346540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.718 [2024-07-12 17:14:04.346580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.718 qpair failed and we were unable to recover it. 00:25:04.718 [2024-07-12 17:14:04.346689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.718 [2024-07-12 17:14:04.346715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.718 qpair failed and we were unable to recover it. 
00:25:04.718 [2024-07-12 17:14:04.346868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.718 [2024-07-12 17:14:04.346894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.718 qpair failed and we were unable to recover it. 00:25:04.718 [2024-07-12 17:14:04.346983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.718 [2024-07-12 17:14:04.347008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.718 qpair failed and we were unable to recover it. 00:25:04.718 [2024-07-12 17:14:04.347135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.718 [2024-07-12 17:14:04.347160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.718 qpair failed and we were unable to recover it. 00:25:04.718 [2024-07-12 17:14:04.347287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.718 [2024-07-12 17:14:04.347327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.718 qpair failed and we were unable to recover it. 00:25:04.718 [2024-07-12 17:14:04.347455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.718 [2024-07-12 17:14:04.347480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.718 qpair failed and we were unable to recover it. 00:25:04.718 [2024-07-12 17:14:04.347595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.718 [2024-07-12 17:14:04.347621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.718 qpair failed and we were unable to recover it. 00:25:04.718 [2024-07-12 17:14:04.347749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.718 [2024-07-12 17:14:04.347775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.718 qpair failed and we were unable to recover it. 00:25:04.718 [2024-07-12 17:14:04.347895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.718 [2024-07-12 17:14:04.347935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.718 qpair failed and we were unable to recover it. 00:25:04.718 [2024-07-12 17:14:04.348095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.718 [2024-07-12 17:14:04.348119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.718 qpair failed and we were unable to recover it. 00:25:04.718 [2024-07-12 17:14:04.348258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.718 [2024-07-12 17:14:04.348283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.718 qpair failed and we were unable to recover it. 
00:25:04.718 [2024-07-12 17:14:04.348447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.718 [2024-07-12 17:14:04.348473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.718 qpair failed and we were unable to recover it. 00:25:04.718 [2024-07-12 17:14:04.348601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.718 [2024-07-12 17:14:04.348626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.718 qpair failed and we were unable to recover it. 00:25:04.718 [2024-07-12 17:14:04.348728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.718 [2024-07-12 17:14:04.348761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.719 qpair failed and we were unable to recover it. 00:25:04.719 [2024-07-12 17:14:04.348849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.719 [2024-07-12 17:14:04.348875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.719 qpair failed and we were unable to recover it. 00:25:04.719 [2024-07-12 17:14:04.349035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.719 [2024-07-12 17:14:04.349060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.719 qpair failed and we were unable to recover it. 00:25:04.719 [2024-07-12 17:14:04.349163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.719 [2024-07-12 17:14:04.349188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.719 qpair failed and we were unable to recover it. 00:25:04.719 [2024-07-12 17:14:04.349307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.719 [2024-07-12 17:14:04.349333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.719 qpair failed and we were unable to recover it. 00:25:04.719 [2024-07-12 17:14:04.349499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.719 [2024-07-12 17:14:04.349524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.719 qpair failed and we were unable to recover it. 00:25:04.719 [2024-07-12 17:14:04.349698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.719 [2024-07-12 17:14:04.349745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.719 qpair failed and we were unable to recover it. 00:25:04.719 [2024-07-12 17:14:04.349867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.719 [2024-07-12 17:14:04.349893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.719 qpair failed and we were unable to recover it. 
00:25:04.719 [2024-07-12 17:14:04.350010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.719 [2024-07-12 17:14:04.350036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:04.719 qpair failed and we were unable to recover it. 00:25:04.719 [2024-07-12 17:14:04.350185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.001 [2024-07-12 17:14:04.350211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.001 qpair failed and we were unable to recover it. 00:25:05.001 [2024-07-12 17:14:04.350361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.002 [2024-07-12 17:14:04.350387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.002 qpair failed and we were unable to recover it. 00:25:05.002 [2024-07-12 17:14:04.350513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.002 [2024-07-12 17:14:04.350539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.002 qpair failed and we were unable to recover it. 00:25:05.002 [2024-07-12 17:14:04.350668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.002 [2024-07-12 17:14:04.350693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.002 qpair failed and we were unable to recover it. 00:25:05.002 [2024-07-12 17:14:04.350874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.002 [2024-07-12 17:14:04.350905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.002 qpair failed and we were unable to recover it. 00:25:05.002 [2024-07-12 17:14:04.351060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.002 [2024-07-12 17:14:04.351086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.002 qpair failed and we were unable to recover it. 00:25:05.002 [2024-07-12 17:14:04.351238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.002 [2024-07-12 17:14:04.351264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.002 qpair failed and we were unable to recover it. 00:25:05.002 [2024-07-12 17:14:04.351392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.002 [2024-07-12 17:14:04.351418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.002 qpair failed and we were unable to recover it. 00:25:05.002 [2024-07-12 17:14:04.351544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.002 [2024-07-12 17:14:04.351570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.002 qpair failed and we were unable to recover it. 
00:25:05.002 [2024-07-12 17:14:04.351698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.002 [2024-07-12 17:14:04.351724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.002 qpair failed and we were unable to recover it. 00:25:05.002 [2024-07-12 17:14:04.351853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.002 [2024-07-12 17:14:04.351880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.002 qpair failed and we were unable to recover it. 00:25:05.002 [2024-07-12 17:14:04.352026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.002 [2024-07-12 17:14:04.352051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.002 qpair failed and we were unable to recover it. 00:25:05.002 [2024-07-12 17:14:04.352166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.002 [2024-07-12 17:14:04.352192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.002 qpair failed and we were unable to recover it. 00:25:05.002 [2024-07-12 17:14:04.352317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.002 [2024-07-12 17:14:04.352343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.002 qpair failed and we were unable to recover it. 00:25:05.002 [2024-07-12 17:14:04.352493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.002 [2024-07-12 17:14:04.352519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.002 qpair failed and we were unable to recover it. 00:25:05.002 [2024-07-12 17:14:04.352612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.002 [2024-07-12 17:14:04.352638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.002 qpair failed and we were unable to recover it. 00:25:05.002 [2024-07-12 17:14:04.352759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.002 [2024-07-12 17:14:04.352786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.002 qpair failed and we were unable to recover it. 00:25:05.002 [2024-07-12 17:14:04.352908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.002 [2024-07-12 17:14:04.352934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.002 qpair failed and we were unable to recover it. 00:25:05.002 [2024-07-12 17:14:04.353063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.002 [2024-07-12 17:14:04.353088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.002 qpair failed and we were unable to recover it. 
00:25:05.002 [2024-07-12 17:14:04.353218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.002 [2024-07-12 17:14:04.353243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.002 qpair failed and we were unable to recover it. 00:25:05.002 [2024-07-12 17:14:04.353397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.002 [2024-07-12 17:14:04.353423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.002 qpair failed and we were unable to recover it. 00:25:05.002 [2024-07-12 17:14:04.353576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.002 [2024-07-12 17:14:04.353602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.002 qpair failed and we were unable to recover it. 00:25:05.002 [2024-07-12 17:14:04.353750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.002 [2024-07-12 17:14:04.353776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.002 qpair failed and we were unable to recover it. 00:25:05.002 [2024-07-12 17:14:04.353929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.002 [2024-07-12 17:14:04.353955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.002 qpair failed and we were unable to recover it. 00:25:05.002 [2024-07-12 17:14:04.354147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.002 [2024-07-12 17:14:04.354178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.002 qpair failed and we were unable to recover it. 00:25:05.002 [2024-07-12 17:14:04.354295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.002 [2024-07-12 17:14:04.354321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.002 qpair failed and we were unable to recover it. 00:25:05.002 [2024-07-12 17:14:04.354461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.002 [2024-07-12 17:14:04.354487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.002 qpair failed and we were unable to recover it. 00:25:05.002 [2024-07-12 17:14:04.354641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.002 [2024-07-12 17:14:04.354667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.002 qpair failed and we were unable to recover it. 00:25:05.002 [2024-07-12 17:14:04.354834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.002 [2024-07-12 17:14:04.354861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.002 qpair failed and we were unable to recover it. 
00:25:05.002 [2024-07-12 17:14:04.355066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.002 [2024-07-12 17:14:04.355092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.002 qpair failed and we were unable to recover it. 00:25:05.002 [2024-07-12 17:14:04.355257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.002 [2024-07-12 17:14:04.355283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.002 qpair failed and we were unable to recover it. 00:25:05.002 [2024-07-12 17:14:04.355420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.002 [2024-07-12 17:14:04.355446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.002 qpair failed and we were unable to recover it. 00:25:05.002 [2024-07-12 17:14:04.355532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.002 [2024-07-12 17:14:04.355558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.002 qpair failed and we were unable to recover it. 00:25:05.002 [2024-07-12 17:14:04.355681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.002 [2024-07-12 17:14:04.355707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.002 qpair failed and we were unable to recover it. 00:25:05.002 [2024-07-12 17:14:04.355796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.002 [2024-07-12 17:14:04.355822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.002 qpair failed and we were unable to recover it. 00:25:05.002 [2024-07-12 17:14:04.355947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.002 [2024-07-12 17:14:04.355973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.002 qpair failed and we were unable to recover it. 00:25:05.002 [2024-07-12 17:14:04.356129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.002 [2024-07-12 17:14:04.356154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.002 qpair failed and we were unable to recover it. 00:25:05.002 [2024-07-12 17:14:04.356283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.002 [2024-07-12 17:14:04.356308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.002 qpair failed and we were unable to recover it. 00:25:05.002 [2024-07-12 17:14:04.356464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.002 [2024-07-12 17:14:04.356489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.002 qpair failed and we were unable to recover it. 
00:25:05.002 [2024-07-12 17:14:04.356635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.002 [2024-07-12 17:14:04.356675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.002 qpair failed and we were unable to recover it. 00:25:05.002 [2024-07-12 17:14:04.356883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.002 [2024-07-12 17:14:04.356920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.002 qpair failed and we were unable to recover it. 00:25:05.002 [2024-07-12 17:14:04.357047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.003 [2024-07-12 17:14:04.357072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.003 qpair failed and we were unable to recover it. 00:25:05.003 [2024-07-12 17:14:04.357197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.003 [2024-07-12 17:14:04.357222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.003 qpair failed and we were unable to recover it. 00:25:05.003 [2024-07-12 17:14:04.357375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.003 [2024-07-12 17:14:04.357414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.003 qpair failed and we were unable to recover it. 00:25:05.003 [2024-07-12 17:14:04.357639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.003 [2024-07-12 17:14:04.357676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.003 qpair failed and we were unable to recover it. 00:25:05.003 [2024-07-12 17:14:04.357855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.003 [2024-07-12 17:14:04.357881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.003 qpair failed and we were unable to recover it. 00:25:05.003 [2024-07-12 17:14:04.358003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.003 [2024-07-12 17:14:04.358028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.003 qpair failed and we were unable to recover it. 00:25:05.003 [2024-07-12 17:14:04.358156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.003 [2024-07-12 17:14:04.358196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.003 qpair failed and we were unable to recover it. 00:25:05.003 [2024-07-12 17:14:04.358359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.003 [2024-07-12 17:14:04.358384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.003 qpair failed and we were unable to recover it. 
00:25:05.003 [2024-07-12 17:14:04.358502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.003 [2024-07-12 17:14:04.358527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.003 qpair failed and we were unable to recover it. 00:25:05.003 [2024-07-12 17:14:04.358654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.003 [2024-07-12 17:14:04.358678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.003 qpair failed and we were unable to recover it. 00:25:05.003 [2024-07-12 17:14:04.358795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.003 [2024-07-12 17:14:04.358821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.003 qpair failed and we were unable to recover it. 00:25:05.003 [2024-07-12 17:14:04.358992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.003 [2024-07-12 17:14:04.359018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.003 qpair failed and we were unable to recover it. 00:25:05.003 [2024-07-12 17:14:04.359154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.003 [2024-07-12 17:14:04.359179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.003 qpair failed and we were unable to recover it. 00:25:05.003 [2024-07-12 17:14:04.359313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.003 [2024-07-12 17:14:04.359338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.003 qpair failed and we were unable to recover it. 00:25:05.003 [2024-07-12 17:14:04.359509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.003 [2024-07-12 17:14:04.359533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.003 qpair failed and we were unable to recover it. 00:25:05.003 [2024-07-12 17:14:04.359643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.003 [2024-07-12 17:14:04.359692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.003 qpair failed and we were unable to recover it. 00:25:05.003 [2024-07-12 17:14:04.359846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.003 [2024-07-12 17:14:04.359888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.003 qpair failed and we were unable to recover it. 00:25:05.003 [2024-07-12 17:14:04.360011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.003 [2024-07-12 17:14:04.360036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.003 qpair failed and we were unable to recover it. 
00:25:05.003 [2024-07-12 17:14:04.360178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.003 [2024-07-12 17:14:04.360218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.003 qpair failed and we were unable to recover it. 00:25:05.003 [2024-07-12 17:14:04.360316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.003 [2024-07-12 17:14:04.360355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.003 qpair failed and we were unable to recover it. 00:25:05.003 [2024-07-12 17:14:04.360519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.003 [2024-07-12 17:14:04.360544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.003 qpair failed and we were unable to recover it. 00:25:05.003 [2024-07-12 17:14:04.360700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.003 [2024-07-12 17:14:04.360745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.003 qpair failed and we were unable to recover it. 00:25:05.003 [2024-07-12 17:14:04.360885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.003 [2024-07-12 17:14:04.360911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.003 qpair failed and we were unable to recover it. 00:25:05.003 [2024-07-12 17:14:04.361029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.003 [2024-07-12 17:14:04.361068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.003 qpair failed and we were unable to recover it. 00:25:05.003 [2024-07-12 17:14:04.361189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.003 [2024-07-12 17:14:04.361214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.003 qpair failed and we were unable to recover it. 00:25:05.003 [2024-07-12 17:14:04.361365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.003 [2024-07-12 17:14:04.361389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.003 qpair failed and we were unable to recover it. 00:25:05.003 [2024-07-12 17:14:04.361606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.003 [2024-07-12 17:14:04.361630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.003 qpair failed and we were unable to recover it. 00:25:05.003 [2024-07-12 17:14:04.361778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.003 [2024-07-12 17:14:04.361818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.003 qpair failed and we were unable to recover it. 
00:25:05.003 [2024-07-12 17:14:04.361933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.003 [2024-07-12 17:14:04.361958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.003 qpair failed and we were unable to recover it. 00:25:05.003 [2024-07-12 17:14:04.362081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.003 [2024-07-12 17:14:04.362119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.003 qpair failed and we were unable to recover it. 00:25:05.003 [2024-07-12 17:14:04.362287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.003 [2024-07-12 17:14:04.362311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.003 qpair failed and we were unable to recover it. 00:25:05.003 [2024-07-12 17:14:04.362480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.003 [2024-07-12 17:14:04.362505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.003 qpair failed and we were unable to recover it. 00:25:05.003 [2024-07-12 17:14:04.362724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.003 [2024-07-12 17:14:04.362754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.003 qpair failed and we were unable to recover it. 00:25:05.003 [2024-07-12 17:14:04.362923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.003 [2024-07-12 17:14:04.362949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.003 qpair failed and we were unable to recover it. 00:25:05.003 [2024-07-12 17:14:04.363104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.003 [2024-07-12 17:14:04.363128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.003 qpair failed and we were unable to recover it. 00:25:05.003 [2024-07-12 17:14:04.363345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.003 [2024-07-12 17:14:04.363369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.003 qpair failed and we were unable to recover it. 00:25:05.003 [2024-07-12 17:14:04.363544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.003 [2024-07-12 17:14:04.363569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.003 qpair failed and we were unable to recover it. 00:25:05.003 [2024-07-12 17:14:04.363731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.003 [2024-07-12 17:14:04.363777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.003 qpair failed and we were unable to recover it. 
00:25:05.003 [2024-07-12 17:14:04.363925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.003 [2024-07-12 17:14:04.363950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.003 qpair failed and we were unable to recover it. 00:25:05.003 [2024-07-12 17:14:04.364116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.004 [2024-07-12 17:14:04.364151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.004 qpair failed and we were unable to recover it. 00:25:05.004 [2024-07-12 17:14:04.364325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.004 [2024-07-12 17:14:04.364349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.004 qpair failed and we were unable to recover it. 00:25:05.004 [2024-07-12 17:14:04.364516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.004 [2024-07-12 17:14:04.364546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.004 qpair failed and we were unable to recover it. 00:25:05.004 [2024-07-12 17:14:04.364662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.004 [2024-07-12 17:14:04.364686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.004 qpair failed and we were unable to recover it. 00:25:05.004 [2024-07-12 17:14:04.364798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.004 [2024-07-12 17:14:04.364829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.004 qpair failed and we were unable to recover it. 00:25:05.004 [2024-07-12 17:14:04.364950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.004 [2024-07-12 17:14:04.364977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.004 qpair failed and we were unable to recover it. 00:25:05.004 [2024-07-12 17:14:04.365107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.004 [2024-07-12 17:14:04.365131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.004 qpair failed and we were unable to recover it. 00:25:05.004 [2024-07-12 17:14:04.365247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.004 [2024-07-12 17:14:04.365286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.004 qpair failed and we were unable to recover it. 00:25:05.004 [2024-07-12 17:14:04.365399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.004 [2024-07-12 17:14:04.365424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.004 qpair failed and we were unable to recover it. 
00:25:05.004 [2024-07-12 17:14:04.365556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.004 [2024-07-12 17:14:04.365581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.004 qpair failed and we were unable to recover it. 00:25:05.004 [2024-07-12 17:14:04.365731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.004 [2024-07-12 17:14:04.365794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.004 qpair failed and we were unable to recover it. 00:25:05.004 [2024-07-12 17:14:04.365920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.004 [2024-07-12 17:14:04.365945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.004 qpair failed and we were unable to recover it. 00:25:05.004 [2024-07-12 17:14:04.366061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.004 [2024-07-12 17:14:04.366087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.004 qpair failed and we were unable to recover it. 00:25:05.004 [2024-07-12 17:14:04.366219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.004 [2024-07-12 17:14:04.366244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.004 qpair failed and we were unable to recover it. 00:25:05.004 [2024-07-12 17:14:04.366390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.004 [2024-07-12 17:14:04.366415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.004 qpair failed and we were unable to recover it. 00:25:05.004 [2024-07-12 17:14:04.366606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.004 [2024-07-12 17:14:04.366630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.004 qpair failed and we were unable to recover it. 00:25:05.004 [2024-07-12 17:14:04.366838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.004 [2024-07-12 17:14:04.366875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.004 qpair failed and we were unable to recover it. 00:25:05.004 [2024-07-12 17:14:04.367027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.004 [2024-07-12 17:14:04.367078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.004 qpair failed and we were unable to recover it. 00:25:05.004 [2024-07-12 17:14:04.367177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.004 [2024-07-12 17:14:04.367201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.004 qpair failed and we were unable to recover it. 
00:25:05.004 [2024-07-12 17:14:04.367331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.004 [2024-07-12 17:14:04.367355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.004 qpair failed and we were unable to recover it. 00:25:05.004 [2024-07-12 17:14:04.367474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.004 [2024-07-12 17:14:04.367498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.004 qpair failed and we were unable to recover it. 00:25:05.004 [2024-07-12 17:14:04.367623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.004 [2024-07-12 17:14:04.367648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.004 qpair failed and we were unable to recover it. 00:25:05.004 [2024-07-12 17:14:04.367791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.004 [2024-07-12 17:14:04.367831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.004 qpair failed and we were unable to recover it. 00:25:05.004 [2024-07-12 17:14:04.367966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.004 [2024-07-12 17:14:04.367990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.004 qpair failed and we were unable to recover it. 00:25:05.004 [2024-07-12 17:14:04.368213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.004 [2024-07-12 17:14:04.368238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.004 qpair failed and we were unable to recover it. 00:25:05.004 [2024-07-12 17:14:04.368393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.004 [2024-07-12 17:14:04.368416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.004 qpair failed and we were unable to recover it. 00:25:05.004 [2024-07-12 17:14:04.368550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.004 [2024-07-12 17:14:04.368574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.004 qpair failed and we were unable to recover it. 00:25:05.004 [2024-07-12 17:14:04.368749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.004 [2024-07-12 17:14:04.368788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.004 qpair failed and we were unable to recover it. 00:25:05.004 [2024-07-12 17:14:04.368907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.004 [2024-07-12 17:14:04.368933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.004 qpair failed and we were unable to recover it. 
00:25:05.004 [2024-07-12 17:14:04.369051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.004 [2024-07-12 17:14:04.369077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.004 qpair failed and we were unable to recover it. 00:25:05.004 [2024-07-12 17:14:04.369216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.004 [2024-07-12 17:14:04.369240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.004 qpair failed and we were unable to recover it. 00:25:05.004 [2024-07-12 17:14:04.369383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.004 [2024-07-12 17:14:04.369422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.004 qpair failed and we were unable to recover it. 00:25:05.004 [2024-07-12 17:14:04.369612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.004 [2024-07-12 17:14:04.369636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.004 qpair failed and we were unable to recover it. 00:25:05.004 [2024-07-12 17:14:04.369797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.004 [2024-07-12 17:14:04.369824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.004 qpair failed and we were unable to recover it. 00:25:05.004 [2024-07-12 17:14:04.370040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.004 [2024-07-12 17:14:04.370083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.004 qpair failed and we were unable to recover it. 00:25:05.004 [2024-07-12 17:14:04.370224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.004 [2024-07-12 17:14:04.370248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.004 qpair failed and we were unable to recover it. 00:25:05.004 [2024-07-12 17:14:04.370388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.004 [2024-07-12 17:14:04.370412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.004 qpair failed and we were unable to recover it. 00:25:05.004 [2024-07-12 17:14:04.370620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.004 [2024-07-12 17:14:04.370652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.004 qpair failed and we were unable to recover it. 00:25:05.004 [2024-07-12 17:14:04.370825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.004 [2024-07-12 17:14:04.370851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.004 qpair failed and we were unable to recover it. 
00:25:05.004 [2024-07-12 17:14:04.371044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.004 [2024-07-12 17:14:04.371068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.004 qpair failed and we were unable to recover it. 00:25:05.005 [2024-07-12 17:14:04.371242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.005 [2024-07-12 17:14:04.371266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.005 qpair failed and we were unable to recover it. 00:25:05.005 [2024-07-12 17:14:04.371447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.005 [2024-07-12 17:14:04.371472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.005 qpair failed and we were unable to recover it. 00:25:05.005 [2024-07-12 17:14:04.371668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.005 [2024-07-12 17:14:04.371692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.005 qpair failed and we were unable to recover it. 00:25:05.005 [2024-07-12 17:14:04.371845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.005 [2024-07-12 17:14:04.371870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.005 qpair failed and we were unable to recover it. 00:25:05.005 [2024-07-12 17:14:04.372041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.005 [2024-07-12 17:14:04.372095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.005 qpair failed and we were unable to recover it. 00:25:05.005 [2024-07-12 17:14:04.372208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.005 [2024-07-12 17:14:04.372246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.005 qpair failed and we were unable to recover it. 00:25:05.005 [2024-07-12 17:14:04.372328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.005 [2024-07-12 17:14:04.372352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.005 qpair failed and we were unable to recover it. 00:25:05.005 [2024-07-12 17:14:04.372491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.005 [2024-07-12 17:14:04.372516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.005 qpair failed and we were unable to recover it. 00:25:05.005 [2024-07-12 17:14:04.372709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.005 [2024-07-12 17:14:04.372733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.005 qpair failed and we were unable to recover it. 
00:25:05.005 [2024-07-12 17:14:04.372898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.005 [2024-07-12 17:14:04.372939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.005 qpair failed and we were unable to recover it. 00:25:05.005 [2024-07-12 17:14:04.373092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.005 [2024-07-12 17:14:04.373117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.005 qpair failed and we were unable to recover it. 00:25:05.005 [2024-07-12 17:14:04.373328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.005 [2024-07-12 17:14:04.373352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.005 qpair failed and we were unable to recover it. 00:25:05.005 [2024-07-12 17:14:04.373475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.005 [2024-07-12 17:14:04.373514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.005 qpair failed and we were unable to recover it. 00:25:05.005 [2024-07-12 17:14:04.373653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.005 [2024-07-12 17:14:04.373679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.005 qpair failed and we were unable to recover it. 00:25:05.005 [2024-07-12 17:14:04.373800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.005 [2024-07-12 17:14:04.373826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.005 qpair failed and we were unable to recover it. 00:25:05.005 [2024-07-12 17:14:04.373976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.005 [2024-07-12 17:14:04.374015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.005 qpair failed and we were unable to recover it. 00:25:05.005 [2024-07-12 17:14:04.374203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.005 [2024-07-12 17:14:04.374235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.005 qpair failed and we were unable to recover it. 00:25:05.005 [2024-07-12 17:14:04.374327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.005 [2024-07-12 17:14:04.374352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.005 qpair failed and we were unable to recover it. 00:25:05.005 [2024-07-12 17:14:04.374474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.005 [2024-07-12 17:14:04.374499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.005 qpair failed and we were unable to recover it. 
00:25:05.005 [2024-07-12 17:14:04.374650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.005 [2024-07-12 17:14:04.374689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.005 qpair failed and we were unable to recover it. 00:25:05.005 [2024-07-12 17:14:04.374868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.005 [2024-07-12 17:14:04.374902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.005 qpair failed and we were unable to recover it. 00:25:05.005 [2024-07-12 17:14:04.375054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.005 [2024-07-12 17:14:04.375079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.005 qpair failed and we were unable to recover it. 00:25:05.005 [2024-07-12 17:14:04.375277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.005 [2024-07-12 17:14:04.375302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.005 qpair failed and we were unable to recover it. 00:25:05.005 [2024-07-12 17:14:04.375486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.005 [2024-07-12 17:14:04.375519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.005 qpair failed and we were unable to recover it. 00:25:05.005 [2024-07-12 17:14:04.375714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.005 [2024-07-12 17:14:04.375771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.005 qpair failed and we were unable to recover it. 00:25:05.005 [2024-07-12 17:14:04.375920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.005 [2024-07-12 17:14:04.375945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.005 qpair failed and we were unable to recover it. 00:25:05.005 [2024-07-12 17:14:04.376137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.005 [2024-07-12 17:14:04.376162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.005 qpair failed and we were unable to recover it. 00:25:05.005 [2024-07-12 17:14:04.376373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.005 [2024-07-12 17:14:04.376397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.005 qpair failed and we were unable to recover it. 00:25:05.005 [2024-07-12 17:14:04.376586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.005 [2024-07-12 17:14:04.376622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.005 qpair failed and we were unable to recover it. 
00:25:05.005 [2024-07-12 17:14:04.376848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.005 [2024-07-12 17:14:04.376884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.005 qpair failed and we were unable to recover it. 00:25:05.005 [2024-07-12 17:14:04.377037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.005 [2024-07-12 17:14:04.377062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.005 qpair failed and we were unable to recover it. 00:25:05.005 [2024-07-12 17:14:04.377166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.005 [2024-07-12 17:14:04.377204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.005 qpair failed and we were unable to recover it. 00:25:05.005 [2024-07-12 17:14:04.377343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.005 [2024-07-12 17:14:04.377383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.005 qpair failed and we were unable to recover it. 00:25:05.005 [2024-07-12 17:14:04.377613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.005 [2024-07-12 17:14:04.377646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.005 qpair failed and we were unable to recover it. 00:25:05.005 [2024-07-12 17:14:04.377775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.005 [2024-07-12 17:14:04.377801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.005 qpair failed and we were unable to recover it. 00:25:05.005 [2024-07-12 17:14:04.377915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.005 [2024-07-12 17:14:04.377940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.005 qpair failed and we were unable to recover it. 00:25:05.005 [2024-07-12 17:14:04.378075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.005 [2024-07-12 17:14:04.378115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.005 qpair failed and we were unable to recover it. 00:25:05.005 [2024-07-12 17:14:04.378270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.005 [2024-07-12 17:14:04.378294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.005 qpair failed and we were unable to recover it. 00:25:05.005 [2024-07-12 17:14:04.378459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.005 [2024-07-12 17:14:04.378483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.005 qpair failed and we were unable to recover it. 
00:25:05.006 [2024-07-12 17:14:04.378672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.006 [2024-07-12 17:14:04.378696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.006 qpair failed and we were unable to recover it. 00:25:05.006 [2024-07-12 17:14:04.378858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.006 [2024-07-12 17:14:04.378883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.006 qpair failed and we were unable to recover it. 00:25:05.006 [2024-07-12 17:14:04.379068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.006 [2024-07-12 17:14:04.379092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.006 qpair failed and we were unable to recover it. 00:25:05.006 [2024-07-12 17:14:04.379207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.006 [2024-07-12 17:14:04.379232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.006 qpair failed and we were unable to recover it. 00:25:05.006 [2024-07-12 17:14:04.379360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.006 [2024-07-12 17:14:04.379400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.006 qpair failed and we were unable to recover it. 00:25:05.006 [2024-07-12 17:14:04.379513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.006 [2024-07-12 17:14:04.379543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.006 qpair failed and we were unable to recover it. 00:25:05.006 [2024-07-12 17:14:04.379687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.006 [2024-07-12 17:14:04.379712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.006 qpair failed and we were unable to recover it. 00:25:05.006 [2024-07-12 17:14:04.379883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.006 [2024-07-12 17:14:04.379908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.006 qpair failed and we were unable to recover it. 00:25:05.006 [2024-07-12 17:14:04.380090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.006 [2024-07-12 17:14:04.380114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.006 qpair failed and we were unable to recover it. 00:25:05.006 [2024-07-12 17:14:04.380312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.006 [2024-07-12 17:14:04.380336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.006 qpair failed and we were unable to recover it. 
00:25:05.006 [2024-07-12 17:14:04.380519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.006 [2024-07-12 17:14:04.380553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.006 qpair failed and we were unable to recover it. 00:25:05.006 [2024-07-12 17:14:04.380786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.006 [2024-07-12 17:14:04.380820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.006 qpair failed and we were unable to recover it. 00:25:05.006 [2024-07-12 17:14:04.380949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.006 [2024-07-12 17:14:04.380975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.006 qpair failed and we were unable to recover it. 00:25:05.006 [2024-07-12 17:14:04.381079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.006 [2024-07-12 17:14:04.381114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.006 qpair failed and we were unable to recover it. 00:25:05.006 [2024-07-12 17:14:04.381316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.006 [2024-07-12 17:14:04.381345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.006 qpair failed and we were unable to recover it. 00:25:05.006 [2024-07-12 17:14:04.381498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.006 [2024-07-12 17:14:04.381522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.006 qpair failed and we were unable to recover it. 00:25:05.006 [2024-07-12 17:14:04.381688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.006 [2024-07-12 17:14:04.381711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.006 qpair failed and we were unable to recover it. 00:25:05.006 [2024-07-12 17:14:04.381931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.006 [2024-07-12 17:14:04.381969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.006 qpair failed and we were unable to recover it. 00:25:05.006 [2024-07-12 17:14:04.382096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.006 [2024-07-12 17:14:04.382135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.006 qpair failed and we were unable to recover it. 00:25:05.006 [2024-07-12 17:14:04.382292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.006 [2024-07-12 17:14:04.382316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.006 qpair failed and we were unable to recover it. 
00:25:05.006 [2024-07-12 17:14:04.382485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.006 [2024-07-12 17:14:04.382509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.006 qpair failed and we were unable to recover it. 00:25:05.006 [2024-07-12 17:14:04.382729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.006 [2024-07-12 17:14:04.382788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.006 qpair failed and we were unable to recover it. 00:25:05.006 [2024-07-12 17:14:04.382918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.006 [2024-07-12 17:14:04.382944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.006 qpair failed and we were unable to recover it. 00:25:05.006 [2024-07-12 17:14:04.383255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.006 [2024-07-12 17:14:04.383278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.006 qpair failed and we were unable to recover it. 00:25:05.006 [2024-07-12 17:14:04.383392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.006 [2024-07-12 17:14:04.383416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.006 qpair failed and we were unable to recover it. 00:25:05.006 [2024-07-12 17:14:04.383578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.006 [2024-07-12 17:14:04.383602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.006 qpair failed and we were unable to recover it. 00:25:05.006 [2024-07-12 17:14:04.383826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.006 [2024-07-12 17:14:04.383852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.006 qpair failed and we were unable to recover it. 00:25:05.006 [2024-07-12 17:14:04.383999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.007 [2024-07-12 17:14:04.384024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.007 qpair failed and we were unable to recover it. 00:25:05.007 [2024-07-12 17:14:04.384158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.007 [2024-07-12 17:14:04.384197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.007 qpair failed and we were unable to recover it. 00:25:05.007 [2024-07-12 17:14:04.384328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.007 [2024-07-12 17:14:04.384354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.007 qpair failed and we were unable to recover it. 
00:25:05.007 [2024-07-12 17:14:04.384477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.007 [2024-07-12 17:14:04.384503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.007 qpair failed and we were unable to recover it. 00:25:05.007 [2024-07-12 17:14:04.384655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.007 [2024-07-12 17:14:04.384681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.007 qpair failed and we were unable to recover it. 00:25:05.007 [2024-07-12 17:14:04.384804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.007 [2024-07-12 17:14:04.384830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.007 qpair failed and we were unable to recover it. 00:25:05.007 [2024-07-12 17:14:04.384921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.007 [2024-07-12 17:14:04.384947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.007 qpair failed and we were unable to recover it. 00:25:05.007 [2024-07-12 17:14:04.385048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.007 [2024-07-12 17:14:04.385073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.007 qpair failed and we were unable to recover it. 00:25:05.007 [2024-07-12 17:14:04.385181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.007 [2024-07-12 17:14:04.385207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.007 qpair failed and we were unable to recover it. 00:25:05.007 [2024-07-12 17:14:04.385327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.007 [2024-07-12 17:14:04.385353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.007 qpair failed and we were unable to recover it. 00:25:05.007 [2024-07-12 17:14:04.385474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.007 [2024-07-12 17:14:04.385499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.007 qpair failed and we were unable to recover it. 00:25:05.007 [2024-07-12 17:14:04.385648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.007 [2024-07-12 17:14:04.385674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.007 qpair failed and we were unable to recover it. 00:25:05.007 [2024-07-12 17:14:04.385789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.007 [2024-07-12 17:14:04.385816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.007 qpair failed and we were unable to recover it. 
00:25:05.007 [2024-07-12 17:14:04.385938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.007 [2024-07-12 17:14:04.385964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.007 qpair failed and we were unable to recover it. 00:25:05.007 [2024-07-12 17:14:04.386136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.007 [2024-07-12 17:14:04.386161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.007 qpair failed and we were unable to recover it. 00:25:05.007 [2024-07-12 17:14:04.386271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.007 [2024-07-12 17:14:04.386296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.007 qpair failed and we were unable to recover it. 00:25:05.007 [2024-07-12 17:14:04.386399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.007 [2024-07-12 17:14:04.386425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.007 qpair failed and we were unable to recover it. 00:25:05.007 [2024-07-12 17:14:04.386548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.007 [2024-07-12 17:14:04.386574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.007 qpair failed and we were unable to recover it. 00:25:05.007 [2024-07-12 17:14:04.386700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.007 [2024-07-12 17:14:04.386730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.007 qpair failed and we were unable to recover it. 00:25:05.007 [2024-07-12 17:14:04.386861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.007 [2024-07-12 17:14:04.386887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.007 qpair failed and we were unable to recover it. 00:25:05.007 [2024-07-12 17:14:04.387018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.007 [2024-07-12 17:14:04.387058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.007 qpair failed and we were unable to recover it. 00:25:05.007 [2024-07-12 17:14:04.387208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.007 [2024-07-12 17:14:04.387247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.007 qpair failed and we were unable to recover it. 00:25:05.007 [2024-07-12 17:14:04.387372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.007 [2024-07-12 17:14:04.387398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.007 qpair failed and we were unable to recover it. 
00:25:05.007 [2024-07-12 17:14:04.387514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.007 [2024-07-12 17:14:04.387540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.007 qpair failed and we were unable to recover it. 00:25:05.007 [2024-07-12 17:14:04.387678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.007 [2024-07-12 17:14:04.387703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.007 qpair failed and we were unable to recover it. 00:25:05.007 [2024-07-12 17:14:04.387907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.007 [2024-07-12 17:14:04.387933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.007 qpair failed and we were unable to recover it. 00:25:05.007 [2024-07-12 17:14:04.388054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.007 [2024-07-12 17:14:04.388080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.007 qpair failed and we were unable to recover it. 00:25:05.007 [2024-07-12 17:14:04.388201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.007 [2024-07-12 17:14:04.388227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.007 qpair failed and we were unable to recover it. 00:25:05.007 [2024-07-12 17:14:04.388374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.007 [2024-07-12 17:14:04.388400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.007 qpair failed and we were unable to recover it. 00:25:05.007 [2024-07-12 17:14:04.388522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.007 [2024-07-12 17:14:04.388547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.007 qpair failed and we were unable to recover it. 00:25:05.007 [2024-07-12 17:14:04.388642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.007 [2024-07-12 17:14:04.388668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.007 qpair failed and we were unable to recover it. 00:25:05.007 [2024-07-12 17:14:04.388816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.007 [2024-07-12 17:14:04.388842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.007 qpair failed and we were unable to recover it. 00:25:05.007 [2024-07-12 17:14:04.388980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.007 [2024-07-12 17:14:04.389006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.007 qpair failed and we were unable to recover it. 
00:25:05.007 [2024-07-12 17:14:04.389096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.007 [2024-07-12 17:14:04.389122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420
00:25:05.007 qpair failed and we were unable to recover it.
00:25:05.007 [2024-07-12 17:14:04.389244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.007 [2024-07-12 17:14:04.389269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420
00:25:05.007 qpair failed and we were unable to recover it.
00:25:05.007 [2024-07-12 17:14:04.389413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.007 [2024-07-12 17:14:04.389439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420
00:25:05.007 qpair failed and we were unable to recover it.
00:25:05.007 [2024-07-12 17:14:04.389558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.007 [2024-07-12 17:14:04.389584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420
00:25:05.007 qpair failed and we were unable to recover it.
00:25:05.007 [2024-07-12 17:14:04.389706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.007 [2024-07-12 17:14:04.389731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420
00:25:05.008 qpair failed and we were unable to recover it.
00:25:05.008 [2024-07-12 17:14:04.389807] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:05.008 [2024-07-12 17:14:04.389841] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:05.008 [2024-07-12 17:14:04.389857] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:05.008 [2024-07-12 17:14:04.389869] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:05.008 [2024-07-12 17:14:04.389866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.008 [2024-07-12 17:14:04.389880] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:05.008 [2024-07-12 17:14:04.389890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420
00:25:05.008 qpair failed and we were unable to recover it.
00:25:05.008 [2024-07-12 17:14:04.389955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:25:05.008 [2024-07-12 17:14:04.390021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.008 [2024-07-12 17:14:04.390045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420
00:25:05.008 qpair failed and we were unable to recover it.
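The app_setup_trace NOTICE lines above are the target's own how-to for capturing trace data on a run like this. A minimal sketch of the two options they describe; the command, the '-s nvmf' group, the '-i 0' instance id, and the /dev/shm/nvmf_trace.0 path are taken verbatim from the notices, while the copy destination below is only an example:

  spdk_trace -s nvmf -i 0                      # snapshot of events at runtime, per the notice
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0   # or keep the shared-memory trace file for offline analysis/debug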
00:25:05.008 [2024-07-12 17:14:04.389991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:25:05.008 [2024-07-12 17:14:04.390039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:25:05.008 [2024-07-12 17:14:04.390042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:25:05.008 [2024-07-12 17:14:04.390166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.008 [2024-07-12 17:14:04.390191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.008 qpair failed and we were unable to recover it. 00:25:05.008 [2024-07-12 17:14:04.390309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.008 [2024-07-12 17:14:04.390334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.008 qpair failed and we were unable to recover it. 00:25:05.008 [2024-07-12 17:14:04.390429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.008 [2024-07-12 17:14:04.390455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.008 qpair failed and we were unable to recover it. 00:25:05.008 [2024-07-12 17:14:04.390597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.008 [2024-07-12 17:14:04.390623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.008 qpair failed and we were unable to recover it. 00:25:05.008 [2024-07-12 17:14:04.390711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.008 [2024-07-12 17:14:04.390845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.008 qpair failed and we were unable to recover it. 00:25:05.008 [2024-07-12 17:14:04.390969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.008 [2024-07-12 17:14:04.390996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.008 qpair failed and we were unable to recover it. 00:25:05.008 [2024-07-12 17:14:04.391114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.008 [2024-07-12 17:14:04.391140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.008 qpair failed and we were unable to recover it. 00:25:05.008 [2024-07-12 17:14:04.391255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.008 [2024-07-12 17:14:04.391281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.008 qpair failed and we were unable to recover it. 00:25:05.008 [2024-07-12 17:14:04.391400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.008 [2024-07-12 17:14:04.391426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.008 qpair failed and we were unable to recover it. 
00:25:05.008 [2024-07-12 17:14:04.391540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.008 [2024-07-12 17:14:04.391565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.008 qpair failed and we were unable to recover it. 00:25:05.008 [2024-07-12 17:14:04.391693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.008 [2024-07-12 17:14:04.391719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.008 qpair failed and we were unable to recover it. 00:25:05.008 [2024-07-12 17:14:04.391848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.008 [2024-07-12 17:14:04.391875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.008 qpair failed and we were unable to recover it. 00:25:05.008 [2024-07-12 17:14:04.392000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.008 [2024-07-12 17:14:04.392025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.008 qpair failed and we were unable to recover it. 00:25:05.008 [2024-07-12 17:14:04.392145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.008 [2024-07-12 17:14:04.392170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.008 qpair failed and we were unable to recover it. 00:25:05.008 [2024-07-12 17:14:04.392268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.008 [2024-07-12 17:14:04.392294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.008 qpair failed and we were unable to recover it. 00:25:05.008 [2024-07-12 17:14:04.392440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.008 [2024-07-12 17:14:04.392469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.008 qpair failed and we were unable to recover it. 00:25:05.008 [2024-07-12 17:14:04.392615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.008 [2024-07-12 17:14:04.392641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.008 qpair failed and we were unable to recover it. 00:25:05.008 [2024-07-12 17:14:04.392729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.008 [2024-07-12 17:14:04.392763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.008 qpair failed and we were unable to recover it. 00:25:05.008 [2024-07-12 17:14:04.392851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.008 [2024-07-12 17:14:04.392875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.008 qpair failed and we were unable to recover it. 
00:25:05.008 [2024-07-12 17:14:04.393020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.008 [2024-07-12 17:14:04.393046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.008 qpair failed and we were unable to recover it. 00:25:05.008 [2024-07-12 17:14:04.393190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.008 [2024-07-12 17:14:04.393216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.008 qpair failed and we were unable to recover it. 00:25:05.008 [2024-07-12 17:14:04.393316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.008 [2024-07-12 17:14:04.393342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.008 qpair failed and we were unable to recover it. 00:25:05.008 [2024-07-12 17:14:04.393464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.008 [2024-07-12 17:14:04.393490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.008 qpair failed and we were unable to recover it. 00:25:05.008 [2024-07-12 17:14:04.393616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.008 [2024-07-12 17:14:04.393641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.008 qpair failed and we were unable to recover it. 00:25:05.008 [2024-07-12 17:14:04.393727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.008 [2024-07-12 17:14:04.393760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.008 qpair failed and we were unable to recover it. 00:25:05.008 [2024-07-12 17:14:04.393849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.008 [2024-07-12 17:14:04.393874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.008 qpair failed and we were unable to recover it. 00:25:05.008 [2024-07-12 17:14:04.393994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.008 [2024-07-12 17:14:04.394020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.008 qpair failed and we were unable to recover it. 00:25:05.008 [2024-07-12 17:14:04.394113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.008 [2024-07-12 17:14:04.394139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.008 qpair failed and we were unable to recover it. 00:25:05.008 [2024-07-12 17:14:04.394263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.008 [2024-07-12 17:14:04.394289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.008 qpair failed and we were unable to recover it. 
00:25:05.008 [2024-07-12 17:14:04.394440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.008 [2024-07-12 17:14:04.394465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.008 qpair failed and we were unable to recover it. 00:25:05.008 [2024-07-12 17:14:04.394586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.008 [2024-07-12 17:14:04.394612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.008 qpair failed and we were unable to recover it. 00:25:05.009 [2024-07-12 17:14:04.394705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.009 [2024-07-12 17:14:04.394731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.009 qpair failed and we were unable to recover it. 00:25:05.009 [2024-07-12 17:14:04.394867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.009 [2024-07-12 17:14:04.394893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.009 qpair failed and we were unable to recover it. 00:25:05.009 [2024-07-12 17:14:04.395037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.009 [2024-07-12 17:14:04.395062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.009 qpair failed and we were unable to recover it. 00:25:05.009 [2024-07-12 17:14:04.395212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.009 [2024-07-12 17:14:04.395238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.009 qpair failed and we were unable to recover it. 00:25:05.009 [2024-07-12 17:14:04.395326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.009 [2024-07-12 17:14:04.395352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.009 qpair failed and we were unable to recover it. 00:25:05.009 [2024-07-12 17:14:04.395468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.009 [2024-07-12 17:14:04.395493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.009 qpair failed and we were unable to recover it. 00:25:05.009 [2024-07-12 17:14:04.395595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.009 [2024-07-12 17:14:04.395621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.009 qpair failed and we were unable to recover it. 00:25:05.009 [2024-07-12 17:14:04.395711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.009 [2024-07-12 17:14:04.395744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.009 qpair failed and we were unable to recover it. 
00:25:05.009 [2024-07-12 17:14:04.395863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.009 [2024-07-12 17:14:04.395889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.009 qpair failed and we were unable to recover it. 00:25:05.009 [2024-07-12 17:14:04.395982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.009 [2024-07-12 17:14:04.396007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.009 qpair failed and we were unable to recover it. 00:25:05.009 [2024-07-12 17:14:04.396124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.009 [2024-07-12 17:14:04.396150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.009 qpair failed and we were unable to recover it. 00:25:05.009 [2024-07-12 17:14:04.396273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.009 [2024-07-12 17:14:04.396298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.009 qpair failed and we were unable to recover it. 00:25:05.009 [2024-07-12 17:14:04.396420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.009 [2024-07-12 17:14:04.396445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.009 qpair failed and we were unable to recover it. 00:25:05.009 [2024-07-12 17:14:04.396541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.009 [2024-07-12 17:14:04.396566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.009 qpair failed and we were unable to recover it. 00:25:05.009 [2024-07-12 17:14:04.396658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.009 [2024-07-12 17:14:04.396683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.009 qpair failed and we were unable to recover it. 00:25:05.009 [2024-07-12 17:14:04.396803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.009 [2024-07-12 17:14:04.396829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.009 qpair failed and we were unable to recover it. 00:25:05.009 [2024-07-12 17:14:04.396947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.009 [2024-07-12 17:14:04.396973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.009 qpair failed and we were unable to recover it. 00:25:05.009 [2024-07-12 17:14:04.397072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.009 [2024-07-12 17:14:04.397098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.009 qpair failed and we were unable to recover it. 
00:25:05.009 [2024-07-12 17:14:04.397199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.009 [2024-07-12 17:14:04.397225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.009 qpair failed and we were unable to recover it. 00:25:05.009 [2024-07-12 17:14:04.397350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.009 [2024-07-12 17:14:04.397375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.009 qpair failed and we were unable to recover it. 00:25:05.009 [2024-07-12 17:14:04.397497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.009 [2024-07-12 17:14:04.397523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.009 qpair failed and we were unable to recover it. 00:25:05.009 [2024-07-12 17:14:04.397616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.009 [2024-07-12 17:14:04.397642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.009 qpair failed and we were unable to recover it. 00:25:05.009 [2024-07-12 17:14:04.397789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.009 [2024-07-12 17:14:04.397816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.009 qpair failed and we were unable to recover it. 00:25:05.009 [2024-07-12 17:14:04.397940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.009 [2024-07-12 17:14:04.397966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.009 qpair failed and we were unable to recover it. 00:25:05.009 [2024-07-12 17:14:04.398105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.009 [2024-07-12 17:14:04.398134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.009 qpair failed and we were unable to recover it. 00:25:05.009 [2024-07-12 17:14:04.398232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.009 [2024-07-12 17:14:04.398257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.009 qpair failed and we were unable to recover it. 00:25:05.009 [2024-07-12 17:14:04.398407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.009 [2024-07-12 17:14:04.398433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.009 qpair failed and we were unable to recover it. 00:25:05.009 [2024-07-12 17:14:04.398530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.009 [2024-07-12 17:14:04.398555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.009 qpair failed and we were unable to recover it. 
00:25:05.009 [2024-07-12 17:14:04.398678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.009 [2024-07-12 17:14:04.398703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.009 qpair failed and we were unable to recover it. 00:25:05.009 [2024-07-12 17:14:04.398807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.009 [2024-07-12 17:14:04.398834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.009 qpair failed and we were unable to recover it. 00:25:05.009 [2024-07-12 17:14:04.398957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.009 [2024-07-12 17:14:04.398983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.009 qpair failed and we were unable to recover it. 00:25:05.009 [2024-07-12 17:14:04.399079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.009 [2024-07-12 17:14:04.399105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.009 qpair failed and we were unable to recover it. 00:25:05.009 [2024-07-12 17:14:04.399220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.009 [2024-07-12 17:14:04.399246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.009 qpair failed and we were unable to recover it. 00:25:05.009 [2024-07-12 17:14:04.399358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.009 [2024-07-12 17:14:04.399384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.009 qpair failed and we were unable to recover it. 00:25:05.009 [2024-07-12 17:14:04.399506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.009 [2024-07-12 17:14:04.399532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.009 qpair failed and we were unable to recover it. 00:25:05.009 [2024-07-12 17:14:04.399634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.009 [2024-07-12 17:14:04.399660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.009 qpair failed and we were unable to recover it. 00:25:05.009 [2024-07-12 17:14:04.399751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.009 [2024-07-12 17:14:04.399777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.010 qpair failed and we were unable to recover it. 00:25:05.010 [2024-07-12 17:14:04.399902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.010 [2024-07-12 17:14:04.399927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.010 qpair failed and we were unable to recover it. 
00:25:05.010 [2024-07-12 17:14:04.400049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.010 [2024-07-12 17:14:04.400075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.010 qpair failed and we were unable to recover it. 00:25:05.010 [2024-07-12 17:14:04.400192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.010 [2024-07-12 17:14:04.400218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.010 qpair failed and we were unable to recover it. 00:25:05.010 [2024-07-12 17:14:04.400334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.010 [2024-07-12 17:14:04.400360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.010 qpair failed and we were unable to recover it. 00:25:05.010 [2024-07-12 17:14:04.400460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.010 [2024-07-12 17:14:04.400485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.010 qpair failed and we were unable to recover it. 00:25:05.010 [2024-07-12 17:14:04.400615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.010 [2024-07-12 17:14:04.400641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.010 qpair failed and we were unable to recover it. 00:25:05.010 [2024-07-12 17:14:04.400745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.010 [2024-07-12 17:14:04.400772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.010 qpair failed and we were unable to recover it. 00:25:05.010 [2024-07-12 17:14:04.400864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.010 [2024-07-12 17:14:04.400889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.010 qpair failed and we were unable to recover it. 00:25:05.010 [2024-07-12 17:14:04.401013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.010 [2024-07-12 17:14:04.401038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.010 qpair failed and we were unable to recover it. 00:25:05.010 [2024-07-12 17:14:04.401131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.010 [2024-07-12 17:14:04.401157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.010 qpair failed and we were unable to recover it. 00:25:05.010 [2024-07-12 17:14:04.401302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.010 [2024-07-12 17:14:04.401328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.010 qpair failed and we were unable to recover it. 
00:25:05.010 [2024-07-12 17:14:04.401452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.010 [2024-07-12 17:14:04.401478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.010 qpair failed and we were unable to recover it. 00:25:05.010 [2024-07-12 17:14:04.401606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.010 [2024-07-12 17:14:04.401632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.010 qpair failed and we were unable to recover it. 00:25:05.010 [2024-07-12 17:14:04.401754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.010 [2024-07-12 17:14:04.401780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.010 qpair failed and we were unable to recover it. 00:25:05.010 [2024-07-12 17:14:04.401904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.010 [2024-07-12 17:14:04.401930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.010 qpair failed and we were unable to recover it. 00:25:05.010 [2024-07-12 17:14:04.402071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.010 [2024-07-12 17:14:04.402096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.010 qpair failed and we were unable to recover it. 00:25:05.010 [2024-07-12 17:14:04.402246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.010 [2024-07-12 17:14:04.402272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.010 qpair failed and we were unable to recover it. 00:25:05.010 [2024-07-12 17:14:04.402397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.010 [2024-07-12 17:14:04.402423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.010 qpair failed and we were unable to recover it. 00:25:05.010 [2024-07-12 17:14:04.402545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.010 [2024-07-12 17:14:04.402571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.010 qpair failed and we were unable to recover it. 00:25:05.010 [2024-07-12 17:14:04.402684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.010 [2024-07-12 17:14:04.402710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.010 qpair failed and we were unable to recover it. 00:25:05.010 [2024-07-12 17:14:04.402817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.010 [2024-07-12 17:14:04.402843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.010 qpair failed and we were unable to recover it. 
00:25:05.010 [2024-07-12 17:14:04.402958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.010 [2024-07-12 17:14:04.402984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.010 qpair failed and we were unable to recover it. 00:25:05.010 [2024-07-12 17:14:04.403111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.010 [2024-07-12 17:14:04.403137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.010 qpair failed and we were unable to recover it. 00:25:05.010 [2024-07-12 17:14:04.403256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.010 [2024-07-12 17:14:04.403281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.010 qpair failed and we were unable to recover it. 00:25:05.010 [2024-07-12 17:14:04.403430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.010 [2024-07-12 17:14:04.403456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.010 qpair failed and we were unable to recover it. 00:25:05.010 [2024-07-12 17:14:04.403541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.010 [2024-07-12 17:14:04.403566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.010 qpair failed and we were unable to recover it. 00:25:05.010 [2024-07-12 17:14:04.403666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.010 [2024-07-12 17:14:04.403692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.010 qpair failed and we were unable to recover it. 00:25:05.010 [2024-07-12 17:14:04.403819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.010 [2024-07-12 17:14:04.403849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.010 qpair failed and we were unable to recover it. 00:25:05.010 [2024-07-12 17:14:04.403938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.010 [2024-07-12 17:14:04.403963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.010 qpair failed and we were unable to recover it. 00:25:05.010 [2024-07-12 17:14:04.404087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.010 [2024-07-12 17:14:04.404112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.010 qpair failed and we were unable to recover it. 00:25:05.010 [2024-07-12 17:14:04.404232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.010 [2024-07-12 17:14:04.404257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.010 qpair failed and we were unable to recover it. 
00:25:05.010 [2024-07-12 17:14:04.404380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.010 [2024-07-12 17:14:04.404407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.010 qpair failed and we were unable to recover it. 00:25:05.010 [2024-07-12 17:14:04.404526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.010 [2024-07-12 17:14:04.404552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.010 qpair failed and we were unable to recover it. 00:25:05.010 [2024-07-12 17:14:04.404641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.010 [2024-07-12 17:14:04.404666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.010 qpair failed and we were unable to recover it. 00:25:05.010 [2024-07-12 17:14:04.404764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.010 [2024-07-12 17:14:04.404791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.010 qpair failed and we were unable to recover it. 00:25:05.010 [2024-07-12 17:14:04.404908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.010 [2024-07-12 17:14:04.404933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.010 qpair failed and we were unable to recover it. 00:25:05.010 [2024-07-12 17:14:04.405030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.010 [2024-07-12 17:14:04.405056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.010 qpair failed and we were unable to recover it. 00:25:05.010 [2024-07-12 17:14:04.405201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.010 [2024-07-12 17:14:04.405226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.010 qpair failed and we were unable to recover it. 00:25:05.010 [2024-07-12 17:14:04.405352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.010 [2024-07-12 17:14:04.405378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.010 qpair failed and we were unable to recover it. 00:25:05.010 [2024-07-12 17:14:04.405468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.010 [2024-07-12 17:14:04.405493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.010 qpair failed and we were unable to recover it. 00:25:05.010 [2024-07-12 17:14:04.405612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.010 [2024-07-12 17:14:04.405637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.011 qpair failed and we were unable to recover it. 
00:25:05.011 [2024-07-12 17:14:04.405756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.011 [2024-07-12 17:14:04.405783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.011 qpair failed and we were unable to recover it. 00:25:05.011 [2024-07-12 17:14:04.405875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.011 [2024-07-12 17:14:04.405901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.011 qpair failed and we were unable to recover it. 00:25:05.011 [2024-07-12 17:14:04.405995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.011 [2024-07-12 17:14:04.406021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.011 qpair failed and we were unable to recover it. 00:25:05.011 [2024-07-12 17:14:04.406117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.011 [2024-07-12 17:14:04.406143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.011 qpair failed and we were unable to recover it. 00:25:05.011 [2024-07-12 17:14:04.406231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.011 [2024-07-12 17:14:04.406257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.011 qpair failed and we were unable to recover it. 00:25:05.011 [2024-07-12 17:14:04.406342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.011 [2024-07-12 17:14:04.406367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.011 qpair failed and we were unable to recover it. 00:25:05.011 [2024-07-12 17:14:04.406485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.011 [2024-07-12 17:14:04.406510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.011 qpair failed and we were unable to recover it. 00:25:05.011 [2024-07-12 17:14:04.406658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.011 [2024-07-12 17:14:04.406683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.011 qpair failed and we were unable to recover it. 00:25:05.011 [2024-07-12 17:14:04.406778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.011 [2024-07-12 17:14:04.406804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.011 qpair failed and we were unable to recover it. 00:25:05.011 [2024-07-12 17:14:04.406952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.011 [2024-07-12 17:14:04.406979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.011 qpair failed and we were unable to recover it. 
00:25:05.011 [2024-07-12 17:14:04.407102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.011 [2024-07-12 17:14:04.407128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.011 qpair failed and we were unable to recover it. 00:25:05.011 [2024-07-12 17:14:04.407250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.011 [2024-07-12 17:14:04.407276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.011 qpair failed and we were unable to recover it. 00:25:05.011 [2024-07-12 17:14:04.407396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.011 [2024-07-12 17:14:04.407422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.011 qpair failed and we were unable to recover it. 00:25:05.011 [2024-07-12 17:14:04.407546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.011 [2024-07-12 17:14:04.407572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.011 qpair failed and we were unable to recover it. 00:25:05.011 [2024-07-12 17:14:04.407667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.011 [2024-07-12 17:14:04.407693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.011 qpair failed and we were unable to recover it. 00:25:05.011 [2024-07-12 17:14:04.407841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.011 [2024-07-12 17:14:04.407867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.011 qpair failed and we were unable to recover it. 00:25:05.011 [2024-07-12 17:14:04.407986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.011 [2024-07-12 17:14:04.408011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.011 qpair failed and we were unable to recover it. 00:25:05.011 [2024-07-12 17:14:04.408102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.011 [2024-07-12 17:14:04.408128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.011 qpair failed and we were unable to recover it. 00:25:05.011 [2024-07-12 17:14:04.408216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.011 [2024-07-12 17:14:04.408241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.011 qpair failed and we were unable to recover it. 00:25:05.011 [2024-07-12 17:14:04.408334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.011 [2024-07-12 17:14:04.408360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.011 qpair failed and we were unable to recover it. 
00:25:05.011 [2024-07-12 17:14:04.408451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.011 [2024-07-12 17:14:04.408477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.011 qpair failed and we were unable to recover it. 00:25:05.011 [2024-07-12 17:14:04.408582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.011 [2024-07-12 17:14:04.408607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.011 qpair failed and we were unable to recover it. 00:25:05.011 [2024-07-12 17:14:04.408732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.011 [2024-07-12 17:14:04.408763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.011 qpair failed and we were unable to recover it. 00:25:05.011 [2024-07-12 17:14:04.408849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.011 [2024-07-12 17:14:04.408875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.011 qpair failed and we were unable to recover it. 00:25:05.011 [2024-07-12 17:14:04.408994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.011 [2024-07-12 17:14:04.409019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.011 qpair failed and we were unable to recover it. 00:25:05.011 [2024-07-12 17:14:04.409112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.011 [2024-07-12 17:14:04.409138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.011 qpair failed and we were unable to recover it. 00:25:05.011 [2024-07-12 17:14:04.409219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.011 [2024-07-12 17:14:04.409249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.011 qpair failed and we were unable to recover it. 00:25:05.011 [2024-07-12 17:14:04.409394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.011 [2024-07-12 17:14:04.409421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.011 qpair failed and we were unable to recover it. 00:25:05.011 [2024-07-12 17:14:04.409517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.011 [2024-07-12 17:14:04.409543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.011 qpair failed and we were unable to recover it. 00:25:05.011 [2024-07-12 17:14:04.409654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.011 [2024-07-12 17:14:04.409680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.011 qpair failed and we were unable to recover it. 
00:25:05.011 [2024-07-12 17:14:04.409806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.011 [2024-07-12 17:14:04.409833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.011 qpair failed and we were unable to recover it. 00:25:05.011 [2024-07-12 17:14:04.409953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.011 [2024-07-12 17:14:04.409979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.011 qpair failed and we were unable to recover it. 00:25:05.011 [2024-07-12 17:14:04.410096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.011 [2024-07-12 17:14:04.410122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.011 qpair failed and we were unable to recover it. 00:25:05.011 [2024-07-12 17:14:04.410243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.011 [2024-07-12 17:14:04.410269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.011 qpair failed and we were unable to recover it. 00:25:05.011 [2024-07-12 17:14:04.410372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.011 [2024-07-12 17:14:04.410398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.011 qpair failed and we were unable to recover it. 00:25:05.011 [2024-07-12 17:14:04.410517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.011 [2024-07-12 17:14:04.410543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.011 qpair failed and we were unable to recover it. 00:25:05.011 [2024-07-12 17:14:04.410665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.011 [2024-07-12 17:14:04.410691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.011 qpair failed and we were unable to recover it. 00:25:05.011 [2024-07-12 17:14:04.410852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.011 [2024-07-12 17:14:04.410879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.011 qpair failed and we were unable to recover it. 00:25:05.011 [2024-07-12 17:14:04.410974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.011 [2024-07-12 17:14:04.411001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.011 qpair failed and we were unable to recover it. 00:25:05.011 [2024-07-12 17:14:04.411118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.011 [2024-07-12 17:14:04.411144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.011 qpair failed and we were unable to recover it. 
00:25:05.011 [2024-07-12 17:14:04.411267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.011 [2024-07-12 17:14:04.411293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.011 qpair failed and we were unable to recover it. 00:25:05.011 [2024-07-12 17:14:04.411392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.011 [2024-07-12 17:14:04.411419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.011 qpair failed and we were unable to recover it. 00:25:05.011 [2024-07-12 17:14:04.411502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.011 [2024-07-12 17:14:04.411528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.011 qpair failed and we were unable to recover it. 00:25:05.011 [2024-07-12 17:14:04.411619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.011 [2024-07-12 17:14:04.411645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.011 qpair failed and we were unable to recover it. 00:25:05.011 [2024-07-12 17:14:04.411767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.011 [2024-07-12 17:14:04.411794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.011 qpair failed and we were unable to recover it. 00:25:05.011 [2024-07-12 17:14:04.411910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.011 [2024-07-12 17:14:04.411936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.011 qpair failed and we were unable to recover it. 00:25:05.011 [2024-07-12 17:14:04.412028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.011 [2024-07-12 17:14:04.412054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.011 qpair failed and we were unable to recover it. 00:25:05.011 [2024-07-12 17:14:04.412145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.011 [2024-07-12 17:14:04.412171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.011 qpair failed and we were unable to recover it. 00:25:05.011 [2024-07-12 17:14:04.412294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.012 [2024-07-12 17:14:04.412320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.012 qpair failed and we were unable to recover it. 00:25:05.012 [2024-07-12 17:14:04.412438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.012 [2024-07-12 17:14:04.412464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.012 qpair failed and we were unable to recover it. 
00:25:05.012 [2024-07-12 17:14:04.412606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.012 [2024-07-12 17:14:04.412631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.012 qpair failed and we were unable to recover it. 00:25:05.012 [2024-07-12 17:14:04.412766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.012 [2024-07-12 17:14:04.412792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.012 qpair failed and we were unable to recover it. 00:25:05.012 [2024-07-12 17:14:04.412889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.012 [2024-07-12 17:14:04.412915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.012 qpair failed and we were unable to recover it. 00:25:05.012 [2024-07-12 17:14:04.413071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.012 [2024-07-12 17:14:04.413097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.012 qpair failed and we were unable to recover it. 00:25:05.012 [2024-07-12 17:14:04.413223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.012 [2024-07-12 17:14:04.413249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.012 qpair failed and we were unable to recover it. 00:25:05.012 [2024-07-12 17:14:04.413371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.012 [2024-07-12 17:14:04.413397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.012 qpair failed and we were unable to recover it. 00:25:05.012 [2024-07-12 17:14:04.413512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.012 [2024-07-12 17:14:04.413538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.012 qpair failed and we were unable to recover it. 00:25:05.012 [2024-07-12 17:14:04.413658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.012 [2024-07-12 17:14:04.413684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.012 qpair failed and we were unable to recover it. 00:25:05.012 [2024-07-12 17:14:04.413830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.012 [2024-07-12 17:14:04.413857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.012 qpair failed and we were unable to recover it. 00:25:05.012 [2024-07-12 17:14:04.413970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.012 [2024-07-12 17:14:04.413995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.012 qpair failed and we were unable to recover it. 
00:25:05.012 [2024-07-12 17:14:04.414087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.012 [2024-07-12 17:14:04.414113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420
00:25:05.012 qpair failed and we were unable to recover it.
[The identical posix_sock_create "connect() failed, errno = 111" / nvme_tcp_qpair_connect_sock "sock connection error" pair for tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 repeats continuously from 2024-07-12 17:14:04.414197 through 17:14:04.435769 (elapsed-time prefixes 00:25:05.012-00:25:05.015); every repetition ends with "qpair failed and we were unable to recover it."]
00:25:05.015 [2024-07-12 17:14:04.435925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.015 [2024-07-12 17:14:04.435971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420
00:25:05.015 qpair failed and we were unable to recover it.
[The same error pair then repeats for tqpair=0x7fab6c000b90 through 2024-07-12 17:14:04.437023, and again for tqpair=0x7fab74000b90 from 17:14:04.437159 through 17:14:04.444064, each attempt likewise ending with "qpair failed and we were unable to recover it."]
00:25:05.016 [2024-07-12 17:14:04.444213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.016 [2024-07-12 17:14:04.444239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.016 qpair failed and we were unable to recover it. 00:25:05.016 [2024-07-12 17:14:04.444365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.016 [2024-07-12 17:14:04.444391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.016 qpair failed and we were unable to recover it. 00:25:05.016 [2024-07-12 17:14:04.444536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.016 [2024-07-12 17:14:04.444561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.016 qpair failed and we were unable to recover it. 00:25:05.016 [2024-07-12 17:14:04.444695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.016 [2024-07-12 17:14:04.444721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.016 qpair failed and we were unable to recover it. 00:25:05.016 [2024-07-12 17:14:04.444867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.016 [2024-07-12 17:14:04.444893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.016 qpair failed and we were unable to recover it. 00:25:05.016 [2024-07-12 17:14:04.444995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.016 [2024-07-12 17:14:04.445021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.016 qpair failed and we were unable to recover it. 00:25:05.016 [2024-07-12 17:14:04.445110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.016 [2024-07-12 17:14:04.445136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.016 qpair failed and we were unable to recover it. 00:25:05.016 [2024-07-12 17:14:04.445283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.016 [2024-07-12 17:14:04.445309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.016 qpair failed and we were unable to recover it. 00:25:05.016 [2024-07-12 17:14:04.445434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.016 [2024-07-12 17:14:04.445460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.016 qpair failed and we were unable to recover it. 00:25:05.016 [2024-07-12 17:14:04.445553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.016 [2024-07-12 17:14:04.445578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.016 qpair failed and we were unable to recover it. 
00:25:05.016 [2024-07-12 17:14:04.445703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.016 [2024-07-12 17:14:04.445729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.016 qpair failed and we were unable to recover it. 00:25:05.016 [2024-07-12 17:14:04.445861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.016 [2024-07-12 17:14:04.445888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.016 qpair failed and we were unable to recover it. 00:25:05.016 [2024-07-12 17:14:04.445980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.016 [2024-07-12 17:14:04.446005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.016 qpair failed and we were unable to recover it. 00:25:05.016 [2024-07-12 17:14:04.446153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.016 [2024-07-12 17:14:04.446178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.016 qpair failed and we were unable to recover it. 00:25:05.016 [2024-07-12 17:14:04.446300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.016 [2024-07-12 17:14:04.446326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.016 qpair failed and we were unable to recover it. 00:25:05.016 [2024-07-12 17:14:04.446441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.016 [2024-07-12 17:14:04.446467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.016 qpair failed and we were unable to recover it. 00:25:05.016 [2024-07-12 17:14:04.446552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.016 [2024-07-12 17:14:04.446578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.016 qpair failed and we were unable to recover it. 00:25:05.016 [2024-07-12 17:14:04.446699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.016 [2024-07-12 17:14:04.446725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.016 qpair failed and we were unable to recover it. 00:25:05.016 [2024-07-12 17:14:04.446820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.016 [2024-07-12 17:14:04.446847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.016 qpair failed and we were unable to recover it. 00:25:05.016 [2024-07-12 17:14:04.446972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.016 [2024-07-12 17:14:04.446997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.016 qpair failed and we were unable to recover it. 
00:25:05.016 [2024-07-12 17:14:04.447119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.016 [2024-07-12 17:14:04.447145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.016 qpair failed and we were unable to recover it. 00:25:05.016 [2024-07-12 17:14:04.447242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.016 [2024-07-12 17:14:04.447268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.016 qpair failed and we were unable to recover it. 00:25:05.016 [2024-07-12 17:14:04.447384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.016 [2024-07-12 17:14:04.447410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.016 qpair failed and we were unable to recover it. 00:25:05.016 [2024-07-12 17:14:04.447525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.016 [2024-07-12 17:14:04.447555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.016 qpair failed and we were unable to recover it. 00:25:05.016 [2024-07-12 17:14:04.447672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.016 [2024-07-12 17:14:04.447697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.016 qpair failed and we were unable to recover it. 00:25:05.016 [2024-07-12 17:14:04.447805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.016 [2024-07-12 17:14:04.447832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.016 qpair failed and we were unable to recover it. 00:25:05.016 [2024-07-12 17:14:04.447947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.016 [2024-07-12 17:14:04.447973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.016 qpair failed and we were unable to recover it. 00:25:05.016 [2024-07-12 17:14:04.448123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.016 [2024-07-12 17:14:04.448149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.016 qpair failed and we were unable to recover it. 00:25:05.016 [2024-07-12 17:14:04.448249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.016 [2024-07-12 17:14:04.448274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.016 qpair failed and we were unable to recover it. 00:25:05.016 [2024-07-12 17:14:04.448397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.016 [2024-07-12 17:14:04.448423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.016 qpair failed and we were unable to recover it. 
00:25:05.016 [2024-07-12 17:14:04.448512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.016 [2024-07-12 17:14:04.448538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.016 qpair failed and we were unable to recover it. 00:25:05.016 [2024-07-12 17:14:04.448680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.016 [2024-07-12 17:14:04.448705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.016 qpair failed and we were unable to recover it. 00:25:05.016 [2024-07-12 17:14:04.448877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.016 [2024-07-12 17:14:04.448904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.016 qpair failed and we were unable to recover it. 00:25:05.016 [2024-07-12 17:14:04.449026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.016 [2024-07-12 17:14:04.449051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.016 qpair failed and we were unable to recover it. 00:25:05.016 [2024-07-12 17:14:04.449201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.016 [2024-07-12 17:14:04.449227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.016 qpair failed and we were unable to recover it. 00:25:05.016 [2024-07-12 17:14:04.449346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.016 [2024-07-12 17:14:04.449372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.016 qpair failed and we were unable to recover it. 00:25:05.016 [2024-07-12 17:14:04.449500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.016 [2024-07-12 17:14:04.449525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.016 qpair failed and we were unable to recover it. 00:25:05.016 [2024-07-12 17:14:04.449621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.016 [2024-07-12 17:14:04.449647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.016 qpair failed and we were unable to recover it. 00:25:05.016 [2024-07-12 17:14:04.449768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.016 [2024-07-12 17:14:04.449795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.016 qpair failed and we were unable to recover it. 00:25:05.016 [2024-07-12 17:14:04.449885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.016 [2024-07-12 17:14:04.449912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.016 qpair failed and we were unable to recover it. 
00:25:05.016 [2024-07-12 17:14:04.450038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.017 [2024-07-12 17:14:04.450064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.017 qpair failed and we were unable to recover it. 00:25:05.017 [2024-07-12 17:14:04.450181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.017 [2024-07-12 17:14:04.450207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.017 qpair failed and we were unable to recover it. 00:25:05.017 [2024-07-12 17:14:04.450329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.017 [2024-07-12 17:14:04.450355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.017 qpair failed and we were unable to recover it. 00:25:05.017 [2024-07-12 17:14:04.450476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.017 [2024-07-12 17:14:04.450501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.017 qpair failed and we were unable to recover it. 00:25:05.017 [2024-07-12 17:14:04.450589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.017 [2024-07-12 17:14:04.450615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.017 qpair failed and we were unable to recover it. 00:25:05.017 [2024-07-12 17:14:04.450764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.017 [2024-07-12 17:14:04.450790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.017 qpair failed and we were unable to recover it. 00:25:05.017 [2024-07-12 17:14:04.450910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.017 [2024-07-12 17:14:04.450936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.017 qpair failed and we were unable to recover it. 00:25:05.017 [2024-07-12 17:14:04.451060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.017 [2024-07-12 17:14:04.451086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.017 qpair failed and we were unable to recover it. 00:25:05.017 [2024-07-12 17:14:04.451208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.017 [2024-07-12 17:14:04.451234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.017 qpair failed and we were unable to recover it. 00:25:05.017 [2024-07-12 17:14:04.451326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.017 [2024-07-12 17:14:04.451352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.017 qpair failed and we were unable to recover it. 
00:25:05.017 [2024-07-12 17:14:04.451477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.017 [2024-07-12 17:14:04.451503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.017 qpair failed and we were unable to recover it. 00:25:05.017 [2024-07-12 17:14:04.451621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.017 [2024-07-12 17:14:04.451647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.017 qpair failed and we were unable to recover it. 00:25:05.017 [2024-07-12 17:14:04.451764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.017 [2024-07-12 17:14:04.451791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.017 qpair failed and we were unable to recover it. 00:25:05.017 [2024-07-12 17:14:04.451890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.017 [2024-07-12 17:14:04.451916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.017 qpair failed and we were unable to recover it. 00:25:05.017 [2024-07-12 17:14:04.452016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.017 [2024-07-12 17:14:04.452043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.017 qpair failed and we were unable to recover it. 00:25:05.017 [2024-07-12 17:14:04.452137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.017 [2024-07-12 17:14:04.452163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.017 qpair failed and we were unable to recover it. 00:25:05.017 [2024-07-12 17:14:04.452288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.017 [2024-07-12 17:14:04.452314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.017 qpair failed and we were unable to recover it. 00:25:05.017 [2024-07-12 17:14:04.452403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.017 [2024-07-12 17:14:04.452429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.017 qpair failed and we were unable to recover it. 00:25:05.017 [2024-07-12 17:14:04.452519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.017 [2024-07-12 17:14:04.452545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.017 qpair failed and we were unable to recover it. 00:25:05.017 [2024-07-12 17:14:04.452636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.017 [2024-07-12 17:14:04.452662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.017 qpair failed and we were unable to recover it. 
00:25:05.017 [2024-07-12 17:14:04.452782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.017 [2024-07-12 17:14:04.452808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.017 qpair failed and we were unable to recover it. 00:25:05.017 [2024-07-12 17:14:04.452894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.017 [2024-07-12 17:14:04.452919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.017 qpair failed and we were unable to recover it. 00:25:05.017 [2024-07-12 17:14:04.453035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.017 [2024-07-12 17:14:04.453060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.017 qpair failed and we were unable to recover it. 00:25:05.017 [2024-07-12 17:14:04.453149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.017 [2024-07-12 17:14:04.453178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.017 qpair failed and we were unable to recover it. 00:25:05.017 [2024-07-12 17:14:04.453274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.017 [2024-07-12 17:14:04.453300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.017 qpair failed and we were unable to recover it. 00:25:05.017 [2024-07-12 17:14:04.453394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.017 [2024-07-12 17:14:04.453420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.017 qpair failed and we were unable to recover it. 00:25:05.017 [2024-07-12 17:14:04.453533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.017 [2024-07-12 17:14:04.453559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.017 qpair failed and we were unable to recover it. 00:25:05.017 [2024-07-12 17:14:04.453662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.017 [2024-07-12 17:14:04.453688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.017 qpair failed and we were unable to recover it. 00:25:05.017 [2024-07-12 17:14:04.453794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.017 [2024-07-12 17:14:04.453821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.017 qpair failed and we were unable to recover it. 00:25:05.017 [2024-07-12 17:14:04.453940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.017 [2024-07-12 17:14:04.453966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.017 qpair failed and we were unable to recover it. 
00:25:05.017 [2024-07-12 17:14:04.454059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.017 [2024-07-12 17:14:04.454085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.017 qpair failed and we were unable to recover it. 00:25:05.017 [2024-07-12 17:14:04.454199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.017 [2024-07-12 17:14:04.454225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.017 qpair failed and we were unable to recover it. 00:25:05.017 [2024-07-12 17:14:04.454369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.017 [2024-07-12 17:14:04.454394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.017 qpair failed and we were unable to recover it. 00:25:05.017 [2024-07-12 17:14:04.454517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.017 [2024-07-12 17:14:04.454543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.017 qpair failed and we were unable to recover it. 00:25:05.017 [2024-07-12 17:14:04.454662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.017 [2024-07-12 17:14:04.454688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.017 qpair failed and we were unable to recover it. 00:25:05.017 [2024-07-12 17:14:04.454803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.017 [2024-07-12 17:14:04.454830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.017 qpair failed and we were unable to recover it. 00:25:05.017 [2024-07-12 17:14:04.454978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.017 [2024-07-12 17:14:04.455004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.017 qpair failed and we were unable to recover it. 00:25:05.017 [2024-07-12 17:14:04.455126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.017 [2024-07-12 17:14:04.455152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.017 qpair failed and we were unable to recover it. 00:25:05.017 [2024-07-12 17:14:04.455242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.017 [2024-07-12 17:14:04.455268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.017 qpair failed and we were unable to recover it. 00:25:05.017 [2024-07-12 17:14:04.455387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.017 [2024-07-12 17:14:04.455413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.017 qpair failed and we were unable to recover it. 
00:25:05.017 [2024-07-12 17:14:04.455534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.017 [2024-07-12 17:14:04.455560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.017 qpair failed and we were unable to recover it. 00:25:05.017 [2024-07-12 17:14:04.455708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.017 [2024-07-12 17:14:04.455734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.017 qpair failed and we were unable to recover it. 00:25:05.017 [2024-07-12 17:14:04.455902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.017 [2024-07-12 17:14:04.455929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.017 qpair failed and we were unable to recover it. 00:25:05.017 [2024-07-12 17:14:04.456025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.017 [2024-07-12 17:14:04.456051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.017 qpair failed and we were unable to recover it. 00:25:05.017 [2024-07-12 17:14:04.456165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.017 [2024-07-12 17:14:04.456191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.017 qpair failed and we were unable to recover it. 00:25:05.017 [2024-07-12 17:14:04.456308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.017 [2024-07-12 17:14:04.456334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.017 qpair failed and we were unable to recover it. 00:25:05.017 [2024-07-12 17:14:04.456453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.017 [2024-07-12 17:14:04.456479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.017 qpair failed and we were unable to recover it. 00:25:05.017 [2024-07-12 17:14:04.456566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.017 [2024-07-12 17:14:04.456592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.017 qpair failed and we were unable to recover it. 00:25:05.017 [2024-07-12 17:14:04.456714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.017 [2024-07-12 17:14:04.456745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.017 qpair failed and we were unable to recover it. 00:25:05.018 [2024-07-12 17:14:04.456871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.018 [2024-07-12 17:14:04.456897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.018 qpair failed and we were unable to recover it. 
00:25:05.018 [2024-07-12 17:14:04.457027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.018 [2024-07-12 17:14:04.457053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.018 qpair failed and we were unable to recover it. 00:25:05.018 [2024-07-12 17:14:04.457172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.018 [2024-07-12 17:14:04.457198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.018 qpair failed and we were unable to recover it. 00:25:05.018 [2024-07-12 17:14:04.457321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.018 [2024-07-12 17:14:04.457347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.018 qpair failed and we were unable to recover it. 00:25:05.018 [2024-07-12 17:14:04.457443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.018 [2024-07-12 17:14:04.457469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.018 qpair failed and we were unable to recover it. 00:25:05.018 [2024-07-12 17:14:04.457593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.018 [2024-07-12 17:14:04.457619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.018 qpair failed and we were unable to recover it. 00:25:05.018 [2024-07-12 17:14:04.457748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.018 [2024-07-12 17:14:04.457775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.018 qpair failed and we were unable to recover it. 00:25:05.018 [2024-07-12 17:14:04.457892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.018 [2024-07-12 17:14:04.457918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.018 qpair failed and we were unable to recover it. 00:25:05.018 [2024-07-12 17:14:04.458037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.018 [2024-07-12 17:14:04.458063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.018 qpair failed and we were unable to recover it. 00:25:05.018 [2024-07-12 17:14:04.458175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.018 [2024-07-12 17:14:04.458200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.018 qpair failed and we were unable to recover it. 00:25:05.018 [2024-07-12 17:14:04.458313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.018 [2024-07-12 17:14:04.458339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.018 qpair failed and we were unable to recover it. 
00:25:05.018 [2024-07-12 17:14:04.458434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.018 [2024-07-12 17:14:04.458460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.018 qpair failed and we were unable to recover it. 00:25:05.018 [2024-07-12 17:14:04.458583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.018 [2024-07-12 17:14:04.458609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.018 qpair failed and we were unable to recover it. 00:25:05.018 [2024-07-12 17:14:04.458730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.018 [2024-07-12 17:14:04.458762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.018 qpair failed and we were unable to recover it. 00:25:05.018 [2024-07-12 17:14:04.458882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.018 [2024-07-12 17:14:04.458911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.018 qpair failed and we were unable to recover it. 00:25:05.018 [2024-07-12 17:14:04.459001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.018 [2024-07-12 17:14:04.459026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.018 qpair failed and we were unable to recover it. 00:25:05.018 [2024-07-12 17:14:04.459113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.018 [2024-07-12 17:14:04.459139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.018 qpair failed and we were unable to recover it. 00:25:05.018 [2024-07-12 17:14:04.459234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.018 [2024-07-12 17:14:04.459260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.018 qpair failed and we were unable to recover it. 00:25:05.018 [2024-07-12 17:14:04.459376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.018 [2024-07-12 17:14:04.459402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.018 qpair failed and we were unable to recover it. 00:25:05.018 [2024-07-12 17:14:04.459496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.018 [2024-07-12 17:14:04.459522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.018 qpair failed and we were unable to recover it. 00:25:05.018 [2024-07-12 17:14:04.459642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.018 [2024-07-12 17:14:04.459669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.018 qpair failed and we were unable to recover it. 
00:25:05.018 [2024-07-12 17:14:04.459815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.018 [2024-07-12 17:14:04.459842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.018 qpair failed and we were unable to recover it. 00:25:05.018 [2024-07-12 17:14:04.459935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.018 [2024-07-12 17:14:04.459961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.018 qpair failed and we were unable to recover it. 00:25:05.018 [2024-07-12 17:14:04.460077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.018 [2024-07-12 17:14:04.460103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.018 qpair failed and we were unable to recover it. 00:25:05.018 [2024-07-12 17:14:04.460224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.018 [2024-07-12 17:14:04.460250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.018 qpair failed and we were unable to recover it. 00:25:05.018 [2024-07-12 17:14:04.460332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.018 [2024-07-12 17:14:04.460358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.018 qpair failed and we were unable to recover it. 00:25:05.018 [2024-07-12 17:14:04.460502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.018 [2024-07-12 17:14:04.460527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.018 qpair failed and we were unable to recover it. 00:25:05.018 [2024-07-12 17:14:04.460649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.018 [2024-07-12 17:14:04.460675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.018 qpair failed and we were unable to recover it. 00:25:05.018 [2024-07-12 17:14:04.460814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.018 [2024-07-12 17:14:04.460841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.018 qpair failed and we were unable to recover it. 00:25:05.018 [2024-07-12 17:14:04.460935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.018 [2024-07-12 17:14:04.460961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.018 qpair failed and we were unable to recover it. 00:25:05.018 [2024-07-12 17:14:04.461055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.018 [2024-07-12 17:14:04.461081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.018 qpair failed and we were unable to recover it. 
00:25:05.018 [2024-07-12 17:14:04.461170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.018 [2024-07-12 17:14:04.461196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.018 qpair failed and we were unable to recover it. 00:25:05.018 [2024-07-12 17:14:04.461312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.018 [2024-07-12 17:14:04.461338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.018 qpair failed and we were unable to recover it. 00:25:05.018 [2024-07-12 17:14:04.461434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.018 [2024-07-12 17:14:04.461459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.018 qpair failed and we were unable to recover it. 00:25:05.018 [2024-07-12 17:14:04.461549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.018 [2024-07-12 17:14:04.461575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.018 qpair failed and we were unable to recover it. 00:25:05.018 [2024-07-12 17:14:04.461664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.018 [2024-07-12 17:14:04.461690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.018 qpair failed and we were unable to recover it. 00:25:05.018 [2024-07-12 17:14:04.461823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.018 [2024-07-12 17:14:04.461849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.018 qpair failed and we were unable to recover it. 00:25:05.018 [2024-07-12 17:14:04.461972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.018 [2024-07-12 17:14:04.461998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.018 qpair failed and we were unable to recover it. 00:25:05.018 [2024-07-12 17:14:04.462081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.018 [2024-07-12 17:14:04.462107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.018 qpair failed and we were unable to recover it. 00:25:05.018 [2024-07-12 17:14:04.462225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.018 [2024-07-12 17:14:04.462251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.018 qpair failed and we were unable to recover it. 00:25:05.018 [2024-07-12 17:14:04.462339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.018 [2024-07-12 17:14:04.462365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.018 qpair failed and we were unable to recover it. 
00:25:05.018 [2024-07-12 17:14:04.462455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.018 [2024-07-12 17:14:04.462481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.018 qpair failed and we were unable to recover it. 00:25:05.018 [2024-07-12 17:14:04.462597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.018 [2024-07-12 17:14:04.462623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.018 qpair failed and we were unable to recover it. 00:25:05.018 [2024-07-12 17:14:04.462756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.018 [2024-07-12 17:14:04.462783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.018 qpair failed and we were unable to recover it. 00:25:05.018 [2024-07-12 17:14:04.462883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.018 [2024-07-12 17:14:04.462909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.018 qpair failed and we were unable to recover it. 00:25:05.018 [2024-07-12 17:14:04.463053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.018 [2024-07-12 17:14:04.463078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.018 qpair failed and we were unable to recover it. 00:25:05.018 [2024-07-12 17:14:04.463227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.018 [2024-07-12 17:14:04.463253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.018 qpair failed and we were unable to recover it. 00:25:05.018 [2024-07-12 17:14:04.463378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.018 [2024-07-12 17:14:04.463404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.018 qpair failed and we were unable to recover it. 00:25:05.018 [2024-07-12 17:14:04.463525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.018 [2024-07-12 17:14:04.463551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.018 qpair failed and we were unable to recover it. 00:25:05.018 [2024-07-12 17:14:04.463665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.018 [2024-07-12 17:14:04.463690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.018 qpair failed and we were unable to recover it. 00:25:05.018 [2024-07-12 17:14:04.463812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.018 [2024-07-12 17:14:04.463839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.018 qpair failed and we were unable to recover it. 
00:25:05.018 [2024-07-12 17:14:04.463988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.018 [2024-07-12 17:14:04.464014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.018 qpair failed and we were unable to recover it. 00:25:05.019 [2024-07-12 17:14:04.464134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.464160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 00:25:05.019 [2024-07-12 17:14:04.464313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.464339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 00:25:05.019 [2024-07-12 17:14:04.464486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.464520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 00:25:05.019 [2024-07-12 17:14:04.464638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.464664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 00:25:05.019 [2024-07-12 17:14:04.464794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.464821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 00:25:05.019 [2024-07-12 17:14:04.464947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.464973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 00:25:05.019 [2024-07-12 17:14:04.465097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.465123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 00:25:05.019 [2024-07-12 17:14:04.465208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.465233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 00:25:05.019 [2024-07-12 17:14:04.465392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.465418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 
00:25:05.019 [2024-07-12 17:14:04.465538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.465564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 00:25:05.019 [2024-07-12 17:14:04.465679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.465706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 00:25:05.019 [2024-07-12 17:14:04.465835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.465861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 00:25:05.019 [2024-07-12 17:14:04.466008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.466034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 00:25:05.019 [2024-07-12 17:14:04.466177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.466203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 00:25:05.019 [2024-07-12 17:14:04.466359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.466385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 00:25:05.019 [2024-07-12 17:14:04.466529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.466555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 00:25:05.019 [2024-07-12 17:14:04.466650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.466675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 00:25:05.019 [2024-07-12 17:14:04.466795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.466821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 00:25:05.019 [2024-07-12 17:14:04.466916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.466942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 
00:25:05.019 [2024-07-12 17:14:04.467061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.467088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 00:25:05.019 [2024-07-12 17:14:04.467234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.467259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 00:25:05.019 [2024-07-12 17:14:04.467386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.467412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 00:25:05.019 [2024-07-12 17:14:04.467532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.467558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 00:25:05.019 [2024-07-12 17:14:04.467649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.467675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 00:25:05.019 [2024-07-12 17:14:04.467764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.467790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 00:25:05.019 [2024-07-12 17:14:04.467876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.467902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 00:25:05.019 [2024-07-12 17:14:04.467996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.468022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 00:25:05.019 [2024-07-12 17:14:04.468171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.468197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 00:25:05.019 [2024-07-12 17:14:04.468345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.468371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 
00:25:05.019 [2024-07-12 17:14:04.468517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.468543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 00:25:05.019 [2024-07-12 17:14:04.468631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.468657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 00:25:05.019 [2024-07-12 17:14:04.468752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.468779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 00:25:05.019 [2024-07-12 17:14:04.468899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.468925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 00:25:05.019 [2024-07-12 17:14:04.469017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.469043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 00:25:05.019 [2024-07-12 17:14:04.469155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.469181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 00:25:05.019 [2024-07-12 17:14:04.469298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.469325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 00:25:05.019 [2024-07-12 17:14:04.469420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.469446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 00:25:05.019 [2024-07-12 17:14:04.469598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.469624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 00:25:05.019 [2024-07-12 17:14:04.469776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.469803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 
00:25:05.019 [2024-07-12 17:14:04.469917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.469942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 00:25:05.019 [2024-07-12 17:14:04.470092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.470117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 00:25:05.019 [2024-07-12 17:14:04.470211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.470237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 00:25:05.019 [2024-07-12 17:14:04.470361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.470390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 00:25:05.019 [2024-07-12 17:14:04.470513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.470539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 00:25:05.019 [2024-07-12 17:14:04.470680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.470706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 00:25:05.019 [2024-07-12 17:14:04.470820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.470847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 00:25:05.019 [2024-07-12 17:14:04.470967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.470994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 00:25:05.019 [2024-07-12 17:14:04.471080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.471106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 00:25:05.019 [2024-07-12 17:14:04.471254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.471279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 
00:25:05.019 [2024-07-12 17:14:04.471430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.471456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 00:25:05.019 [2024-07-12 17:14:04.471571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.471597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 00:25:05.019 [2024-07-12 17:14:04.471713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.471745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 00:25:05.019 [2024-07-12 17:14:04.471838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.471865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 00:25:05.019 [2024-07-12 17:14:04.471984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.019 [2024-07-12 17:14:04.472010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.019 qpair failed and we were unable to recover it. 00:25:05.020 [2024-07-12 17:14:04.472140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.020 [2024-07-12 17:14:04.472166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.020 qpair failed and we were unable to recover it. 00:25:05.020 [2024-07-12 17:14:04.472310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.020 [2024-07-12 17:14:04.472336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.020 qpair failed and we were unable to recover it. 00:25:05.020 [2024-07-12 17:14:04.472431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.020 [2024-07-12 17:14:04.472457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.020 qpair failed and we were unable to recover it. 00:25:05.020 [2024-07-12 17:14:04.472546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.020 [2024-07-12 17:14:04.472572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.020 qpair failed and we were unable to recover it. 00:25:05.020 [2024-07-12 17:14:04.472718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.020 [2024-07-12 17:14:04.472750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.020 qpair failed and we were unable to recover it. 
00:25:05.020 [2024-07-12 17:14:04.472868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.020 [2024-07-12 17:14:04.472894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.020 qpair failed and we were unable to recover it. 00:25:05.020 [2024-07-12 17:14:04.473041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.020 [2024-07-12 17:14:04.473067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.020 qpair failed and we were unable to recover it. 00:25:05.020 [2024-07-12 17:14:04.473169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.020 [2024-07-12 17:14:04.473195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.020 qpair failed and we were unable to recover it. 00:25:05.020 [2024-07-12 17:14:04.473316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.020 [2024-07-12 17:14:04.473342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.020 qpair failed and we were unable to recover it. 00:25:05.020 [2024-07-12 17:14:04.473466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.020 [2024-07-12 17:14:04.473492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.020 qpair failed and we were unable to recover it. 00:25:05.020 [2024-07-12 17:14:04.473582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.020 [2024-07-12 17:14:04.473608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.020 qpair failed and we were unable to recover it. 00:25:05.020 [2024-07-12 17:14:04.473723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.020 [2024-07-12 17:14:04.473755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.020 qpair failed and we were unable to recover it. 00:25:05.020 [2024-07-12 17:14:04.473880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.020 [2024-07-12 17:14:04.473906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.020 qpair failed and we were unable to recover it. 00:25:05.020 [2024-07-12 17:14:04.474055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.020 [2024-07-12 17:14:04.474081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.020 qpair failed and we were unable to recover it. 00:25:05.020 [2024-07-12 17:14:04.474174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.020 [2024-07-12 17:14:04.474199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.020 qpair failed and we were unable to recover it. 
00:25:05.020 [2024-07-12 17:14:04.474322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.020 [2024-07-12 17:14:04.474348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.020 qpair failed and we were unable to recover it. 00:25:05.020 [2024-07-12 17:14:04.474428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.020 [2024-07-12 17:14:04.474454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.020 qpair failed and we were unable to recover it. 00:25:05.020 [2024-07-12 17:14:04.474551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.020 [2024-07-12 17:14:04.474577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.020 qpair failed and we were unable to recover it. 00:25:05.020 [2024-07-12 17:14:04.474728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.020 [2024-07-12 17:14:04.474786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.020 qpair failed and we were unable to recover it. 00:25:05.020 [2024-07-12 17:14:04.474889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.020 [2024-07-12 17:14:04.474915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.020 qpair failed and we were unable to recover it. 00:25:05.020 [2024-07-12 17:14:04.475057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.020 [2024-07-12 17:14:04.475083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.020 qpair failed and we were unable to recover it. 00:25:05.020 [2024-07-12 17:14:04.475176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.020 [2024-07-12 17:14:04.475202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.020 qpair failed and we were unable to recover it. 00:25:05.020 [2024-07-12 17:14:04.475323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.020 [2024-07-12 17:14:04.475349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.020 qpair failed and we were unable to recover it. 00:25:05.020 [2024-07-12 17:14:04.475468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.020 [2024-07-12 17:14:04.475493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.020 qpair failed and we were unable to recover it. 00:25:05.020 [2024-07-12 17:14:04.475590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.020 [2024-07-12 17:14:04.475616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.020 qpair failed and we were unable to recover it. 
00:25:05.020 [2024-07-12 17:14:04.475734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.020 [2024-07-12 17:14:04.475767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.020 qpair failed and we were unable to recover it. 00:25:05.020 [2024-07-12 17:14:04.475935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.020 [2024-07-12 17:14:04.475961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.020 qpair failed and we were unable to recover it. 00:25:05.020 [2024-07-12 17:14:04.476118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.020 [2024-07-12 17:14:04.476144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.020 qpair failed and we were unable to recover it. 00:25:05.020 [2024-07-12 17:14:04.476317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.020 [2024-07-12 17:14:04.476347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.020 qpair failed and we were unable to recover it. 00:25:05.020 [2024-07-12 17:14:04.476506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.020 [2024-07-12 17:14:04.476541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.020 qpair failed and we were unable to recover it. 00:25:05.020 [2024-07-12 17:14:04.476673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.020 [2024-07-12 17:14:04.476699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.020 qpair failed and we were unable to recover it. 00:25:05.020 [2024-07-12 17:14:04.476813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.020 [2024-07-12 17:14:04.476840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.020 qpair failed and we were unable to recover it. 00:25:05.020 [2024-07-12 17:14:04.476961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.020 [2024-07-12 17:14:04.476987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.020 qpair failed and we were unable to recover it. 00:25:05.020 [2024-07-12 17:14:04.477107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.020 [2024-07-12 17:14:04.477133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.020 qpair failed and we were unable to recover it. 00:25:05.020 [2024-07-12 17:14:04.477255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.020 [2024-07-12 17:14:04.477281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.020 qpair failed and we were unable to recover it. 
00:25:05.020 [2024-07-12 17:14:04.477442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.020 [2024-07-12 17:14:04.477469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.020 qpair failed and we were unable to recover it. 00:25:05.020 [2024-07-12 17:14:04.477594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.020 [2024-07-12 17:14:04.477619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.020 qpair failed and we were unable to recover it. 00:25:05.020 [2024-07-12 17:14:04.477745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.020 [2024-07-12 17:14:04.477772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.020 qpair failed and we were unable to recover it. 00:25:05.020 [2024-07-12 17:14:04.477890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.020 [2024-07-12 17:14:04.477916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.020 qpair failed and we were unable to recover it. 00:25:05.020 [2024-07-12 17:14:04.478094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.020 [2024-07-12 17:14:04.478119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.020 qpair failed and we were unable to recover it. 00:25:05.020 [2024-07-12 17:14:04.478277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.020 [2024-07-12 17:14:04.478303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.020 qpair failed and we were unable to recover it. 00:25:05.020 [2024-07-12 17:14:04.478526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.020 [2024-07-12 17:14:04.478563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.020 qpair failed and we were unable to recover it. 00:25:05.020 [2024-07-12 17:14:04.478697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.020 [2024-07-12 17:14:04.478762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.020 qpair failed and we were unable to recover it. 00:25:05.020 [2024-07-12 17:14:04.478912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.020 [2024-07-12 17:14:04.478938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.020 qpair failed and we were unable to recover it. 00:25:05.020 [2024-07-12 17:14:04.479099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.020 [2024-07-12 17:14:04.479123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.020 qpair failed and we were unable to recover it. 
00:25:05.020 [2024-07-12 17:14:04.479342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.020 [2024-07-12 17:14:04.479367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.020 qpair failed and we were unable to recover it. 00:25:05.020 [2024-07-12 17:14:04.479529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.020 [2024-07-12 17:14:04.479554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.020 qpair failed and we were unable to recover it. 00:25:05.021 [2024-07-12 17:14:04.479745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.479771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 00:25:05.021 [2024-07-12 17:14:04.479945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.479971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 00:25:05.021 [2024-07-12 17:14:04.480122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.480148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 00:25:05.021 [2024-07-12 17:14:04.480283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.480308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 00:25:05.021 [2024-07-12 17:14:04.480439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.480480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 00:25:05.021 [2024-07-12 17:14:04.480617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.480657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 00:25:05.021 [2024-07-12 17:14:04.480821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.480848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 00:25:05.021 [2024-07-12 17:14:04.480964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.480990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 
00:25:05.021 [2024-07-12 17:14:04.481119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.481159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 00:25:05.021 [2024-07-12 17:14:04.481347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.481372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 00:25:05.021 [2024-07-12 17:14:04.481521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.481559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 00:25:05.021 [2024-07-12 17:14:04.481742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.481768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 00:25:05.021 [2024-07-12 17:14:04.481913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.481939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 00:25:05.021 [2024-07-12 17:14:04.482091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.482131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 00:25:05.021 [2024-07-12 17:14:04.482292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.482317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 00:25:05.021 [2024-07-12 17:14:04.482474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.482500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 00:25:05.021 [2024-07-12 17:14:04.482694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.482735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 00:25:05.021 [2024-07-12 17:14:04.482887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.482913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 
00:25:05.021 [2024-07-12 17:14:04.483127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.483152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 00:25:05.021 [2024-07-12 17:14:04.483276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.483300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 00:25:05.021 [2024-07-12 17:14:04.483425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.483466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 00:25:05.021 [2024-07-12 17:14:04.483552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.483582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 00:25:05.021 [2024-07-12 17:14:04.483693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.483719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 00:25:05.021 [2024-07-12 17:14:04.483840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.483866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 00:25:05.021 [2024-07-12 17:14:04.484053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.484078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 00:25:05.021 [2024-07-12 17:14:04.484235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.484260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 00:25:05.021 [2024-07-12 17:14:04.484391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.484432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 00:25:05.021 [2024-07-12 17:14:04.484577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.484602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 
00:25:05.021 [2024-07-12 17:14:04.484773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.484799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 00:25:05.021 [2024-07-12 17:14:04.484898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.484924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 00:25:05.021 [2024-07-12 17:14:04.485074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.485099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 00:25:05.021 [2024-07-12 17:14:04.485275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.485299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 00:25:05.021 [2024-07-12 17:14:04.485459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.485484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 00:25:05.021 [2024-07-12 17:14:04.485613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.485653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 00:25:05.021 [2024-07-12 17:14:04.485793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.485820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 00:25:05.021 [2024-07-12 17:14:04.485942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.485968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 00:25:05.021 [2024-07-12 17:14:04.486095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.486120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 00:25:05.021 [2024-07-12 17:14:04.486229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.486255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 
00:25:05.021 [2024-07-12 17:14:04.486350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.486376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 00:25:05.021 [2024-07-12 17:14:04.486482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.486507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 00:25:05.021 [2024-07-12 17:14:04.486612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.486638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 00:25:05.021 [2024-07-12 17:14:04.486809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.486836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 00:25:05.021 [2024-07-12 17:14:04.486992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.487019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 00:25:05.021 [2024-07-12 17:14:04.487226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.487251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 00:25:05.021 [2024-07-12 17:14:04.487385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.487411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 00:25:05.021 [2024-07-12 17:14:04.487540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.487566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 00:25:05.021 [2024-07-12 17:14:04.487732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.487775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 00:25:05.021 [2024-07-12 17:14:04.487888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.487914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 
00:25:05.021 [2024-07-12 17:14:04.488044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.488084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 00:25:05.021 [2024-07-12 17:14:04.488262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.488287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 00:25:05.021 [2024-07-12 17:14:04.488406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.488432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 00:25:05.021 [2024-07-12 17:14:04.488579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.488605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 00:25:05.021 [2024-07-12 17:14:04.488742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.488768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 00:25:05.021 [2024-07-12 17:14:04.488899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.021 [2024-07-12 17:14:04.488925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.021 qpair failed and we were unable to recover it. 00:25:05.022 [2024-07-12 17:14:04.489074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.022 [2024-07-12 17:14:04.489114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.022 qpair failed and we were unable to recover it. 00:25:05.022 [2024-07-12 17:14:04.489308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.022 [2024-07-12 17:14:04.489333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.022 qpair failed and we were unable to recover it. 00:25:05.022 [2024-07-12 17:14:04.489516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.022 [2024-07-12 17:14:04.489551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.022 qpair failed and we were unable to recover it. 00:25:05.022 [2024-07-12 17:14:04.489703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.022 [2024-07-12 17:14:04.489728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.022 qpair failed and we were unable to recover it. 
00:25:05.022 [2024-07-12 17:14:04.489889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.022 [2024-07-12 17:14:04.489916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.022 qpair failed and we were unable to recover it. 00:25:05.022 [2024-07-12 17:14:04.490054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.022 [2024-07-12 17:14:04.490094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.022 qpair failed and we were unable to recover it. 00:25:05.022 [2024-07-12 17:14:04.490247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.022 [2024-07-12 17:14:04.490272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.022 qpair failed and we were unable to recover it. 00:25:05.022 [2024-07-12 17:14:04.490414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.022 [2024-07-12 17:14:04.490458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.022 qpair failed and we were unable to recover it. 00:25:05.022 [2024-07-12 17:14:04.490578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.022 [2024-07-12 17:14:04.490618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.022 qpair failed and we were unable to recover it. 00:25:05.022 [2024-07-12 17:14:04.490710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.022 [2024-07-12 17:14:04.490764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.022 qpair failed and we were unable to recover it. 00:25:05.022 [2024-07-12 17:14:04.490877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.022 [2024-07-12 17:14:04.490903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.022 qpair failed and we were unable to recover it. 00:25:05.022 [2024-07-12 17:14:04.491056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.022 [2024-07-12 17:14:04.491081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.022 qpair failed and we were unable to recover it. 00:25:05.022 [2024-07-12 17:14:04.491192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.022 [2024-07-12 17:14:04.491218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.022 qpair failed and we were unable to recover it. 00:25:05.022 [2024-07-12 17:14:04.491442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.022 [2024-07-12 17:14:04.491479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.022 qpair failed and we were unable to recover it. 
00:25:05.022 [2024-07-12 17:14:04.491602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.022 [2024-07-12 17:14:04.491628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.022 qpair failed and we were unable to recover it. 00:25:05.022 [2024-07-12 17:14:04.491796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.022 [2024-07-12 17:14:04.491823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.022 qpair failed and we were unable to recover it. 00:25:05.022 [2024-07-12 17:14:04.492029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.022 [2024-07-12 17:14:04.492055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.022 qpair failed and we were unable to recover it. 00:25:05.022 [2024-07-12 17:14:04.492194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.022 [2024-07-12 17:14:04.492219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.022 qpair failed and we were unable to recover it. 00:25:05.022 [2024-07-12 17:14:04.492321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.022 [2024-07-12 17:14:04.492346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.022 qpair failed and we were unable to recover it. 00:25:05.022 [2024-07-12 17:14:04.492479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.022 [2024-07-12 17:14:04.492505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.022 qpair failed and we were unable to recover it. 00:25:05.022 [2024-07-12 17:14:04.492703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.022 [2024-07-12 17:14:04.492744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.022 qpair failed and we were unable to recover it. 00:25:05.022 [2024-07-12 17:14:04.492904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.022 [2024-07-12 17:14:04.492931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.022 qpair failed and we were unable to recover it. 00:25:05.022 [2024-07-12 17:14:04.493085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.022 [2024-07-12 17:14:04.493110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.022 qpair failed and we were unable to recover it. 00:25:05.022 [2024-07-12 17:14:04.493272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.022 [2024-07-12 17:14:04.493298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.022 qpair failed and we were unable to recover it. 
00:25:05.022 [2024-07-12 17:14:04.493460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.022 [2024-07-12 17:14:04.493486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.022 qpair failed and we were unable to recover it. 00:25:05.022 [2024-07-12 17:14:04.493698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.022 [2024-07-12 17:14:04.493732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.022 qpair failed and we were unable to recover it. 00:25:05.022 [2024-07-12 17:14:04.493890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.022 [2024-07-12 17:14:04.493915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.022 qpair failed and we were unable to recover it. 00:25:05.022 [2024-07-12 17:14:04.494024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.022 [2024-07-12 17:14:04.494059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.022 qpair failed and we were unable to recover it. 00:25:05.022 [2024-07-12 17:14:04.494236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.022 [2024-07-12 17:14:04.494262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.022 qpair failed and we were unable to recover it. 00:25:05.022 [2024-07-12 17:14:04.494390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.022 [2024-07-12 17:14:04.494416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.022 qpair failed and we were unable to recover it. 00:25:05.022 [2024-07-12 17:14:04.494531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.022 [2024-07-12 17:14:04.494557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.022 qpair failed and we were unable to recover it. 00:25:05.022 [2024-07-12 17:14:04.494673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.022 [2024-07-12 17:14:04.494699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.022 qpair failed and we were unable to recover it. 00:25:05.022 [2024-07-12 17:14:04.494824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.022 [2024-07-12 17:14:04.494851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.022 qpair failed and we were unable to recover it. 00:25:05.022 [2024-07-12 17:14:04.494978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.022 [2024-07-12 17:14:04.495003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.022 qpair failed and we were unable to recover it. 
00:25:05.022 [2024-07-12 17:14:04.495157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.022 [2024-07-12 17:14:04.495183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.022 qpair failed and we were unable to recover it. 00:25:05.022 [2024-07-12 17:14:04.495290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.022 [2024-07-12 17:14:04.495316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.022 qpair failed and we were unable to recover it. 00:25:05.022 [2024-07-12 17:14:04.495473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.022 [2024-07-12 17:14:04.495498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.022 qpair failed and we were unable to recover it. 00:25:05.022 [2024-07-12 17:14:04.495672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.022 [2024-07-12 17:14:04.495698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.022 qpair failed and we were unable to recover it. 00:25:05.022 [2024-07-12 17:14:04.495855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.022 [2024-07-12 17:14:04.495882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.022 qpair failed and we were unable to recover it. 00:25:05.022 [2024-07-12 17:14:04.495987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.022 [2024-07-12 17:14:04.496023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.022 qpair failed and we were unable to recover it. 00:25:05.022 [2024-07-12 17:14:04.496118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.022 [2024-07-12 17:14:04.496144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.022 qpair failed and we were unable to recover it. 00:25:05.022 [2024-07-12 17:14:04.496278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.022 [2024-07-12 17:14:04.496304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.022 qpair failed and we were unable to recover it. 00:25:05.022 [2024-07-12 17:14:04.496413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.022 [2024-07-12 17:14:04.496439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.022 qpair failed and we were unable to recover it. 00:25:05.022 [2024-07-12 17:14:04.496557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.022 [2024-07-12 17:14:04.496583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.022 qpair failed and we were unable to recover it. 
00:25:05.022 [2024-07-12 17:14:04.496718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.022 [2024-07-12 17:14:04.496751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.022 qpair failed and we were unable to recover it. 00:25:05.022 [2024-07-12 17:14:04.496979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.022 [2024-07-12 17:14:04.497015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.022 qpair failed and we were unable to recover it. 00:25:05.022 [2024-07-12 17:14:04.497142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.022 [2024-07-12 17:14:04.497168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.022 qpair failed and we were unable to recover it. 00:25:05.022 [2024-07-12 17:14:04.497312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.022 [2024-07-12 17:14:04.497344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.022 qpair failed and we were unable to recover it. 00:25:05.022 [2024-07-12 17:14:04.497510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.022 [2024-07-12 17:14:04.497536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.022 qpair failed and we were unable to recover it. 00:25:05.022 [2024-07-12 17:14:04.497679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.022 [2024-07-12 17:14:04.497705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.022 qpair failed and we were unable to recover it. 00:25:05.022 [2024-07-12 17:14:04.497829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.022 [2024-07-12 17:14:04.497855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.022 qpair failed and we were unable to recover it. 00:25:05.022 [2024-07-12 17:14:04.497973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.022 [2024-07-12 17:14:04.497999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.022 qpair failed and we were unable to recover it. 00:25:05.022 [2024-07-12 17:14:04.498151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.022 [2024-07-12 17:14:04.498177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 00:25:05.023 [2024-07-12 17:14:04.498354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.023 [2024-07-12 17:14:04.498390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 
00:25:05.023 [2024-07-12 17:14:04.498508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.023 [2024-07-12 17:14:04.498533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 00:25:05.023 [2024-07-12 17:14:04.498685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.023 [2024-07-12 17:14:04.498711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 00:25:05.023 [2024-07-12 17:14:04.498907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.023 [2024-07-12 17:14:04.498933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 00:25:05.023 [2024-07-12 17:14:04.499064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.023 [2024-07-12 17:14:04.499090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 00:25:05.023 [2024-07-12 17:14:04.499206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.023 [2024-07-12 17:14:04.499231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 00:25:05.023 [2024-07-12 17:14:04.499347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.023 [2024-07-12 17:14:04.499372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 00:25:05.023 [2024-07-12 17:14:04.499487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.023 [2024-07-12 17:14:04.499512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 00:25:05.023 [2024-07-12 17:14:04.499635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.023 [2024-07-12 17:14:04.499661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 00:25:05.023 [2024-07-12 17:14:04.499872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.023 [2024-07-12 17:14:04.499899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 00:25:05.023 [2024-07-12 17:14:04.500054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.023 [2024-07-12 17:14:04.500088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 
00:25:05.023 [2024-07-12 17:14:04.500183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.023 [2024-07-12 17:14:04.500209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 00:25:05.023 [2024-07-12 17:14:04.500310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.023 [2024-07-12 17:14:04.500335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 00:25:05.023 [2024-07-12 17:14:04.500459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.023 [2024-07-12 17:14:04.500484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 00:25:05.023 [2024-07-12 17:14:04.500580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.023 [2024-07-12 17:14:04.500606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 00:25:05.023 [2024-07-12 17:14:04.500743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.023 [2024-07-12 17:14:04.500769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 00:25:05.023 [2024-07-12 17:14:04.500889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.023 [2024-07-12 17:14:04.500915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 00:25:05.023 [2024-07-12 17:14:04.501068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.023 [2024-07-12 17:14:04.501093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 00:25:05.023 [2024-07-12 17:14:04.501202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.023 [2024-07-12 17:14:04.501228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 00:25:05.023 [2024-07-12 17:14:04.501405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.023 [2024-07-12 17:14:04.501431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 00:25:05.023 [2024-07-12 17:14:04.501585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.023 [2024-07-12 17:14:04.501611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 
00:25:05.023 [2024-07-12 17:14:04.501791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.023 [2024-07-12 17:14:04.501817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 00:25:05.023 [2024-07-12 17:14:04.502012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.023 [2024-07-12 17:14:04.502038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 00:25:05.023 [2024-07-12 17:14:04.502176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.023 [2024-07-12 17:14:04.502202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 00:25:05.023 [2024-07-12 17:14:04.502361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.023 [2024-07-12 17:14:04.502399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 00:25:05.023 [2024-07-12 17:14:04.502553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.023 [2024-07-12 17:14:04.502579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 00:25:05.023 [2024-07-12 17:14:04.502759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.023 [2024-07-12 17:14:04.502786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 00:25:05.023 [2024-07-12 17:14:04.502974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.023 [2024-07-12 17:14:04.503001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 00:25:05.023 [2024-07-12 17:14:04.503138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.023 [2024-07-12 17:14:04.503164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 00:25:05.023 [2024-07-12 17:14:04.503354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.023 [2024-07-12 17:14:04.503380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 00:25:05.023 [2024-07-12 17:14:04.503522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.023 [2024-07-12 17:14:04.503547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 
00:25:05.023 [2024-07-12 17:14:04.503668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.023 [2024-07-12 17:14:04.503694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 00:25:05.023 [2024-07-12 17:14:04.503836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.023 [2024-07-12 17:14:04.503863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 00:25:05.023 [2024-07-12 17:14:04.504053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.023 [2024-07-12 17:14:04.504088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 00:25:05.023 [2024-07-12 17:14:04.504245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.023 [2024-07-12 17:14:04.504275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 00:25:05.023 [2024-07-12 17:14:04.504377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.023 [2024-07-12 17:14:04.504404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 00:25:05.023 [2024-07-12 17:14:04.504497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.023 [2024-07-12 17:14:04.504523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 00:25:05.023 [2024-07-12 17:14:04.504678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.023 [2024-07-12 17:14:04.504704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 00:25:05.023 [2024-07-12 17:14:04.504896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.023 [2024-07-12 17:14:04.504922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 00:25:05.023 [2024-07-12 17:14:04.505012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.023 [2024-07-12 17:14:04.505039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 00:25:05.023 [2024-07-12 17:14:04.505178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.023 [2024-07-12 17:14:04.505204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 
00:25:05.023 [2024-07-12 17:14:04.505303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.023 [2024-07-12 17:14:04.505330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 00:25:05.023 [2024-07-12 17:14:04.505460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.023 [2024-07-12 17:14:04.505486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 00:25:05.023 [2024-07-12 17:14:04.505649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.023 [2024-07-12 17:14:04.505675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 00:25:05.023 [2024-07-12 17:14:04.505821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.023 [2024-07-12 17:14:04.505848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 00:25:05.023 [2024-07-12 17:14:04.506029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.023 [2024-07-12 17:14:04.506067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 00:25:05.023 [2024-07-12 17:14:04.506159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.023 [2024-07-12 17:14:04.506185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 00:25:05.023 [2024-07-12 17:14:04.506335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.023 [2024-07-12 17:14:04.506361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 00:25:05.023 [2024-07-12 17:14:04.506498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.023 [2024-07-12 17:14:04.506524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 00:25:05.023 [2024-07-12 17:14:04.506717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.023 [2024-07-12 17:14:04.506748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 00:25:05.023 [2024-07-12 17:14:04.506870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.023 [2024-07-12 17:14:04.506896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 
00:25:05.023 [2024-07-12 17:14:04.506985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.023 [2024-07-12 17:14:04.507012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 00:25:05.023 [2024-07-12 17:14:04.507166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.023 [2024-07-12 17:14:04.507192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.023 qpair failed and we were unable to recover it. 00:25:05.023 [2024-07-12 17:14:04.507336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.024 [2024-07-12 17:14:04.507362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.024 qpair failed and we were unable to recover it. 00:25:05.024 [2024-07-12 17:14:04.507550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.024 [2024-07-12 17:14:04.507576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.024 qpair failed and we were unable to recover it. 00:25:05.024 [2024-07-12 17:14:04.507717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.024 [2024-07-12 17:14:04.507756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.024 qpair failed and we were unable to recover it. 00:25:05.024 [2024-07-12 17:14:04.507879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.024 [2024-07-12 17:14:04.507905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.024 qpair failed and we were unable to recover it. 00:25:05.024 [2024-07-12 17:14:04.508026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.024 [2024-07-12 17:14:04.508052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.024 qpair failed and we were unable to recover it. 00:25:05.024 [2024-07-12 17:14:04.508181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.024 [2024-07-12 17:14:04.508207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.024 qpair failed and we were unable to recover it. 00:25:05.024 [2024-07-12 17:14:04.508387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.024 [2024-07-12 17:14:04.508414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.024 qpair failed and we were unable to recover it. 00:25:05.024 [2024-07-12 17:14:04.508579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.024 [2024-07-12 17:14:04.508606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.024 qpair failed and we were unable to recover it. 
00:25:05.024 [2024-07-12 17:14:04.508770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.024 [2024-07-12 17:14:04.508797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.024 qpair failed and we were unable to recover it. 00:25:05.024 [2024-07-12 17:14:04.508926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.024 [2024-07-12 17:14:04.508952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.024 qpair failed and we were unable to recover it. 00:25:05.024 [2024-07-12 17:14:04.509104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.024 [2024-07-12 17:14:04.509129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.024 qpair failed and we were unable to recover it. 00:25:05.024 [2024-07-12 17:14:04.509333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.024 [2024-07-12 17:14:04.509359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.024 qpair failed and we were unable to recover it. 00:25:05.024 [2024-07-12 17:14:04.509493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.024 [2024-07-12 17:14:04.509519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.024 qpair failed and we were unable to recover it. 00:25:05.024 [2024-07-12 17:14:04.509748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.024 [2024-07-12 17:14:04.509782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.024 qpair failed and we were unable to recover it. 00:25:05.024 [2024-07-12 17:14:04.509912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.024 [2024-07-12 17:14:04.509938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.024 qpair failed and we were unable to recover it. 00:25:05.024 [2024-07-12 17:14:04.509992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b65ae0 (9): Bad file descriptor 00:25:05.024 [2024-07-12 17:14:04.510225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.024 [2024-07-12 17:14:04.510267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.024 qpair failed and we were unable to recover it. 00:25:05.024 [2024-07-12 17:14:04.510425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.024 [2024-07-12 17:14:04.510453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.024 qpair failed and we were unable to recover it. 
00:25:05.024 [2024-07-12 17:14:04.510600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.024 [2024-07-12 17:14:04.510626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.024 qpair failed and we were unable to recover it. 00:25:05.024 [2024-07-12 17:14:04.510752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.024 [2024-07-12 17:14:04.510779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.024 qpair failed and we were unable to recover it. 00:25:05.024 [2024-07-12 17:14:04.510905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.024 [2024-07-12 17:14:04.510931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.024 qpair failed and we were unable to recover it. 00:25:05.024 [2024-07-12 17:14:04.511078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.024 [2024-07-12 17:14:04.511104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.024 qpair failed and we were unable to recover it. 00:25:05.024 [2024-07-12 17:14:04.511258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.024 [2024-07-12 17:14:04.511286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.024 qpair failed and we were unable to recover it. 00:25:05.024 [2024-07-12 17:14:04.511481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.024 [2024-07-12 17:14:04.511508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.024 qpair failed and we were unable to recover it. 00:25:05.024 [2024-07-12 17:14:04.511626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.024 [2024-07-12 17:14:04.511652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.024 qpair failed and we were unable to recover it. 00:25:05.024 [2024-07-12 17:14:04.511804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.024 [2024-07-12 17:14:04.511830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.024 qpair failed and we were unable to recover it. 00:25:05.024 [2024-07-12 17:14:04.511934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.024 [2024-07-12 17:14:04.511960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.024 qpair failed and we were unable to recover it. 00:25:05.024 [2024-07-12 17:14:04.512077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.024 [2024-07-12 17:14:04.512102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.024 qpair failed and we were unable to recover it. 
00:25:05.024 [2024-07-12 17:14:04.512227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.024 [2024-07-12 17:14:04.512254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.024 qpair failed and we were unable to recover it. 00:25:05.024 [2024-07-12 17:14:04.512376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.024 [2024-07-12 17:14:04.512403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.024 qpair failed and we were unable to recover it. 00:25:05.024 [2024-07-12 17:14:04.512532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.024 [2024-07-12 17:14:04.512558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.024 qpair failed and we were unable to recover it. 00:25:05.024 [2024-07-12 17:14:04.512658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.024 [2024-07-12 17:14:04.512684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.024 qpair failed and we were unable to recover it. 00:25:05.024 [2024-07-12 17:14:04.512825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.024 [2024-07-12 17:14:04.512851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.024 qpair failed and we were unable to recover it. 00:25:05.024 [2024-07-12 17:14:04.512965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.024 [2024-07-12 17:14:04.512991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.024 qpair failed and we were unable to recover it. 00:25:05.024 [2024-07-12 17:14:04.513109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.024 [2024-07-12 17:14:04.513135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.024 qpair failed and we were unable to recover it. 00:25:05.024 [2024-07-12 17:14:04.513257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.024 [2024-07-12 17:14:04.513283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.024 qpair failed and we were unable to recover it. 00:25:05.024 [2024-07-12 17:14:04.513420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.024 [2024-07-12 17:14:04.513446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.024 qpair failed and we were unable to recover it. 00:25:05.024 [2024-07-12 17:14:04.513688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.024 [2024-07-12 17:14:04.513714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.024 qpair failed and we were unable to recover it. 
00:25:05.024 [2024-07-12 17:14:04.513926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.024 [2024-07-12 17:14:04.513952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.024 qpair failed and we were unable to recover it. 00:25:05.024 [2024-07-12 17:14:04.514091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.024 [2024-07-12 17:14:04.514117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.024 qpair failed and we were unable to recover it. 00:25:05.024 [2024-07-12 17:14:04.514207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.024 [2024-07-12 17:14:04.514232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.024 qpair failed and we were unable to recover it. 00:25:05.024 [2024-07-12 17:14:04.514320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.024 [2024-07-12 17:14:04.514346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.024 qpair failed and we were unable to recover it. 00:25:05.024 [2024-07-12 17:14:04.514477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.024 [2024-07-12 17:14:04.514511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.024 qpair failed and we were unable to recover it. 00:25:05.024 [2024-07-12 17:14:04.514681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.024 [2024-07-12 17:14:04.514722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.024 qpair failed and we were unable to recover it. 00:25:05.024 [2024-07-12 17:14:04.514874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.024 [2024-07-12 17:14:04.514902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.024 qpair failed and we were unable to recover it. 00:25:05.024 [2024-07-12 17:14:04.515047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.024 [2024-07-12 17:14:04.515073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.024 qpair failed and we were unable to recover it. 00:25:05.024 [2024-07-12 17:14:04.515292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.024 [2024-07-12 17:14:04.515330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.024 qpair failed and we were unable to recover it. 00:25:05.024 [2024-07-12 17:14:04.515456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.024 [2024-07-12 17:14:04.515482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.024 qpair failed and we were unable to recover it. 
00:25:05.024 [2024-07-12 17:14:04.515615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.024 [2024-07-12 17:14:04.515641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.024 qpair failed and we were unable to recover it. 00:25:05.024 [2024-07-12 17:14:04.515814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.024 [2024-07-12 17:14:04.515854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.025 qpair failed and we were unable to recover it. 00:25:05.025 [2024-07-12 17:14:04.515969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.025 [2024-07-12 17:14:04.515996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.025 qpair failed and we were unable to recover it. 00:25:05.025 [2024-07-12 17:14:04.516146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.025 [2024-07-12 17:14:04.516173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.025 qpair failed and we were unable to recover it. 00:25:05.025 [2024-07-12 17:14:04.516311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.025 [2024-07-12 17:14:04.516337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.025 qpair failed and we were unable to recover it. 00:25:05.025 [2024-07-12 17:14:04.516471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.025 [2024-07-12 17:14:04.516498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.025 qpair failed and we were unable to recover it. 00:25:05.025 [2024-07-12 17:14:04.516633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.025 [2024-07-12 17:14:04.516659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.025 qpair failed and we were unable to recover it. 00:25:05.025 [2024-07-12 17:14:04.516786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.025 [2024-07-12 17:14:04.516827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.025 qpair failed and we were unable to recover it. 00:25:05.025 [2024-07-12 17:14:04.517023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.025 [2024-07-12 17:14:04.517061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.025 qpair failed and we were unable to recover it. 00:25:05.025 [2024-07-12 17:14:04.517187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.025 [2024-07-12 17:14:04.517221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.025 qpair failed and we were unable to recover it. 
00:25:05.025 [2024-07-12 17:14:04.517366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.025 [2024-07-12 17:14:04.517392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.025 qpair failed and we were unable to recover it. 00:25:05.025 [2024-07-12 17:14:04.517622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.025 [2024-07-12 17:14:04.517654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.025 qpair failed and we were unable to recover it. 00:25:05.025 [2024-07-12 17:14:04.517783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.025 [2024-07-12 17:14:04.517809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.025 qpair failed and we were unable to recover it. 00:25:05.025 [2024-07-12 17:14:04.517920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.025 [2024-07-12 17:14:04.517947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.025 qpair failed and we were unable to recover it. 00:25:05.025 [2024-07-12 17:14:04.518065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.025 [2024-07-12 17:14:04.518097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.025 qpair failed and we were unable to recover it. 00:25:05.025 [2024-07-12 17:14:04.518253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.025 [2024-07-12 17:14:04.518279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.025 qpair failed and we were unable to recover it. 00:25:05.025 [2024-07-12 17:14:04.518451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.025 [2024-07-12 17:14:04.518478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.025 qpair failed and we were unable to recover it. 00:25:05.025 [2024-07-12 17:14:04.518565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.025 [2024-07-12 17:14:04.518591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.025 qpair failed and we were unable to recover it. 00:25:05.025 [2024-07-12 17:14:04.518680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.025 [2024-07-12 17:14:04.518706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.025 qpair failed and we were unable to recover it. 00:25:05.025 [2024-07-12 17:14:04.518806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.025 [2024-07-12 17:14:04.518833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.025 qpair failed and we were unable to recover it. 
00:25:05.025 [2024-07-12 17:14:04.518983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.025 [2024-07-12 17:14:04.519009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.025 qpair failed and we were unable to recover it. 00:25:05.025 [2024-07-12 17:14:04.519104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.025 [2024-07-12 17:14:04.519130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.025 qpair failed and we were unable to recover it. 00:25:05.025 [2024-07-12 17:14:04.519217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.025 [2024-07-12 17:14:04.519242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.025 qpair failed and we were unable to recover it. 00:25:05.025 [2024-07-12 17:14:04.519365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.025 [2024-07-12 17:14:04.519390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.025 qpair failed and we were unable to recover it. 00:25:05.025 [2024-07-12 17:14:04.519479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.025 [2024-07-12 17:14:04.519505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.025 qpair failed and we were unable to recover it. 00:25:05.025 [2024-07-12 17:14:04.519597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.025 [2024-07-12 17:14:04.519623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.025 qpair failed and we were unable to recover it. 00:25:05.025 [2024-07-12 17:14:04.519757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.025 [2024-07-12 17:14:04.519783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.025 qpair failed and we were unable to recover it. 00:25:05.025 [2024-07-12 17:14:04.519922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.025 [2024-07-12 17:14:04.519948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.025 qpair failed and we were unable to recover it. 00:25:05.025 [2024-07-12 17:14:04.520123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.025 [2024-07-12 17:14:04.520149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.025 qpair failed and we were unable to recover it. 00:25:05.025 [2024-07-12 17:14:04.520350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.025 [2024-07-12 17:14:04.520376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.025 qpair failed and we were unable to recover it. 
00:25:05.025 [2024-07-12 17:14:04.520525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.025 [2024-07-12 17:14:04.520551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.025 qpair failed and we were unable to recover it. 00:25:05.025 [2024-07-12 17:14:04.520683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.025 [2024-07-12 17:14:04.520723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.025 qpair failed and we were unable to recover it. 00:25:05.025 [2024-07-12 17:14:04.520866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.025 [2024-07-12 17:14:04.520895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.025 qpair failed and we were unable to recover it. 00:25:05.025 17:14:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:05.025 [2024-07-12 17:14:04.521026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.025 [2024-07-12 17:14:04.521052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.025 qpair failed and we were unable to recover it. 00:25:05.025 17:14:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:25:05.025 [2024-07-12 17:14:04.521147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.025 [2024-07-12 17:14:04.521173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.025 qpair failed and we were unable to recover it. 00:25:05.025 [2024-07-12 17:14:04.521259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.025 17:14:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:05.025 [2024-07-12 17:14:04.521285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.025 qpair failed and we were unable to recover it. 00:25:05.025 17:14:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:05.025 [2024-07-12 17:14:04.521440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.025 [2024-07-12 17:14:04.521466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.025 qpair failed and we were unable to recover it. 00:25:05.025 17:14:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:05.025 [2024-07-12 17:14:04.521637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.025 [2024-07-12 17:14:04.521664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.025 qpair failed and we were unable to recover it. 
00:25:05.025 [2024-07-12 17:14:04.521839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.025 [2024-07-12 17:14:04.521866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.025 qpair failed and we were unable to recover it. 00:25:05.025 [2024-07-12 17:14:04.522019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.025 [2024-07-12 17:14:04.522049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.025 qpair failed and we were unable to recover it. 00:25:05.025 [2024-07-12 17:14:04.522170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.025 [2024-07-12 17:14:04.522196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.025 qpair failed and we were unable to recover it. 00:25:05.025 [2024-07-12 17:14:04.522289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.025 [2024-07-12 17:14:04.522315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.025 qpair failed and we were unable to recover it. 00:25:05.025 [2024-07-12 17:14:04.522436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.025 [2024-07-12 17:14:04.522462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.025 qpair failed and we were unable to recover it. 00:25:05.025 [2024-07-12 17:14:04.522571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.025 [2024-07-12 17:14:04.522612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.025 qpair failed and we were unable to recover it. 00:25:05.025 [2024-07-12 17:14:04.522785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.025 [2024-07-12 17:14:04.522813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.025 qpair failed and we were unable to recover it. 00:25:05.025 [2024-07-12 17:14:04.522946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.025 [2024-07-12 17:14:04.522982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.025 qpair failed and we were unable to recover it. 00:25:05.025 [2024-07-12 17:14:04.523136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.025 [2024-07-12 17:14:04.523162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.025 qpair failed and we were unable to recover it. 00:25:05.025 [2024-07-12 17:14:04.523262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.025 [2024-07-12 17:14:04.523299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.025 qpair failed and we were unable to recover it. 
00:25:05.025 [2024-07-12 17:14:04.523478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.025 [2024-07-12 17:14:04.523503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.025 qpair failed and we were unable to recover it. 00:25:05.025 [2024-07-12 17:14:04.523604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.025 [2024-07-12 17:14:04.523630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.025 qpair failed and we were unable to recover it. 00:25:05.025 [2024-07-12 17:14:04.523768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.025 [2024-07-12 17:14:04.523795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.025 qpair failed and we were unable to recover it. 00:25:05.025 [2024-07-12 17:14:04.523913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.025 [2024-07-12 17:14:04.523939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.025 qpair failed and we were unable to recover it. 00:25:05.025 [2024-07-12 17:14:04.524028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.025 [2024-07-12 17:14:04.524054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.025 qpair failed and we were unable to recover it. 00:25:05.025 [2024-07-12 17:14:04.524179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.025 [2024-07-12 17:14:04.524205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.025 qpair failed and we were unable to recover it. 00:25:05.025 [2024-07-12 17:14:04.524354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.524381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 00:25:05.026 [2024-07-12 17:14:04.524478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.524503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 00:25:05.026 [2024-07-12 17:14:04.524635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.524661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 00:25:05.026 [2024-07-12 17:14:04.524804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.524832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 
00:25:05.026 [2024-07-12 17:14:04.524951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.524977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 00:25:05.026 [2024-07-12 17:14:04.525109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.525136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 00:25:05.026 [2024-07-12 17:14:04.525267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.525292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 00:25:05.026 [2024-07-12 17:14:04.525411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.525437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 00:25:05.026 [2024-07-12 17:14:04.525563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.525590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 00:25:05.026 [2024-07-12 17:14:04.525700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.525726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 00:25:05.026 [2024-07-12 17:14:04.525853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.525880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 00:25:05.026 [2024-07-12 17:14:04.526007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.526032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 00:25:05.026 [2024-07-12 17:14:04.526158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.526189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 00:25:05.026 [2024-07-12 17:14:04.526283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.526309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 
00:25:05.026 [2024-07-12 17:14:04.526432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.526458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 00:25:05.026 [2024-07-12 17:14:04.526579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.526605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 00:25:05.026 [2024-07-12 17:14:04.526698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.526724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 00:25:05.026 [2024-07-12 17:14:04.526826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.526852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 00:25:05.026 [2024-07-12 17:14:04.526938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.526964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 00:25:05.026 [2024-07-12 17:14:04.527112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.527139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 00:25:05.026 [2024-07-12 17:14:04.527226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.527251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 00:25:05.026 [2024-07-12 17:14:04.527343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.527369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 00:25:05.026 [2024-07-12 17:14:04.527518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.527544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 00:25:05.026 [2024-07-12 17:14:04.527696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.527722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 
00:25:05.026 [2024-07-12 17:14:04.527849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.527876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 00:25:05.026 [2024-07-12 17:14:04.527990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.528017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 00:25:05.026 [2024-07-12 17:14:04.528112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.528137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 00:25:05.026 [2024-07-12 17:14:04.528296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.528322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 00:25:05.026 [2024-07-12 17:14:04.528441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.528467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 00:25:05.026 [2024-07-12 17:14:04.528559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.528585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 00:25:05.026 [2024-07-12 17:14:04.528670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.528696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 00:25:05.026 [2024-07-12 17:14:04.528825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.528866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 00:25:05.026 [2024-07-12 17:14:04.529040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.529081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 00:25:05.026 [2024-07-12 17:14:04.529189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.529217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 
00:25:05.026 [2024-07-12 17:14:04.529369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.529396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 00:25:05.026 [2024-07-12 17:14:04.529547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.529574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 00:25:05.026 [2024-07-12 17:14:04.529701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.529728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 00:25:05.026 [2024-07-12 17:14:04.529835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.529862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 00:25:05.026 [2024-07-12 17:14:04.529981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.530008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 00:25:05.026 [2024-07-12 17:14:04.530133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.530160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 00:25:05.026 [2024-07-12 17:14:04.530275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.530302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 00:25:05.026 [2024-07-12 17:14:04.530397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.530424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 00:25:05.026 [2024-07-12 17:14:04.530544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.530570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 00:25:05.026 [2024-07-12 17:14:04.530666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.530692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 
00:25:05.026 [2024-07-12 17:14:04.530822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.530849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 00:25:05.026 [2024-07-12 17:14:04.530942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.530968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 00:25:05.026 [2024-07-12 17:14:04.531058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.531085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 00:25:05.026 [2024-07-12 17:14:04.531221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.531247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 00:25:05.026 [2024-07-12 17:14:04.531361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.531387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 00:25:05.026 [2024-07-12 17:14:04.531487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.531515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 00:25:05.026 [2024-07-12 17:14:04.531636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.531662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 00:25:05.026 [2024-07-12 17:14:04.531768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.531795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 00:25:05.026 [2024-07-12 17:14:04.531884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.531915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 00:25:05.026 [2024-07-12 17:14:04.532010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.026 [2024-07-12 17:14:04.532036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.026 qpair failed and we were unable to recover it. 
00:25:05.026 [2024-07-12 17:14:04.532155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.027 [2024-07-12 17:14:04.532181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.027 qpair failed and we were unable to recover it. 00:25:05.027 [2024-07-12 17:14:04.532280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.027 [2024-07-12 17:14:04.532307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.027 qpair failed and we were unable to recover it. 00:25:05.027 [2024-07-12 17:14:04.532403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.027 [2024-07-12 17:14:04.532429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.027 qpair failed and we were unable to recover it. 00:25:05.027 [2024-07-12 17:14:04.532550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.027 [2024-07-12 17:14:04.532577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.027 qpair failed and we were unable to recover it. 00:25:05.027 [2024-07-12 17:14:04.532669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.027 [2024-07-12 17:14:04.532695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.027 qpair failed and we were unable to recover it. 00:25:05.027 [2024-07-12 17:14:04.532802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.027 [2024-07-12 17:14:04.532830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.027 qpair failed and we were unable to recover it. 00:25:05.027 [2024-07-12 17:14:04.532953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.027 [2024-07-12 17:14:04.532980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.027 qpair failed and we were unable to recover it. 00:25:05.027 [2024-07-12 17:14:04.533073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.027 [2024-07-12 17:14:04.533099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.027 qpair failed and we were unable to recover it. 00:25:05.027 [2024-07-12 17:14:04.533216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.027 [2024-07-12 17:14:04.533243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.027 qpair failed and we were unable to recover it. 00:25:05.027 [2024-07-12 17:14:04.533369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.027 [2024-07-12 17:14:04.533395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.027 qpair failed and we were unable to recover it. 
00:25:05.027 [2024-07-12 17:14:04.533531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.027 [2024-07-12 17:14:04.533557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.027 qpair failed and we were unable to recover it. 00:25:05.027 [2024-07-12 17:14:04.533680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.027 [2024-07-12 17:14:04.533706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.027 qpair failed and we were unable to recover it. 00:25:05.027 [2024-07-12 17:14:04.533846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.027 [2024-07-12 17:14:04.533873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.027 qpair failed and we were unable to recover it. 00:25:05.027 [2024-07-12 17:14:04.533965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.027 [2024-07-12 17:14:04.533991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.027 qpair failed and we were unable to recover it. 00:25:05.027 [2024-07-12 17:14:04.534111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.027 [2024-07-12 17:14:04.534137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.027 qpair failed and we were unable to recover it. 00:25:05.027 [2024-07-12 17:14:04.534257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.027 [2024-07-12 17:14:04.534283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.027 qpair failed and we were unable to recover it. 00:25:05.027 [2024-07-12 17:14:04.534384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.027 [2024-07-12 17:14:04.534410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.027 qpair failed and we were unable to recover it. 00:25:05.027 [2024-07-12 17:14:04.534502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.027 [2024-07-12 17:14:04.534528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.027 qpair failed and we were unable to recover it. 00:25:05.027 [2024-07-12 17:14:04.534627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.027 [2024-07-12 17:14:04.534653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.027 qpair failed and we were unable to recover it. 00:25:05.027 [2024-07-12 17:14:04.534771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.027 [2024-07-12 17:14:04.534811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.027 qpair failed and we were unable to recover it. 
00:25:05.027 [2024-07-12 17:14:04.534922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.027 [2024-07-12 17:14:04.534950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.027 qpair failed and we were unable to recover it. 00:25:05.027 [2024-07-12 17:14:04.535037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.027 [2024-07-12 17:14:04.535063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.027 qpair failed and we were unable to recover it. 00:25:05.027 [2024-07-12 17:14:04.535156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.027 [2024-07-12 17:14:04.535183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.027 qpair failed and we were unable to recover it. 00:25:05.027 [2024-07-12 17:14:04.535328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.027 [2024-07-12 17:14:04.535354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.027 qpair failed and we were unable to recover it. 00:25:05.027 [2024-07-12 17:14:04.535470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.027 [2024-07-12 17:14:04.535496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.027 qpair failed and we were unable to recover it. 00:25:05.027 [2024-07-12 17:14:04.535600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.027 [2024-07-12 17:14:04.535628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.027 qpair failed and we were unable to recover it. 00:25:05.027 [2024-07-12 17:14:04.535748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.027 [2024-07-12 17:14:04.535788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.027 qpair failed and we were unable to recover it. 00:25:05.027 [2024-07-12 17:14:04.535886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.027 [2024-07-12 17:14:04.535914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.027 qpair failed and we were unable to recover it. 00:25:05.027 [2024-07-12 17:14:04.536034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.027 [2024-07-12 17:14:04.536060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.027 qpair failed and we were unable to recover it. 00:25:05.027 [2024-07-12 17:14:04.536201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.027 [2024-07-12 17:14:04.536227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.027 qpair failed and we were unable to recover it. 
00:25:05.027 [2024-07-12 17:14:04.536320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.027 [2024-07-12 17:14:04.536346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.027 qpair failed and we were unable to recover it. 00:25:05.027 [2024-07-12 17:14:04.536443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.027 [2024-07-12 17:14:04.536469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.027 qpair failed and we were unable to recover it. 00:25:05.027 [2024-07-12 17:14:04.536564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.027 [2024-07-12 17:14:04.536589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.027 qpair failed and we were unable to recover it. 00:25:05.027 [2024-07-12 17:14:04.536687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.027 [2024-07-12 17:14:04.536713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.027 qpair failed and we were unable to recover it. 00:25:05.027 [2024-07-12 17:14:04.536820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.027 [2024-07-12 17:14:04.536847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.027 qpair failed and we were unable to recover it. 00:25:05.027 [2024-07-12 17:14:04.536948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.027 [2024-07-12 17:14:04.536974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.027 qpair failed and we were unable to recover it. 00:25:05.027 [2024-07-12 17:14:04.537065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.027 [2024-07-12 17:14:04.537091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.027 qpair failed and we were unable to recover it. 00:25:05.027 [2024-07-12 17:14:04.537248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.027 [2024-07-12 17:14:04.537274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.027 qpair failed and we were unable to recover it. 00:25:05.027 [2024-07-12 17:14:04.537393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.027 [2024-07-12 17:14:04.537423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.027 qpair failed and we were unable to recover it. 00:25:05.027 [2024-07-12 17:14:04.537540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.027 [2024-07-12 17:14:04.537566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.027 qpair failed and we were unable to recover it. 
00:25:05.027 [2024-07-12 17:14:04.537688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.027 [2024-07-12 17:14:04.537715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.027 qpair failed and we were unable to recover it. 00:25:05.027 [2024-07-12 17:14:04.537824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.027 [2024-07-12 17:14:04.537851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.027 qpair failed and we were unable to recover it. 00:25:05.027 [2024-07-12 17:14:04.537950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.027 [2024-07-12 17:14:04.537977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.027 qpair failed and we were unable to recover it. 00:25:05.027 [2024-07-12 17:14:04.538069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.027 [2024-07-12 17:14:04.538096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.027 qpair failed and we were unable to recover it. 00:25:05.027 [2024-07-12 17:14:04.538231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.027 [2024-07-12 17:14:04.538258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.027 qpair failed and we were unable to recover it. 00:25:05.027 [2024-07-12 17:14:04.538384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.027 [2024-07-12 17:14:04.538411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.027 qpair failed and we were unable to recover it. 00:25:05.027 [2024-07-12 17:14:04.538537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.027 [2024-07-12 17:14:04.538564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.027 qpair failed and we were unable to recover it. 00:25:05.027 [2024-07-12 17:14:04.538689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.027 [2024-07-12 17:14:04.538716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.027 qpair failed and we were unable to recover it. 00:25:05.027 17:14:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:05.027 [2024-07-12 17:14:04.538814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 [2024-07-12 17:14:04.538842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.028 qpair failed and we were unable to recover it. 
00:25:05.028 [2024-07-12 17:14:04.538932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 [2024-07-12 17:14:04.538958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.028 17:14:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:05.028 qpair failed and we were unable to recover it. 00:25:05.028 [2024-07-12 17:14:04.539047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 [2024-07-12 17:14:04.539073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.028 qpair failed and we were unable to recover it. 00:25:05.028 17:14:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.028 [2024-07-12 17:14:04.539199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 17:14:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:05.028 [2024-07-12 17:14:04.539226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.028 qpair failed and we were unable to recover it. 00:25:05.028 [2024-07-12 17:14:04.539319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 [2024-07-12 17:14:04.539346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.028 qpair failed and we were unable to recover it. 00:25:05.028 [2024-07-12 17:14:04.539467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 [2024-07-12 17:14:04.539493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.028 qpair failed and we were unable to recover it. 00:25:05.028 [2024-07-12 17:14:04.539584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 [2024-07-12 17:14:04.539610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.028 qpair failed and we were unable to recover it. 00:25:05.028 [2024-07-12 17:14:04.539760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 [2024-07-12 17:14:04.539788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.028 qpair failed and we were unable to recover it. 00:25:05.028 [2024-07-12 17:14:04.539878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 [2024-07-12 17:14:04.539905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.028 qpair failed and we were unable to recover it. 00:25:05.028 [2024-07-12 17:14:04.540000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 [2024-07-12 17:14:04.540026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.028 qpair failed and we were unable to recover it. 
00:25:05.028 [2024-07-12 17:14:04.540145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 [2024-07-12 17:14:04.540171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.028 qpair failed and we were unable to recover it. 00:25:05.028 [2024-07-12 17:14:04.540281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 [2024-07-12 17:14:04.540307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.028 qpair failed and we were unable to recover it. 00:25:05.028 [2024-07-12 17:14:04.540402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 [2024-07-12 17:14:04.540428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.028 qpair failed and we were unable to recover it. 00:25:05.028 [2024-07-12 17:14:04.540541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 [2024-07-12 17:14:04.540567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.028 qpair failed and we were unable to recover it. 00:25:05.028 [2024-07-12 17:14:04.540680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 [2024-07-12 17:14:04.540707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.028 qpair failed and we were unable to recover it. 00:25:05.028 [2024-07-12 17:14:04.540810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 [2024-07-12 17:14:04.540840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.028 qpair failed and we were unable to recover it. 00:25:05.028 [2024-07-12 17:14:04.540932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 [2024-07-12 17:14:04.540958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.028 qpair failed and we were unable to recover it. 00:25:05.028 [2024-07-12 17:14:04.541106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 [2024-07-12 17:14:04.541132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.028 qpair failed and we were unable to recover it. 00:25:05.028 [2024-07-12 17:14:04.541255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 [2024-07-12 17:14:04.541281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.028 qpair failed and we were unable to recover it. 00:25:05.028 [2024-07-12 17:14:04.541398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 [2024-07-12 17:14:04.541424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.028 qpair failed and we were unable to recover it. 
00:25:05.028 [2024-07-12 17:14:04.541547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 [2024-07-12 17:14:04.541573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.028 qpair failed and we were unable to recover it. 00:25:05.028 [2024-07-12 17:14:04.541691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 [2024-07-12 17:14:04.541718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.028 qpair failed and we were unable to recover it. 00:25:05.028 [2024-07-12 17:14:04.541821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 [2024-07-12 17:14:04.541847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.028 qpair failed and we were unable to recover it. 00:25:05.028 [2024-07-12 17:14:04.541948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 [2024-07-12 17:14:04.541974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.028 qpair failed and we were unable to recover it. 00:25:05.028 [2024-07-12 17:14:04.542068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 [2024-07-12 17:14:04.542094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.028 qpair failed and we were unable to recover it. 00:25:05.028 [2024-07-12 17:14:04.542238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 [2024-07-12 17:14:04.542264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.028 qpair failed and we were unable to recover it. 00:25:05.028 [2024-07-12 17:14:04.542379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 [2024-07-12 17:14:04.542405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.028 qpair failed and we were unable to recover it. 00:25:05.028 [2024-07-12 17:14:04.542554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 [2024-07-12 17:14:04.542580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.028 qpair failed and we were unable to recover it. 00:25:05.028 [2024-07-12 17:14:04.542719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 [2024-07-12 17:14:04.542767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.028 qpair failed and we were unable to recover it. 00:25:05.028 [2024-07-12 17:14:04.542883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 [2024-07-12 17:14:04.542924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.028 qpair failed and we were unable to recover it. 
00:25:05.028 [2024-07-12 17:14:04.543057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 [2024-07-12 17:14:04.543085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.028 qpair failed and we were unable to recover it. 00:25:05.028 [2024-07-12 17:14:04.543202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 [2024-07-12 17:14:04.543229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.028 qpair failed and we were unable to recover it. 00:25:05.028 [2024-07-12 17:14:04.543310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 [2024-07-12 17:14:04.543336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.028 qpair failed and we were unable to recover it. 00:25:05.028 [2024-07-12 17:14:04.543449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 [2024-07-12 17:14:04.543476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.028 qpair failed and we were unable to recover it. 00:25:05.028 [2024-07-12 17:14:04.543576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 [2024-07-12 17:14:04.543603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.028 qpair failed and we were unable to recover it. 00:25:05.028 [2024-07-12 17:14:04.543730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 [2024-07-12 17:14:04.543780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.028 qpair failed and we were unable to recover it. 00:25:05.028 [2024-07-12 17:14:04.543879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 [2024-07-12 17:14:04.543906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.028 qpair failed and we were unable to recover it. 00:25:05.028 [2024-07-12 17:14:04.544011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 [2024-07-12 17:14:04.544037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.028 qpair failed and we were unable to recover it. 00:25:05.028 [2024-07-12 17:14:04.544185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 [2024-07-12 17:14:04.544211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.028 qpair failed and we were unable to recover it. 00:25:05.028 [2024-07-12 17:14:04.544302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 [2024-07-12 17:14:04.544329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.028 qpair failed and we were unable to recover it. 
00:25:05.028 [2024-07-12 17:14:04.544446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 [2024-07-12 17:14:04.544472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.028 qpair failed and we were unable to recover it. 00:25:05.028 [2024-07-12 17:14:04.544558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 [2024-07-12 17:14:04.544584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.028 qpair failed and we were unable to recover it. 00:25:05.028 [2024-07-12 17:14:04.544706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 [2024-07-12 17:14:04.544756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.028 qpair failed and we were unable to recover it. 00:25:05.028 [2024-07-12 17:14:04.544856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 [2024-07-12 17:14:04.544882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.028 qpair failed and we were unable to recover it. 00:25:05.028 [2024-07-12 17:14:04.545029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 [2024-07-12 17:14:04.545055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.028 qpair failed and we were unable to recover it. 00:25:05.028 [2024-07-12 17:14:04.545171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 [2024-07-12 17:14:04.545197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.028 qpair failed and we were unable to recover it. 00:25:05.028 [2024-07-12 17:14:04.545287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 [2024-07-12 17:14:04.545313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.028 qpair failed and we were unable to recover it. 00:25:05.028 [2024-07-12 17:14:04.545436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 [2024-07-12 17:14:04.545462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.028 qpair failed and we were unable to recover it. 00:25:05.028 [2024-07-12 17:14:04.545603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 [2024-07-12 17:14:04.545629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.028 qpair failed and we were unable to recover it. 00:25:05.028 [2024-07-12 17:14:04.545779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 [2024-07-12 17:14:04.545806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.028 qpair failed and we were unable to recover it. 
00:25:05.028 [2024-07-12 17:14:04.545901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 [2024-07-12 17:14:04.545927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.028 qpair failed and we were unable to recover it. 00:25:05.028 [2024-07-12 17:14:04.546023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 [2024-07-12 17:14:04.546048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.028 qpair failed and we were unable to recover it. 00:25:05.028 [2024-07-12 17:14:04.546227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 [2024-07-12 17:14:04.546253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.028 qpair failed and we were unable to recover it. 00:25:05.028 [2024-07-12 17:14:04.546454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.028 [2024-07-12 17:14:04.546480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 00:25:05.029 [2024-07-12 17:14:04.546632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.546658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 00:25:05.029 [2024-07-12 17:14:04.546804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.546830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 00:25:05.029 [2024-07-12 17:14:04.546930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.546956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 00:25:05.029 [2024-07-12 17:14:04.547047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.547084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 00:25:05.029 [2024-07-12 17:14:04.547239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.547265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 00:25:05.029 [2024-07-12 17:14:04.547469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.547495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 
00:25:05.029 [2024-07-12 17:14:04.547637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.547662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 00:25:05.029 [2024-07-12 17:14:04.547758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.547785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 00:25:05.029 [2024-07-12 17:14:04.547907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.547933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 00:25:05.029 [2024-07-12 17:14:04.548062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.548088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 00:25:05.029 [2024-07-12 17:14:04.548216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.548242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 00:25:05.029 [2024-07-12 17:14:04.548359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.548385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 00:25:05.029 [2024-07-12 17:14:04.548534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.548560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 00:25:05.029 [2024-07-12 17:14:04.548660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.548685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 00:25:05.029 [2024-07-12 17:14:04.548791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.548818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 00:25:05.029 [2024-07-12 17:14:04.548933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.548960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 
00:25:05.029 [2024-07-12 17:14:04.549111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.549136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 00:25:05.029 [2024-07-12 17:14:04.549284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.549310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 00:25:05.029 [2024-07-12 17:14:04.549530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.549556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 00:25:05.029 [2024-07-12 17:14:04.549757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.549784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 00:25:05.029 [2024-07-12 17:14:04.549877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.549903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 00:25:05.029 [2024-07-12 17:14:04.550024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.550050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 00:25:05.029 [2024-07-12 17:14:04.550259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.550296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 00:25:05.029 [2024-07-12 17:14:04.550593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.550620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 00:25:05.029 [2024-07-12 17:14:04.550780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.550806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 00:25:05.029 [2024-07-12 17:14:04.550916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.550942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 
00:25:05.029 [2024-07-12 17:14:04.551059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.551085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 00:25:05.029 [2024-07-12 17:14:04.551259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.551285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 00:25:05.029 [2024-07-12 17:14:04.551448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.551489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 00:25:05.029 [2024-07-12 17:14:04.551623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.551649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 00:25:05.029 [2024-07-12 17:14:04.551773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.551800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 00:25:05.029 [2024-07-12 17:14:04.551884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.551910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 00:25:05.029 [2024-07-12 17:14:04.551994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.552020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 00:25:05.029 [2024-07-12 17:14:04.552192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.552218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 00:25:05.029 [2024-07-12 17:14:04.552367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.552393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 00:25:05.029 [2024-07-12 17:14:04.552502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.552527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 
00:25:05.029 [2024-07-12 17:14:04.552673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.552699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 00:25:05.029 [2024-07-12 17:14:04.552821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.552848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 00:25:05.029 [2024-07-12 17:14:04.552972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.552998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 00:25:05.029 [2024-07-12 17:14:04.553182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.553216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 00:25:05.029 [2024-07-12 17:14:04.553442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.553476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 00:25:05.029 [2024-07-12 17:14:04.553564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.553590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 00:25:05.029 [2024-07-12 17:14:04.553701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.553727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 00:25:05.029 [2024-07-12 17:14:04.553860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.553886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 00:25:05.029 [2024-07-12 17:14:04.554008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.554034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 00:25:05.029 [2024-07-12 17:14:04.554184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.554210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 
00:25:05.029 [2024-07-12 17:14:04.554324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.554359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 00:25:05.029 [2024-07-12 17:14:04.554534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.554559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 00:25:05.029 [2024-07-12 17:14:04.554682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.554718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 00:25:05.029 [2024-07-12 17:14:04.554853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.554881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 00:25:05.029 [2024-07-12 17:14:04.554979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.555005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 00:25:05.029 [2024-07-12 17:14:04.555120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.555147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 00:25:05.029 [2024-07-12 17:14:04.555290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.555316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.029 qpair failed and we were unable to recover it. 00:25:05.029 [2024-07-12 17:14:04.555501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.029 [2024-07-12 17:14:04.555537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.030 qpair failed and we were unable to recover it. 00:25:05.030 [2024-07-12 17:14:04.555671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.030 [2024-07-12 17:14:04.555697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.030 qpair failed and we were unable to recover it. 00:25:05.030 [2024-07-12 17:14:04.555861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.030 [2024-07-12 17:14:04.555906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b68ea0 with addr=10.0.0.2, port=4420 00:25:05.030 qpair failed and we were unable to recover it. 
00:25:05.030 [2024-07-12 17:14:04.556044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.030 [2024-07-12 17:14:04.556095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.030 qpair failed and we were unable to recover it. 00:25:05.030 [2024-07-12 17:14:04.556279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.030 [2024-07-12 17:14:04.556308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.030 qpair failed and we were unable to recover it. 00:25:05.030 [2024-07-12 17:14:04.556457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.030 [2024-07-12 17:14:04.556483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.030 qpair failed and we were unable to recover it. 00:25:05.030 [2024-07-12 17:14:04.556608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.030 [2024-07-12 17:14:04.556634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.030 qpair failed and we were unable to recover it. 00:25:05.030 [2024-07-12 17:14:04.556724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.030 [2024-07-12 17:14:04.556766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.030 qpair failed and we were unable to recover it. 00:25:05.030 [2024-07-12 17:14:04.556897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.030 [2024-07-12 17:14:04.556924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.030 qpair failed and we were unable to recover it. 00:25:05.030 [2024-07-12 17:14:04.557018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.030 [2024-07-12 17:14:04.557045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.030 qpair failed and we were unable to recover it. 00:25:05.030 [2024-07-12 17:14:04.557171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.030 [2024-07-12 17:14:04.557197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.030 qpair failed and we were unable to recover it. 00:25:05.030 [2024-07-12 17:14:04.557345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.030 [2024-07-12 17:14:04.557372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.030 qpair failed and we were unable to recover it. 00:25:05.030 [2024-07-12 17:14:04.557495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.030 [2024-07-12 17:14:04.557521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.030 qpair failed and we were unable to recover it. 
00:25:05.030 [2024-07-12 17:14:04.557708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.030 [2024-07-12 17:14:04.557734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.030 qpair failed and we were unable to recover it. 00:25:05.030 [2024-07-12 17:14:04.557900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.030 [2024-07-12 17:14:04.557926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.030 qpair failed and we were unable to recover it. 00:25:05.030 [2024-07-12 17:14:04.558042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.030 [2024-07-12 17:14:04.558085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.030 qpair failed and we were unable to recover it. 00:25:05.030 [2024-07-12 17:14:04.558242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.030 [2024-07-12 17:14:04.558268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.030 qpair failed and we were unable to recover it. 00:25:05.030 [2024-07-12 17:14:04.558467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.030 [2024-07-12 17:14:04.558493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.030 qpair failed and we were unable to recover it. 00:25:05.030 [2024-07-12 17:14:04.558625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.030 [2024-07-12 17:14:04.558651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.030 qpair failed and we were unable to recover it. 00:25:05.030 [2024-07-12 17:14:04.558775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.030 [2024-07-12 17:14:04.558802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.030 qpair failed and we were unable to recover it. 00:25:05.030 [2024-07-12 17:14:04.558961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.030 [2024-07-12 17:14:04.558987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.030 qpair failed and we were unable to recover it. 00:25:05.030 [2024-07-12 17:14:04.559145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.030 [2024-07-12 17:14:04.559172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.030 qpair failed and we were unable to recover it. 00:25:05.030 [2024-07-12 17:14:04.559349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.030 [2024-07-12 17:14:04.559375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.030 qpair failed and we were unable to recover it. 
00:25:05.030 [2024-07-12 17:14:04.559528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.030 [2024-07-12 17:14:04.559554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.030 qpair failed and we were unable to recover it. 00:25:05.030 [2024-07-12 17:14:04.559668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.030 [2024-07-12 17:14:04.559694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.030 qpair failed and we were unable to recover it. 00:25:05.030 [2024-07-12 17:14:04.559826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.030 [2024-07-12 17:14:04.559853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.030 qpair failed and we were unable to recover it. 00:25:05.030 [2024-07-12 17:14:04.559943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.030 [2024-07-12 17:14:04.559969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.030 qpair failed and we were unable to recover it. 00:25:05.030 [2024-07-12 17:14:04.560074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.030 [2024-07-12 17:14:04.560100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.030 qpair failed and we were unable to recover it. 00:25:05.030 [2024-07-12 17:14:04.560225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.030 [2024-07-12 17:14:04.560251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.030 qpair failed and we were unable to recover it. 00:25:05.030 [2024-07-12 17:14:04.560426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.030 [2024-07-12 17:14:04.560452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.030 qpair failed and we were unable to recover it. 00:25:05.030 [2024-07-12 17:14:04.560641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.030 [2024-07-12 17:14:04.560667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.030 qpair failed and we were unable to recover it. 00:25:05.030 [2024-07-12 17:14:04.560829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.030 [2024-07-12 17:14:04.560856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.030 qpair failed and we were unable to recover it. 00:25:05.030 [2024-07-12 17:14:04.560943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.030 [2024-07-12 17:14:04.560969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.030 qpair failed and we were unable to recover it. 
00:25:05.030 [2024-07-12 17:14:04.561102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.030 [2024-07-12 17:14:04.561128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420
00:25:05.030 qpair failed and we were unable to recover it.
00:25:05.030 [2024-07-12 17:14:04.561374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.030 [2024-07-12 17:14:04.561408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420
00:25:05.030 qpair failed and we were unable to recover it.
00:25:05.030 [2024-07-12 17:14:04.561525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.030 [2024-07-12 17:14:04.561551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420
00:25:05.030 qpair failed and we were unable to recover it.
00:25:05.030 Malloc0
00:25:05.030 [2024-07-12 17:14:04.561699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.030 [2024-07-12 17:14:04.561725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420
00:25:05.030 qpair failed and we were unable to recover it.
00:25:05.030 [2024-07-12 17:14:04.561846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.030 [2024-07-12 17:14:04.561872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420
00:25:05.030 qpair failed and we were unable to recover it.
00:25:05.030 [2024-07-12 17:14:04.562018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.030 [2024-07-12 17:14:04.562044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420
00:25:05.030 17:14:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:05.030 qpair failed and we were unable to recover it.
00:25:05.030 [2024-07-12 17:14:04.562184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.030 [2024-07-12 17:14:04.562210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420
00:25:05.030 17:14:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:25:05.030 qpair failed and we were unable to recover it.
00:25:05.030 17:14:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:05.030 [2024-07-12 17:14:04.562365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.030 [2024-07-12 17:14:04.562391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420
00:25:05.030 qpair failed and we were unable to recover it.
00:25:05.030 17:14:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:05.030 [2024-07-12 17:14:04.562516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.030 [2024-07-12 17:14:04.562543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.030 qpair failed and we were unable to recover it. 00:25:05.030 [2024-07-12 17:14:04.562653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.030 [2024-07-12 17:14:04.562679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.030 qpair failed and we were unable to recover it. 00:25:05.030 [2024-07-12 17:14:04.562805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.030 [2024-07-12 17:14:04.562831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.030 qpair failed and we were unable to recover it. 00:25:05.030 [2024-07-12 17:14:04.562927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.030 [2024-07-12 17:14:04.562953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.030 qpair failed and we were unable to recover it. 00:25:05.030 [2024-07-12 17:14:04.563077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.030 [2024-07-12 17:14:04.563103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.030 qpair failed and we were unable to recover it. 00:25:05.030 [2024-07-12 17:14:04.563243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.563269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 00:25:05.031 [2024-07-12 17:14:04.563413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.563439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 00:25:05.031 [2024-07-12 17:14:04.563564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.563589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 00:25:05.031 [2024-07-12 17:14:04.563797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.563823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 
00:25:05.031 [2024-07-12 17:14:04.563908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.563934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 00:25:05.031 [2024-07-12 17:14:04.564046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.564083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 00:25:05.031 [2024-07-12 17:14:04.564211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.564237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 00:25:05.031 [2024-07-12 17:14:04.564434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.564467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 00:25:05.031 [2024-07-12 17:14:04.564619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.564645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 00:25:05.031 [2024-07-12 17:14:04.564771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.564798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 00:25:05.031 [2024-07-12 17:14:04.564914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.564941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 00:25:05.031 [2024-07-12 17:14:04.565044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.565070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 00:25:05.031 [2024-07-12 17:14:04.565194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.565228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 
00:25:05.031 [2024-07-12 17:14:04.565332] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:05.031 [2024-07-12 17:14:04.565384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.565409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 00:25:05.031 [2024-07-12 17:14:04.565579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.565605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 00:25:05.031 [2024-07-12 17:14:04.565795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.565822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 00:25:05.031 [2024-07-12 17:14:04.565922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.565948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 00:25:05.031 [2024-07-12 17:14:04.566045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.566071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 00:25:05.031 [2024-07-12 17:14:04.566214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.566240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 00:25:05.031 [2024-07-12 17:14:04.566344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.566370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 00:25:05.031 [2024-07-12 17:14:04.566513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.566539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 00:25:05.031 [2024-07-12 17:14:04.566678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.566704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 
00:25:05.031 [2024-07-12 17:14:04.566802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.566828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 00:25:05.031 [2024-07-12 17:14:04.566946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.566972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 00:25:05.031 [2024-07-12 17:14:04.567182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.567216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 00:25:05.031 [2024-07-12 17:14:04.567370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.567406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 00:25:05.031 [2024-07-12 17:14:04.567524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.567550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 00:25:05.031 [2024-07-12 17:14:04.567736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.567772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 00:25:05.031 [2024-07-12 17:14:04.567907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.567933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 00:25:05.031 [2024-07-12 17:14:04.568126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.568161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 00:25:05.031 [2024-07-12 17:14:04.568307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.568333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 00:25:05.031 [2024-07-12 17:14:04.568497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.568523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 
00:25:05.031 [2024-07-12 17:14:04.568649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.568675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 00:25:05.031 [2024-07-12 17:14:04.568798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.568825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 00:25:05.031 [2024-07-12 17:14:04.568970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.569012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 00:25:05.031 [2024-07-12 17:14:04.569163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.569191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 00:25:05.031 [2024-07-12 17:14:04.569346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.569373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 00:25:05.031 [2024-07-12 17:14:04.569476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.569503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 00:25:05.031 [2024-07-12 17:14:04.569641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.569667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 00:25:05.031 [2024-07-12 17:14:04.569793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.569820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 00:25:05.031 [2024-07-12 17:14:04.569904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.569930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 00:25:05.031 [2024-07-12 17:14:04.570053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.570079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 
00:25:05.031 [2024-07-12 17:14:04.570195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.570221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 00:25:05.031 [2024-07-12 17:14:04.570349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.570376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 00:25:05.031 [2024-07-12 17:14:04.570498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.570524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 00:25:05.031 [2024-07-12 17:14:04.570660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.570686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 00:25:05.031 [2024-07-12 17:14:04.570804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.570831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 00:25:05.031 [2024-07-12 17:14:04.570987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.571017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 00:25:05.031 [2024-07-12 17:14:04.571196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.571231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 00:25:05.031 [2024-07-12 17:14:04.571378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.571405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 00:25:05.031 [2024-07-12 17:14:04.571498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.571525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 00:25:05.031 [2024-07-12 17:14:04.571650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.571677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 
00:25:05.031 [2024-07-12 17:14:04.571773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.571801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 00:25:05.031 [2024-07-12 17:14:04.571926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.031 [2024-07-12 17:14:04.571952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.031 qpair failed and we were unable to recover it. 00:25:05.032 [2024-07-12 17:14:04.572038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.032 [2024-07-12 17:14:04.572065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.032 qpair failed and we were unable to recover it. 00:25:05.032 [2024-07-12 17:14:04.572207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.032 [2024-07-12 17:14:04.572234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.032 qpair failed and we were unable to recover it. 00:25:05.032 [2024-07-12 17:14:04.572385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.032 [2024-07-12 17:14:04.572411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.032 qpair failed and we were unable to recover it. 00:25:05.032 [2024-07-12 17:14:04.572549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.032 [2024-07-12 17:14:04.572576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.032 qpair failed and we were unable to recover it. 00:25:05.032 [2024-07-12 17:14:04.572835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.032 [2024-07-12 17:14:04.572876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.032 qpair failed and we were unable to recover it. 00:25:05.032 [2024-07-12 17:14:04.573011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.032 [2024-07-12 17:14:04.573048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.032 qpair failed and we were unable to recover it. 00:25:05.032 [2024-07-12 17:14:04.573199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.032 [2024-07-12 17:14:04.573226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.032 qpair failed and we were unable to recover it. 00:25:05.032 [2024-07-12 17:14:04.573394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.032 [2024-07-12 17:14:04.573420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.032 qpair failed and we were unable to recover it. 
00:25:05.032 [2024-07-12 17:14:04.573583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.032 17:14:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.032 [2024-07-12 17:14:04.573608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.032 qpair failed and we were unable to recover it. 00:25:05.032 17:14:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:05.032 [2024-07-12 17:14:04.573819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.032 [2024-07-12 17:14:04.573845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.032 qpair failed and we were unable to recover it. 00:25:05.032 17:14:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.032 17:14:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:05.032 [2024-07-12 17:14:04.573983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.032 [2024-07-12 17:14:04.574009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.032 qpair failed and we were unable to recover it. 00:25:05.032 [2024-07-12 17:14:04.574199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.032 [2024-07-12 17:14:04.574225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.032 qpair failed and we were unable to recover it. 00:25:05.032 [2024-07-12 17:14:04.574340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.032 [2024-07-12 17:14:04.574366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.032 qpair failed and we were unable to recover it. 00:25:05.032 [2024-07-12 17:14:04.574497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.032 [2024-07-12 17:14:04.574523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.032 qpair failed and we were unable to recover it. 00:25:05.032 [2024-07-12 17:14:04.574667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.032 [2024-07-12 17:14:04.574693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.032 qpair failed and we were unable to recover it. 00:25:05.032 [2024-07-12 17:14:04.574804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.032 [2024-07-12 17:14:04.574830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.032 qpair failed and we were unable to recover it. 
00:25:05.032 [2024-07-12 17:14:04.574986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.032 [2024-07-12 17:14:04.575012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.032 qpair failed and we were unable to recover it. 00:25:05.032 [2024-07-12 17:14:04.575156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.032 [2024-07-12 17:14:04.575182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.032 qpair failed and we were unable to recover it. 00:25:05.032 [2024-07-12 17:14:04.575334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.032 [2024-07-12 17:14:04.575365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.032 qpair failed and we were unable to recover it. 00:25:05.032 [2024-07-12 17:14:04.575498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.032 [2024-07-12 17:14:04.575524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.032 qpair failed and we were unable to recover it. 00:25:05.032 [2024-07-12 17:14:04.575673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.032 [2024-07-12 17:14:04.575699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.032 qpair failed and we were unable to recover it. 00:25:05.032 [2024-07-12 17:14:04.575806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.032 [2024-07-12 17:14:04.575832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.032 qpair failed and we were unable to recover it. 00:25:05.032 [2024-07-12 17:14:04.575980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.032 [2024-07-12 17:14:04.576006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.032 qpair failed and we were unable to recover it. 00:25:05.032 [2024-07-12 17:14:04.576110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.032 [2024-07-12 17:14:04.576136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.032 qpair failed and we were unable to recover it. 00:25:05.032 [2024-07-12 17:14:04.576324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.032 [2024-07-12 17:14:04.576350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.032 qpair failed and we were unable to recover it. 00:25:05.032 [2024-07-12 17:14:04.576432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.032 [2024-07-12 17:14:04.576469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.032 qpair failed and we were unable to recover it. 
00:25:05.032 [2024-07-12 17:14:04.576609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.032 [2024-07-12 17:14:04.576635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.032 qpair failed and we were unable to recover it. 00:25:05.032 [2024-07-12 17:14:04.576736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.032 [2024-07-12 17:14:04.576781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.032 qpair failed and we were unable to recover it. 00:25:05.032 [2024-07-12 17:14:04.576871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.032 [2024-07-12 17:14:04.576906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.032 qpair failed and we were unable to recover it. 00:25:05.032 [2024-07-12 17:14:04.577084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.032 [2024-07-12 17:14:04.577110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.032 qpair failed and we were unable to recover it. 00:25:05.032 [2024-07-12 17:14:04.577271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.032 [2024-07-12 17:14:04.577308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.032 qpair failed and we were unable to recover it. 00:25:05.032 [2024-07-12 17:14:04.577468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.032 [2024-07-12 17:14:04.577494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.032 qpair failed and we were unable to recover it. 00:25:05.032 [2024-07-12 17:14:04.577672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.032 [2024-07-12 17:14:04.577698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.032 qpair failed and we were unable to recover it. 00:25:05.032 [2024-07-12 17:14:04.577861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.032 [2024-07-12 17:14:04.577888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.032 qpair failed and we were unable to recover it. 00:25:05.032 [2024-07-12 17:14:04.577997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.032 [2024-07-12 17:14:04.578028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.032 qpair failed and we were unable to recover it. 00:25:05.032 [2024-07-12 17:14:04.578123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.032 [2024-07-12 17:14:04.578149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.032 qpair failed and we were unable to recover it. 
00:25:05.032 [2024-07-12 17:14:04.578278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.032 [2024-07-12 17:14:04.578304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.032 qpair failed and we were unable to recover it. 00:25:05.032 [2024-07-12 17:14:04.578402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.032 [2024-07-12 17:14:04.578428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.032 qpair failed and we were unable to recover it. 00:25:05.032 [2024-07-12 17:14:04.578558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.032 [2024-07-12 17:14:04.578584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.032 qpair failed and we were unable to recover it. 00:25:05.032 [2024-07-12 17:14:04.578667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.032 [2024-07-12 17:14:04.578693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.032 qpair failed and we were unable to recover it. 00:25:05.032 [2024-07-12 17:14:04.578801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.032 [2024-07-12 17:14:04.578841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.032 qpair failed and we were unable to recover it. 00:25:05.032 [2024-07-12 17:14:04.578993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.032 [2024-07-12 17:14:04.579021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.032 qpair failed and we were unable to recover it. 00:25:05.032 [2024-07-12 17:14:04.579169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.032 [2024-07-12 17:14:04.579196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.032 qpair failed and we were unable to recover it. 00:25:05.032 [2024-07-12 17:14:04.579318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.032 [2024-07-12 17:14:04.579344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.032 qpair failed and we were unable to recover it. 00:25:05.032 [2024-07-12 17:14:04.579437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.032 [2024-07-12 17:14:04.579463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.032 qpair failed and we were unable to recover it. 00:25:05.032 [2024-07-12 17:14:04.579581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.032 [2024-07-12 17:14:04.579608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab6c000b90 with addr=10.0.0.2, port=4420 00:25:05.032 qpair failed and we were unable to recover it. 
00:25:05.032 [2024-07-12 17:14:04.579756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.032 [2024-07-12 17:14:04.579784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.032 qpair failed and we were unable to recover it. 00:25:05.032 [2024-07-12 17:14:04.579891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.032 [2024-07-12 17:14:04.579917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.032 qpair failed and we were unable to recover it. 00:25:05.032 [2024-07-12 17:14:04.580063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.032 [2024-07-12 17:14:04.580089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.032 qpair failed and we were unable to recover it. 00:25:05.032 [2024-07-12 17:14:04.580223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.032 [2024-07-12 17:14:04.580249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.032 qpair failed and we were unable to recover it. 00:25:05.032 [2024-07-12 17:14:04.580390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.032 [2024-07-12 17:14:04.580416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.032 qpair failed and we were unable to recover it. 00:25:05.032 [2024-07-12 17:14:04.580531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.032 [2024-07-12 17:14:04.580557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.032 qpair failed and we were unable to recover it. 00:25:05.033 [2024-07-12 17:14:04.580673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.033 [2024-07-12 17:14:04.580699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.033 qpair failed and we were unable to recover it. 00:25:05.033 [2024-07-12 17:14:04.580840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.033 [2024-07-12 17:14:04.580872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.033 qpair failed and we were unable to recover it. 00:25:05.033 [2024-07-12 17:14:04.581038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.033 [2024-07-12 17:14:04.581064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.033 qpair failed and we were unable to recover it. 00:25:05.033 [2024-07-12 17:14:04.581264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.033 [2024-07-12 17:14:04.581290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.033 qpair failed and we were unable to recover it. 
00:25:05.033 [2024-07-12 17:14:04.581424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.033 [2024-07-12 17:14:04.581450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.033 qpair failed and we were unable to recover it. 00:25:05.033 17:14:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.033 [2024-07-12 17:14:04.581614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.033 [2024-07-12 17:14:04.581640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.033 qpair failed and we were unable to recover it. 00:25:05.033 17:14:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:05.033 [2024-07-12 17:14:04.581799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.033 [2024-07-12 17:14:04.581826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.033 qpair failed and we were unable to recover it. 00:25:05.033 17:14:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.033 17:14:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:05.033 [2024-07-12 17:14:04.582012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.033 [2024-07-12 17:14:04.582038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.033 qpair failed and we were unable to recover it. 00:25:05.033 [2024-07-12 17:14:04.582174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.033 [2024-07-12 17:14:04.582200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.033 qpair failed and we were unable to recover it. 00:25:05.033 [2024-07-12 17:14:04.582309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.033 [2024-07-12 17:14:04.582334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.033 qpair failed and we were unable to recover it. 00:25:05.033 [2024-07-12 17:14:04.582453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.033 [2024-07-12 17:14:04.582479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.033 qpair failed and we were unable to recover it. 00:25:05.033 [2024-07-12 17:14:04.582577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.033 [2024-07-12 17:14:04.582603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.033 qpair failed and we were unable to recover it. 
00:25:05.033 [2024-07-12 17:14:04.582733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.033 [2024-07-12 17:14:04.582774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.033 qpair failed and we were unable to recover it. 00:25:05.033 [2024-07-12 17:14:04.582894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.033 [2024-07-12 17:14:04.582920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.033 qpair failed and we were unable to recover it. 00:25:05.033 [2024-07-12 17:14:04.583100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.033 [2024-07-12 17:14:04.583126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.033 qpair failed and we were unable to recover it. 00:25:05.033 [2024-07-12 17:14:04.583267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.033 [2024-07-12 17:14:04.583293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.033 qpair failed and we were unable to recover it. 00:25:05.033 [2024-07-12 17:14:04.583406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.033 [2024-07-12 17:14:04.583432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.033 qpair failed and we were unable to recover it. 00:25:05.033 [2024-07-12 17:14:04.583551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.033 [2024-07-12 17:14:04.583577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.033 qpair failed and we were unable to recover it. 00:25:05.033 [2024-07-12 17:14:04.583673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.033 [2024-07-12 17:14:04.583699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.033 qpair failed and we were unable to recover it. 00:25:05.033 [2024-07-12 17:14:04.583819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.033 [2024-07-12 17:14:04.583846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.033 qpair failed and we were unable to recover it. 00:25:05.033 [2024-07-12 17:14:04.583955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.033 [2024-07-12 17:14:04.583980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.033 qpair failed and we were unable to recover it. 00:25:05.033 [2024-07-12 17:14:04.584146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.033 [2024-07-12 17:14:04.584172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.033 qpair failed and we were unable to recover it. 
00:25:05.033 [2024-07-12 17:14:04.584304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.033 [2024-07-12 17:14:04.584330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.033 qpair failed and we were unable to recover it. 00:25:05.033 [2024-07-12 17:14:04.584415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.033 [2024-07-12 17:14:04.584441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.033 qpair failed and we were unable to recover it. 00:25:05.033 [2024-07-12 17:14:04.584589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.033 [2024-07-12 17:14:04.584615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.033 qpair failed and we were unable to recover it. 00:25:05.033 [2024-07-12 17:14:04.584701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.033 [2024-07-12 17:14:04.584726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.033 qpair failed and we were unable to recover it. 00:25:05.033 [2024-07-12 17:14:04.584857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.033 [2024-07-12 17:14:04.584883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.033 qpair failed and we were unable to recover it. 00:25:05.033 [2024-07-12 17:14:04.584994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.033 [2024-07-12 17:14:04.585020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.033 qpair failed and we were unable to recover it. 00:25:05.033 [2024-07-12 17:14:04.585141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.033 [2024-07-12 17:14:04.585167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.033 qpair failed and we were unable to recover it. 00:25:05.033 [2024-07-12 17:14:04.585283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.033 [2024-07-12 17:14:04.585309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.033 qpair failed and we were unable to recover it. 00:25:05.033 [2024-07-12 17:14:04.585449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.033 [2024-07-12 17:14:04.585475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.033 qpair failed and we were unable to recover it. 00:25:05.033 [2024-07-12 17:14:04.585621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.033 [2024-07-12 17:14:04.585651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.033 qpair failed and we were unable to recover it. 
00:25:05.033 [2024-07-12 17:14:04.585831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.033 [2024-07-12 17:14:04.585858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.033 qpair failed and we were unable to recover it. 00:25:05.033 [2024-07-12 17:14:04.585995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.033 [2024-07-12 17:14:04.586030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.033 qpair failed and we were unable to recover it. 00:25:05.033 [2024-07-12 17:14:04.586173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.033 [2024-07-12 17:14:04.586199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.033 qpair failed and we were unable to recover it. 00:25:05.033 [2024-07-12 17:14:04.586351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.033 [2024-07-12 17:14:04.586377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.033 qpair failed and we were unable to recover it. 00:25:05.033 [2024-07-12 17:14:04.586490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.033 [2024-07-12 17:14:04.586517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.033 qpair failed and we were unable to recover it. 00:25:05.033 [2024-07-12 17:14:04.586624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.033 [2024-07-12 17:14:04.586651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.033 qpair failed and we were unable to recover it. 00:25:05.033 [2024-07-12 17:14:04.586748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.033 [2024-07-12 17:14:04.586775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.033 qpair failed and we were unable to recover it. 00:25:05.033 [2024-07-12 17:14:04.586919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.033 [2024-07-12 17:14:04.586945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.033 qpair failed and we were unable to recover it. 00:25:05.033 [2024-07-12 17:14:04.587256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.033 [2024-07-12 17:14:04.587281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.033 qpair failed and we were unable to recover it. 00:25:05.033 [2024-07-12 17:14:04.587535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.033 [2024-07-12 17:14:04.587561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.033 qpair failed and we were unable to recover it. 
00:25:05.033 [2024-07-12 17:14:04.587711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.033 [2024-07-12 17:14:04.587750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.033 qpair failed and we were unable to recover it. 00:25:05.033 [2024-07-12 17:14:04.587984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.033 [2024-07-12 17:14:04.588017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.033 qpair failed and we were unable to recover it. 00:25:05.033 [2024-07-12 17:14:04.588143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.033 [2024-07-12 17:14:04.588169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.033 qpair failed and we were unable to recover it. 00:25:05.033 [2024-07-12 17:14:04.588293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.033 [2024-07-12 17:14:04.588320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.033 qpair failed and we were unable to recover it. 00:25:05.033 [2024-07-12 17:14:04.588449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.033 [2024-07-12 17:14:04.588475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.033 qpair failed and we were unable to recover it. 00:25:05.033 [2024-07-12 17:14:04.588660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.034 [2024-07-12 17:14:04.588686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.034 qpair failed and we were unable to recover it. 00:25:05.034 [2024-07-12 17:14:04.588851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.034 [2024-07-12 17:14:04.588878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.034 qpair failed and we were unable to recover it. 00:25:05.034 [2024-07-12 17:14:04.589086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.034 [2024-07-12 17:14:04.589120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.034 qpair failed and we were unable to recover it. 00:25:05.034 [2024-07-12 17:14:04.589283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.034 [2024-07-12 17:14:04.589320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.034 qpair failed and we were unable to recover it. 00:25:05.034 [2024-07-12 17:14:04.589511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.034 [2024-07-12 17:14:04.589536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.034 qpair failed and we were unable to recover it. 
00:25:05.034 17:14:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.034 [2024-07-12 17:14:04.589650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.034 [2024-07-12 17:14:04.589677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.034 qpair failed and we were unable to recover it. 00:25:05.034 17:14:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:05.034 [2024-07-12 17:14:04.589827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.034 [2024-07-12 17:14:04.589854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.034 qpair failed and we were unable to recover it. 00:25:05.034 17:14:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.034 [2024-07-12 17:14:04.589943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.034 [2024-07-12 17:14:04.589968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.034 qpair failed and we were unable to recover it. 00:25:05.034 17:14:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:05.034 [2024-07-12 17:14:04.590091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.034 [2024-07-12 17:14:04.590117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.034 qpair failed and we were unable to recover it. 00:25:05.034 [2024-07-12 17:14:04.590201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.034 [2024-07-12 17:14:04.590231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.034 qpair failed and we were unable to recover it. 00:25:05.034 [2024-07-12 17:14:04.590384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.034 [2024-07-12 17:14:04.590410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.034 qpair failed and we were unable to recover it. 00:25:05.034 [2024-07-12 17:14:04.590580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.034 [2024-07-12 17:14:04.590606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.034 qpair failed and we were unable to recover it. 00:25:05.034 [2024-07-12 17:14:04.590743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.034 [2024-07-12 17:14:04.590770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.034 qpair failed and we were unable to recover it. 
00:25:05.034 [2024-07-12 17:14:04.590870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.034 [2024-07-12 17:14:04.590896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.034 qpair failed and we were unable to recover it. 00:25:05.034 [2024-07-12 17:14:04.591084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.034 [2024-07-12 17:14:04.591110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.034 qpair failed and we were unable to recover it. 00:25:05.034 [2024-07-12 17:14:04.591270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.034 [2024-07-12 17:14:04.591296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.034 qpair failed and we were unable to recover it. 00:25:05.034 [2024-07-12 17:14:04.591485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.034 [2024-07-12 17:14:04.591522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.034 qpair failed and we were unable to recover it. 00:25:05.034 [2024-07-12 17:14:04.591744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.034 [2024-07-12 17:14:04.591781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.034 qpair failed and we were unable to recover it. 00:25:05.034 [2024-07-12 17:14:04.591884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.034 [2024-07-12 17:14:04.591909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.034 qpair failed and we were unable to recover it. 00:25:05.034 [2024-07-12 17:14:04.592047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.034 [2024-07-12 17:14:04.592084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.034 qpair failed and we were unable to recover it. 00:25:05.034 [2024-07-12 17:14:04.592208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.034 [2024-07-12 17:14:04.592234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.034 qpair failed and we were unable to recover it. 00:25:05.034 [2024-07-12 17:14:04.592445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.034 [2024-07-12 17:14:04.592483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.034 qpair failed and we were unable to recover it. 00:25:05.034 [2024-07-12 17:14:04.592612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.034 [2024-07-12 17:14:04.592637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.034 qpair failed and we were unable to recover it. 
00:25:05.034 [2024-07-12 17:14:04.592759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.034 [2024-07-12 17:14:04.592786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.034 qpair failed and we were unable to recover it. 00:25:05.034 [2024-07-12 17:14:04.592873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.034 [2024-07-12 17:14:04.592898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.034 qpair failed and we were unable to recover it. 00:25:05.034 [2024-07-12 17:14:04.592993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.034 [2024-07-12 17:14:04.593019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.034 qpair failed and we were unable to recover it. 00:25:05.034 [2024-07-12 17:14:04.593165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.034 [2024-07-12 17:14:04.593191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.034 qpair failed and we were unable to recover it. 00:25:05.034 [2024-07-12 17:14:04.593338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.034 [2024-07-12 17:14:04.593363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab74000b90 with addr=10.0.0.2, port=4420 00:25:05.034 qpair failed and we were unable to recover it. 00:25:05.034 [2024-07-12 17:14:04.593558] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:05.034 [2024-07-12 17:14:04.596092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.034 [2024-07-12 17:14:04.596216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.034 [2024-07-12 17:14:04.596244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.034 [2024-07-12 17:14:04.596260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.034 [2024-07-12 17:14:04.596273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.034 [2024-07-12 17:14:04.596307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.034 qpair failed and we were unable to recover it. 
00:25:05.034 17:14:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.034 17:14:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:05.034 17:14:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.034 17:14:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:05.034 17:14:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.034 17:14:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1229214 00:25:05.034 [2024-07-12 17:14:04.605934] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.034 [2024-07-12 17:14:04.606027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.034 [2024-07-12 17:14:04.606055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.034 [2024-07-12 17:14:04.606070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.034 [2024-07-12 17:14:04.606088] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.034 [2024-07-12 17:14:04.606119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.034 qpair failed and we were unable to recover it. 00:25:05.034 [2024-07-12 17:14:04.615979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.034 [2024-07-12 17:14:04.616081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.034 [2024-07-12 17:14:04.616110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.034 [2024-07-12 17:14:04.616126] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.034 [2024-07-12 17:14:04.616138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.034 [2024-07-12 17:14:04.616169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.034 qpair failed and we were unable to recover it. 
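The rpc_cmd calls traced above (nvmf_create_subsystem, nvmf_subsystem_add_ns, and nvmf_subsystem_add_listener for both the data subsystem and discovery) are thin wrappers around SPDK's scripts/rpc.py. As a rough sketch reconstructed from those traced commands, the same target-side setup could be issued by hand as below; the nvmf_create_transport step and the creation of the Malloc0 bdev are assumed to have happened earlier in the run and are not shown in this excerpt, and the default /var/tmp/spdk.sock RPC socket is assumed.
# Sketch only; assumes the nvmf target app is already running and Malloc0 already exists.
scripts/rpc.py nvmf_create_transport -t tcp
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420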
00:25:05.034 [2024-07-12 17:14:04.625907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.034 [2024-07-12 17:14:04.626008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.034 [2024-07-12 17:14:04.626047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.034 [2024-07-12 17:14:04.626078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.034 [2024-07-12 17:14:04.626099] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.034 [2024-07-12 17:14:04.626129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.034 qpair failed and we were unable to recover it. 00:25:05.034 [2024-07-12 17:14:04.635936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.034 [2024-07-12 17:14:04.636069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.034 [2024-07-12 17:14:04.636097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.034 [2024-07-12 17:14:04.636112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.034 [2024-07-12 17:14:04.636124] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.034 [2024-07-12 17:14:04.636154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.034 qpair failed and we were unable to recover it. 00:25:05.034 [2024-07-12 17:14:04.645942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.034 [2024-07-12 17:14:04.646035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.034 [2024-07-12 17:14:04.646062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.034 [2024-07-12 17:14:04.646078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.034 [2024-07-12 17:14:04.646091] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.034 [2024-07-12 17:14:04.646122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.034 qpair failed and we were unable to recover it. 
00:25:05.034 [2024-07-12 17:14:04.655947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.034 [2024-07-12 17:14:04.656042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.034 [2024-07-12 17:14:04.656071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.034 [2024-07-12 17:14:04.656086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.034 [2024-07-12 17:14:04.656098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.034 [2024-07-12 17:14:04.656128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.034 qpair failed and we were unable to recover it. 00:25:05.034 [2024-07-12 17:14:04.666065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.034 [2024-07-12 17:14:04.666160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.034 [2024-07-12 17:14:04.666187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.034 [2024-07-12 17:14:04.666202] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.034 [2024-07-12 17:14:04.666215] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.035 [2024-07-12 17:14:04.666245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.035 qpair failed and we were unable to recover it. 00:25:05.293 [2024-07-12 17:14:04.676113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.293 [2024-07-12 17:14:04.676247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.293 [2024-07-12 17:14:04.676275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.293 [2024-07-12 17:14:04.676291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.293 [2024-07-12 17:14:04.676303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.293 [2024-07-12 17:14:04.676332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.293 qpair failed and we were unable to recover it. 
00:25:05.293 [2024-07-12 17:14:04.686053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.293 [2024-07-12 17:14:04.686152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.293 [2024-07-12 17:14:04.686179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.293 [2024-07-12 17:14:04.686194] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.294 [2024-07-12 17:14:04.686206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.294 [2024-07-12 17:14:04.686236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.294 qpair failed and we were unable to recover it. 00:25:05.294 [2024-07-12 17:14:04.696142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.294 [2024-07-12 17:14:04.696242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.294 [2024-07-12 17:14:04.696269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.294 [2024-07-12 17:14:04.696284] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.294 [2024-07-12 17:14:04.696301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.294 [2024-07-12 17:14:04.696331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.294 qpair failed and we were unable to recover it. 00:25:05.294 [2024-07-12 17:14:04.706143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.294 [2024-07-12 17:14:04.706236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.294 [2024-07-12 17:14:04.706262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.294 [2024-07-12 17:14:04.706277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.294 [2024-07-12 17:14:04.706290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.294 [2024-07-12 17:14:04.706319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.294 qpair failed and we were unable to recover it. 
00:25:05.294 [2024-07-12 17:14:04.716208] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.294 [2024-07-12 17:14:04.716299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.294 [2024-07-12 17:14:04.716326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.294 [2024-07-12 17:14:04.716341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.294 [2024-07-12 17:14:04.716353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.294 [2024-07-12 17:14:04.716382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.294 qpair failed and we were unable to recover it. 00:25:05.294 [2024-07-12 17:14:04.726198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.294 [2024-07-12 17:14:04.726284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.294 [2024-07-12 17:14:04.726310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.294 [2024-07-12 17:14:04.726325] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.294 [2024-07-12 17:14:04.726337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.294 [2024-07-12 17:14:04.726366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.294 qpair failed and we were unable to recover it. 00:25:05.294 [2024-07-12 17:14:04.736300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.294 [2024-07-12 17:14:04.736392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.294 [2024-07-12 17:14:04.736418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.294 [2024-07-12 17:14:04.736432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.294 [2024-07-12 17:14:04.736445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.294 [2024-07-12 17:14:04.736473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.294 qpair failed and we were unable to recover it. 
00:25:05.294 [2024-07-12 17:14:04.746281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.294 [2024-07-12 17:14:04.746377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.294 [2024-07-12 17:14:04.746403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.294 [2024-07-12 17:14:04.746418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.294 [2024-07-12 17:14:04.746430] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.294 [2024-07-12 17:14:04.746459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.294 qpair failed and we were unable to recover it. 00:25:05.294 [2024-07-12 17:14:04.756296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.294 [2024-07-12 17:14:04.756387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.294 [2024-07-12 17:14:04.756414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.294 [2024-07-12 17:14:04.756429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.294 [2024-07-12 17:14:04.756441] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.294 [2024-07-12 17:14:04.756469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.294 qpair failed and we were unable to recover it. 00:25:05.294 [2024-07-12 17:14:04.766369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.294 [2024-07-12 17:14:04.766476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.294 [2024-07-12 17:14:04.766502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.294 [2024-07-12 17:14:04.766517] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.294 [2024-07-12 17:14:04.766530] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.294 [2024-07-12 17:14:04.766558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.294 qpair failed and we were unable to recover it. 
00:25:05.294 [2024-07-12 17:14:04.776362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.294 [2024-07-12 17:14:04.776451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.294 [2024-07-12 17:14:04.776477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.294 [2024-07-12 17:14:04.776491] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.294 [2024-07-12 17:14:04.776503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.294 [2024-07-12 17:14:04.776532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.294 qpair failed and we were unable to recover it. 00:25:05.294 [2024-07-12 17:14:04.786312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.294 [2024-07-12 17:14:04.786401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.294 [2024-07-12 17:14:04.786426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.294 [2024-07-12 17:14:04.786447] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.294 [2024-07-12 17:14:04.786459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.294 [2024-07-12 17:14:04.786488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.294 qpair failed and we were unable to recover it. 00:25:05.294 [2024-07-12 17:14:04.796397] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.294 [2024-07-12 17:14:04.796487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.294 [2024-07-12 17:14:04.796514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.294 [2024-07-12 17:14:04.796529] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.294 [2024-07-12 17:14:04.796542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.294 [2024-07-12 17:14:04.796571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.294 qpair failed and we were unable to recover it. 
00:25:05.294 [2024-07-12 17:14:04.806439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.294 [2024-07-12 17:14:04.806541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.294 [2024-07-12 17:14:04.806567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.294 [2024-07-12 17:14:04.806581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.294 [2024-07-12 17:14:04.806594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.294 [2024-07-12 17:14:04.806622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.294 qpair failed and we were unable to recover it. 00:25:05.294 [2024-07-12 17:14:04.816490] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.294 [2024-07-12 17:14:04.816588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.294 [2024-07-12 17:14:04.816614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.294 [2024-07-12 17:14:04.816629] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.294 [2024-07-12 17:14:04.816641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.294 [2024-07-12 17:14:04.816669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.294 qpair failed and we were unable to recover it. 00:25:05.294 [2024-07-12 17:14:04.826490] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.294 [2024-07-12 17:14:04.826631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.294 [2024-07-12 17:14:04.826657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.294 [2024-07-12 17:14:04.826671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.295 [2024-07-12 17:14:04.826684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.295 [2024-07-12 17:14:04.826745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.295 qpair failed and we were unable to recover it. 
00:25:05.295 [2024-07-12 17:14:04.836541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.295 [2024-07-12 17:14:04.836627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.295 [2024-07-12 17:14:04.836652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.295 [2024-07-12 17:14:04.836667] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.295 [2024-07-12 17:14:04.836680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.295 [2024-07-12 17:14:04.836708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.295 qpair failed and we were unable to recover it. 00:25:05.295 [2024-07-12 17:14:04.846525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.295 [2024-07-12 17:14:04.846621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.295 [2024-07-12 17:14:04.846647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.295 [2024-07-12 17:14:04.846662] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.295 [2024-07-12 17:14:04.846674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.295 [2024-07-12 17:14:04.846703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.295 qpair failed and we were unable to recover it. 00:25:05.295 [2024-07-12 17:14:04.856562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.295 [2024-07-12 17:14:04.856647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.295 [2024-07-12 17:14:04.856673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.295 [2024-07-12 17:14:04.856688] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.295 [2024-07-12 17:14:04.856700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.295 [2024-07-12 17:14:04.856751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.295 qpair failed and we were unable to recover it. 
00:25:05.295 [2024-07-12 17:14:04.866602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.295 [2024-07-12 17:14:04.866693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.295 [2024-07-12 17:14:04.866732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.295 [2024-07-12 17:14:04.866756] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.295 [2024-07-12 17:14:04.866768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.295 [2024-07-12 17:14:04.866799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.295 qpair failed and we were unable to recover it. 00:25:05.295 [2024-07-12 17:14:04.876624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.295 [2024-07-12 17:14:04.876745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.295 [2024-07-12 17:14:04.876800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.295 [2024-07-12 17:14:04.876818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.295 [2024-07-12 17:14:04.876830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.295 [2024-07-12 17:14:04.876872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.295 qpair failed and we were unable to recover it. 00:25:05.295 [2024-07-12 17:14:04.886649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.295 [2024-07-12 17:14:04.886761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.295 [2024-07-12 17:14:04.886786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.295 [2024-07-12 17:14:04.886802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.295 [2024-07-12 17:14:04.886815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.295 [2024-07-12 17:14:04.886845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.295 qpair failed and we were unable to recover it. 
00:25:05.295 [2024-07-12 17:14:04.896681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.295 [2024-07-12 17:14:04.896794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.295 [2024-07-12 17:14:04.896820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.295 [2024-07-12 17:14:04.896835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.295 [2024-07-12 17:14:04.896848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.295 [2024-07-12 17:14:04.896877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.295 qpair failed and we were unable to recover it. 00:25:05.295 [2024-07-12 17:14:04.906708] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.295 [2024-07-12 17:14:04.906833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.295 [2024-07-12 17:14:04.906859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.295 [2024-07-12 17:14:04.906875] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.295 [2024-07-12 17:14:04.906888] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.295 [2024-07-12 17:14:04.906918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.295 qpair failed and we were unable to recover it. 00:25:05.295 [2024-07-12 17:14:04.916786] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.295 [2024-07-12 17:14:04.916906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.295 [2024-07-12 17:14:04.916932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.295 [2024-07-12 17:14:04.916947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.295 [2024-07-12 17:14:04.916960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.295 [2024-07-12 17:14:04.917007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.295 qpair failed and we were unable to recover it. 
00:25:05.295 [2024-07-12 17:14:04.926764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.295 [2024-07-12 17:14:04.926859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.295 [2024-07-12 17:14:04.926885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.295 [2024-07-12 17:14:04.926900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.295 [2024-07-12 17:14:04.926913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.295 [2024-07-12 17:14:04.926944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.295 qpair failed and we were unable to recover it. 00:25:05.295 [2024-07-12 17:14:04.936796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.295 [2024-07-12 17:14:04.936882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.295 [2024-07-12 17:14:04.936909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.295 [2024-07-12 17:14:04.936925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.295 [2024-07-12 17:14:04.936937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.295 [2024-07-12 17:14:04.936967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.295 qpair failed and we were unable to recover it. 00:25:05.295 [2024-07-12 17:14:04.946831] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.295 [2024-07-12 17:14:04.946963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.295 [2024-07-12 17:14:04.946990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.295 [2024-07-12 17:14:04.947005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.295 [2024-07-12 17:14:04.947017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.295 [2024-07-12 17:14:04.947047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.295 qpair failed and we were unable to recover it. 
00:25:05.295 [2024-07-12 17:14:04.956835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.295 [2024-07-12 17:14:04.956925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.295 [2024-07-12 17:14:04.956952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.295 [2024-07-12 17:14:04.956967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.295 [2024-07-12 17:14:04.956979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.295 [2024-07-12 17:14:04.957010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.295 qpair failed and we were unable to recover it. 00:25:05.295 [2024-07-12 17:14:04.966837] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.295 [2024-07-12 17:14:04.966925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.295 [2024-07-12 17:14:04.966956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.295 [2024-07-12 17:14:04.966972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.295 [2024-07-12 17:14:04.966985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.296 [2024-07-12 17:14:04.967015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.296 qpair failed and we were unable to recover it. 00:25:05.296 [2024-07-12 17:14:04.976892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.296 [2024-07-12 17:14:04.976983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.296 [2024-07-12 17:14:04.977010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.296 [2024-07-12 17:14:04.977025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.296 [2024-07-12 17:14:04.977052] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.296 [2024-07-12 17:14:04.977081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.296 qpair failed and we were unable to recover it. 
00:25:05.555 [2024-07-12 17:14:04.986957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.555 [2024-07-12 17:14:04.987053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.555 [2024-07-12 17:14:04.987079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.555 [2024-07-12 17:14:04.987094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.555 [2024-07-12 17:14:04.987106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.555 [2024-07-12 17:14:04.987136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.555 qpair failed and we were unable to recover it. 00:25:05.555 [2024-07-12 17:14:04.996912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.555 [2024-07-12 17:14:04.997011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.555 [2024-07-12 17:14:04.997037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.555 [2024-07-12 17:14:04.997052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.555 [2024-07-12 17:14:04.997064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.555 [2024-07-12 17:14:04.997109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.555 qpair failed and we were unable to recover it. 00:25:05.555 [2024-07-12 17:14:05.007000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.555 [2024-07-12 17:14:05.007101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.555 [2024-07-12 17:14:05.007126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.555 [2024-07-12 17:14:05.007141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.555 [2024-07-12 17:14:05.007154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.555 [2024-07-12 17:14:05.007190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.555 qpair failed and we were unable to recover it. 
00:25:05.555 [2024-07-12 17:14:05.017043] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.555 [2024-07-12 17:14:05.017177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.555 [2024-07-12 17:14:05.017203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.555 [2024-07-12 17:14:05.017217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.555 [2024-07-12 17:14:05.017229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.555 [2024-07-12 17:14:05.017259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.555 qpair failed and we were unable to recover it. 00:25:05.555 [2024-07-12 17:14:05.027128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.555 [2024-07-12 17:14:05.027272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.555 [2024-07-12 17:14:05.027298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.555 [2024-07-12 17:14:05.027313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.555 [2024-07-12 17:14:05.027325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.555 [2024-07-12 17:14:05.027365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.555 qpair failed and we were unable to recover it. 00:25:05.555 [2024-07-12 17:14:05.037083] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.555 [2024-07-12 17:14:05.037170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.555 [2024-07-12 17:14:05.037197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.555 [2024-07-12 17:14:05.037212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.555 [2024-07-12 17:14:05.037224] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.555 [2024-07-12 17:14:05.037252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.555 qpair failed and we were unable to recover it. 
00:25:05.555 [2024-07-12 17:14:05.047107] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.555 [2024-07-12 17:14:05.047200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.555 [2024-07-12 17:14:05.047226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.555 [2024-07-12 17:14:05.047241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.555 [2024-07-12 17:14:05.047253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.555 [2024-07-12 17:14:05.047282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.555 qpair failed and we were unable to recover it. 00:25:05.555 [2024-07-12 17:14:05.057134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.555 [2024-07-12 17:14:05.057226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.555 [2024-07-12 17:14:05.057251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.555 [2024-07-12 17:14:05.057266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.555 [2024-07-12 17:14:05.057278] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.555 [2024-07-12 17:14:05.057307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.555 qpair failed and we were unable to recover it. 00:25:05.555 [2024-07-12 17:14:05.067151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.555 [2024-07-12 17:14:05.067292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.555 [2024-07-12 17:14:05.067317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.555 [2024-07-12 17:14:05.067332] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.555 [2024-07-12 17:14:05.067356] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.555 [2024-07-12 17:14:05.067385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.555 qpair failed and we were unable to recover it. 
00:25:05.555 [2024-07-12 17:14:05.077168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.555 [2024-07-12 17:14:05.077273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.555 [2024-07-12 17:14:05.077299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.555 [2024-07-12 17:14:05.077313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.555 [2024-07-12 17:14:05.077325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.555 [2024-07-12 17:14:05.077354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.555 qpair failed and we were unable to recover it. 00:25:05.555 [2024-07-12 17:14:05.087244] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.555 [2024-07-12 17:14:05.087333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.555 [2024-07-12 17:14:05.087360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.555 [2024-07-12 17:14:05.087375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.555 [2024-07-12 17:14:05.087387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.555 [2024-07-12 17:14:05.087416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.555 qpair failed and we were unable to recover it. 00:25:05.555 [2024-07-12 17:14:05.097318] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.555 [2024-07-12 17:14:05.097413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.556 [2024-07-12 17:14:05.097438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.556 [2024-07-12 17:14:05.097454] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.556 [2024-07-12 17:14:05.097471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.556 [2024-07-12 17:14:05.097501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.556 qpair failed and we were unable to recover it. 
00:25:05.556 [2024-07-12 17:14:05.107252] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.556 [2024-07-12 17:14:05.107344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.556 [2024-07-12 17:14:05.107369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.556 [2024-07-12 17:14:05.107384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.556 [2024-07-12 17:14:05.107396] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.556 [2024-07-12 17:14:05.107425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.556 qpair failed and we were unable to recover it. 00:25:05.556 [2024-07-12 17:14:05.117312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.556 [2024-07-12 17:14:05.117403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.556 [2024-07-12 17:14:05.117429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.556 [2024-07-12 17:14:05.117444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.556 [2024-07-12 17:14:05.117456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.556 [2024-07-12 17:14:05.117485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.556 qpair failed and we were unable to recover it. 00:25:05.556 [2024-07-12 17:14:05.127341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.556 [2024-07-12 17:14:05.127467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.556 [2024-07-12 17:14:05.127492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.556 [2024-07-12 17:14:05.127507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.556 [2024-07-12 17:14:05.127519] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.556 [2024-07-12 17:14:05.127548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.556 qpair failed and we were unable to recover it. 
00:25:05.556 [2024-07-12 17:14:05.137348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.556 [2024-07-12 17:14:05.137436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.556 [2024-07-12 17:14:05.137460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.556 [2024-07-12 17:14:05.137474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.556 [2024-07-12 17:14:05.137487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.556 [2024-07-12 17:14:05.137515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.556 qpair failed and we were unable to recover it. 00:25:05.556 [2024-07-12 17:14:05.147370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.556 [2024-07-12 17:14:05.147459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.556 [2024-07-12 17:14:05.147484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.556 [2024-07-12 17:14:05.147499] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.556 [2024-07-12 17:14:05.147512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.556 [2024-07-12 17:14:05.147541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.556 qpair failed and we were unable to recover it. 00:25:05.556 [2024-07-12 17:14:05.157396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.556 [2024-07-12 17:14:05.157484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.556 [2024-07-12 17:14:05.157510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.556 [2024-07-12 17:14:05.157525] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.556 [2024-07-12 17:14:05.157537] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.556 [2024-07-12 17:14:05.157566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.556 qpair failed and we were unable to recover it. 
00:25:05.556 [2024-07-12 17:14:05.167446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.556 [2024-07-12 17:14:05.167541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.556 [2024-07-12 17:14:05.167566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.556 [2024-07-12 17:14:05.167581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.556 [2024-07-12 17:14:05.167593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.556 [2024-07-12 17:14:05.167622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.556 qpair failed and we were unable to recover it. 00:25:05.556 [2024-07-12 17:14:05.177465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.556 [2024-07-12 17:14:05.177553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.556 [2024-07-12 17:14:05.177579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.556 [2024-07-12 17:14:05.177593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.556 [2024-07-12 17:14:05.177605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.556 [2024-07-12 17:14:05.177634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.556 qpair failed and we were unable to recover it. 00:25:05.556 [2024-07-12 17:14:05.187560] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.556 [2024-07-12 17:14:05.187653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.556 [2024-07-12 17:14:05.187677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.556 [2024-07-12 17:14:05.187697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.556 [2024-07-12 17:14:05.187709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.556 [2024-07-12 17:14:05.187761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.556 qpair failed and we were unable to recover it. 
00:25:05.556 [2024-07-12 17:14:05.197585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.556 [2024-07-12 17:14:05.197676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.556 [2024-07-12 17:14:05.197701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.556 [2024-07-12 17:14:05.197731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.556 [2024-07-12 17:14:05.197753] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.556 [2024-07-12 17:14:05.197785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.556 qpair failed and we were unable to recover it. 00:25:05.556 [2024-07-12 17:14:05.207546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.556 [2024-07-12 17:14:05.207634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.556 [2024-07-12 17:14:05.207659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.556 [2024-07-12 17:14:05.207674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.556 [2024-07-12 17:14:05.207687] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.556 [2024-07-12 17:14:05.207716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.556 qpair failed and we were unable to recover it. 00:25:05.556 [2024-07-12 17:14:05.217543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.556 [2024-07-12 17:14:05.217629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.556 [2024-07-12 17:14:05.217653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.556 [2024-07-12 17:14:05.217668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.556 [2024-07-12 17:14:05.217680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.556 [2024-07-12 17:14:05.217709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.556 qpair failed and we were unable to recover it. 
00:25:05.556 [2024-07-12 17:14:05.227631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.556 [2024-07-12 17:14:05.227749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.556 [2024-07-12 17:14:05.227776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.556 [2024-07-12 17:14:05.227792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.556 [2024-07-12 17:14:05.227804] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.556 [2024-07-12 17:14:05.227835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.556 qpair failed and we were unable to recover it. 00:25:05.556 [2024-07-12 17:14:05.237626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.556 [2024-07-12 17:14:05.237751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.556 [2024-07-12 17:14:05.237776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.557 [2024-07-12 17:14:05.237792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.557 [2024-07-12 17:14:05.237805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.557 [2024-07-12 17:14:05.237835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.557 qpair failed and we were unable to recover it. 00:25:05.557 [2024-07-12 17:14:05.247626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.557 [2024-07-12 17:14:05.247729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.557 [2024-07-12 17:14:05.247763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.557 [2024-07-12 17:14:05.247779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.557 [2024-07-12 17:14:05.247792] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.557 [2024-07-12 17:14:05.247822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.557 qpair failed and we were unable to recover it. 
00:25:05.815 [2024-07-12 17:14:05.257664] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.815 [2024-07-12 17:14:05.257772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.815 [2024-07-12 17:14:05.257798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.815 [2024-07-12 17:14:05.257814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.815 [2024-07-12 17:14:05.257827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.815 [2024-07-12 17:14:05.257857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.815 qpair failed and we were unable to recover it. 00:25:05.815 [2024-07-12 17:14:05.267692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.815 [2024-07-12 17:14:05.267829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.815 [2024-07-12 17:14:05.267857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.815 [2024-07-12 17:14:05.267872] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.815 [2024-07-12 17:14:05.267885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.815 [2024-07-12 17:14:05.267915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.815 qpair failed and we were unable to recover it. 00:25:05.815 [2024-07-12 17:14:05.277713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.815 [2024-07-12 17:14:05.277836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.815 [2024-07-12 17:14:05.277868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.815 [2024-07-12 17:14:05.277884] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.815 [2024-07-12 17:14:05.277897] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.815 [2024-07-12 17:14:05.277928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.815 qpair failed and we were unable to recover it. 
00:25:05.815 [2024-07-12 17:14:05.287760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.815 [2024-07-12 17:14:05.287850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.815 [2024-07-12 17:14:05.287882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.815 [2024-07-12 17:14:05.287898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.815 [2024-07-12 17:14:05.287911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.816 [2024-07-12 17:14:05.287941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.816 qpair failed and we were unable to recover it. 00:25:05.816 [2024-07-12 17:14:05.297786] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.816 [2024-07-12 17:14:05.297874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.816 [2024-07-12 17:14:05.297899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.816 [2024-07-12 17:14:05.297914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.816 [2024-07-12 17:14:05.297927] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.816 [2024-07-12 17:14:05.297958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.816 qpair failed and we were unable to recover it. 00:25:05.816 [2024-07-12 17:14:05.307889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.816 [2024-07-12 17:14:05.307988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.816 [2024-07-12 17:14:05.308015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.816 [2024-07-12 17:14:05.308030] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.816 [2024-07-12 17:14:05.308058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.816 [2024-07-12 17:14:05.308088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.816 qpair failed and we were unable to recover it. 
00:25:05.816 [2024-07-12 17:14:05.317848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.816 [2024-07-12 17:14:05.317965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.816 [2024-07-12 17:14:05.317992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.816 [2024-07-12 17:14:05.318007] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.816 [2024-07-12 17:14:05.318020] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.816 [2024-07-12 17:14:05.318071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.816 qpair failed and we were unable to recover it. 00:25:05.816 [2024-07-12 17:14:05.327891] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.816 [2024-07-12 17:14:05.327981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.816 [2024-07-12 17:14:05.328005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.816 [2024-07-12 17:14:05.328035] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.816 [2024-07-12 17:14:05.328048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.816 [2024-07-12 17:14:05.328077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.816 qpair failed and we were unable to recover it. 00:25:05.816 [2024-07-12 17:14:05.337891] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.816 [2024-07-12 17:14:05.337979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.816 [2024-07-12 17:14:05.338004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.816 [2024-07-12 17:14:05.338034] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.816 [2024-07-12 17:14:05.338048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.816 [2024-07-12 17:14:05.338078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.816 qpair failed and we were unable to recover it. 
00:25:05.816 [2024-07-12 17:14:05.347970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.816 [2024-07-12 17:14:05.348113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.816 [2024-07-12 17:14:05.348139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.816 [2024-07-12 17:14:05.348155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.816 [2024-07-12 17:14:05.348167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.816 [2024-07-12 17:14:05.348196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.816 qpair failed and we were unable to recover it. 00:25:05.816 [2024-07-12 17:14:05.357996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.816 [2024-07-12 17:14:05.358089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.816 [2024-07-12 17:14:05.358130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.816 [2024-07-12 17:14:05.358145] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.816 [2024-07-12 17:14:05.358157] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.816 [2024-07-12 17:14:05.358187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.816 qpair failed and we were unable to recover it. 00:25:05.816 [2024-07-12 17:14:05.368069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.816 [2024-07-12 17:14:05.368195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.816 [2024-07-12 17:14:05.368226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.816 [2024-07-12 17:14:05.368241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.816 [2024-07-12 17:14:05.368254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.816 [2024-07-12 17:14:05.368283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.816 qpair failed and we were unable to recover it. 
00:25:05.816 [2024-07-12 17:14:05.378013] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.816 [2024-07-12 17:14:05.378124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.816 [2024-07-12 17:14:05.378149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.816 [2024-07-12 17:14:05.378164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.816 [2024-07-12 17:14:05.378176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.816 [2024-07-12 17:14:05.378205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.816 qpair failed and we were unable to recover it. 00:25:05.816 [2024-07-12 17:14:05.388081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.816 [2024-07-12 17:14:05.388200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.816 [2024-07-12 17:14:05.388225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.816 [2024-07-12 17:14:05.388239] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.816 [2024-07-12 17:14:05.388252] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.816 [2024-07-12 17:14:05.388282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.816 qpair failed and we were unable to recover it. 00:25:05.816 [2024-07-12 17:14:05.398216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.816 [2024-07-12 17:14:05.398326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.816 [2024-07-12 17:14:05.398351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.816 [2024-07-12 17:14:05.398367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.816 [2024-07-12 17:14:05.398379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.816 [2024-07-12 17:14:05.398407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.816 qpair failed and we were unable to recover it. 
00:25:05.816 [2024-07-12 17:14:05.408159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.816 [2024-07-12 17:14:05.408277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.816 [2024-07-12 17:14:05.408303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.816 [2024-07-12 17:14:05.408318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.816 [2024-07-12 17:14:05.408330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.816 [2024-07-12 17:14:05.408364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.816 qpair failed and we were unable to recover it. 00:25:05.816 [2024-07-12 17:14:05.418314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.816 [2024-07-12 17:14:05.418409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.816 [2024-07-12 17:14:05.418434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.816 [2024-07-12 17:14:05.418449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.816 [2024-07-12 17:14:05.418461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.816 [2024-07-12 17:14:05.418490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.816 qpair failed and we were unable to recover it. 00:25:05.816 [2024-07-12 17:14:05.428249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.816 [2024-07-12 17:14:05.428343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.816 [2024-07-12 17:14:05.428367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.816 [2024-07-12 17:14:05.428381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.816 [2024-07-12 17:14:05.428393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.816 [2024-07-12 17:14:05.428422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.816 qpair failed and we were unable to recover it. 
00:25:05.816 [2024-07-12 17:14:05.438240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.817 [2024-07-12 17:14:05.438334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.817 [2024-07-12 17:14:05.438360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.817 [2024-07-12 17:14:05.438375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.817 [2024-07-12 17:14:05.438387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.817 [2024-07-12 17:14:05.438416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.817 qpair failed and we were unable to recover it. 00:25:05.817 [2024-07-12 17:14:05.448246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.817 [2024-07-12 17:14:05.448364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.817 [2024-07-12 17:14:05.448390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.817 [2024-07-12 17:14:05.448405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.817 [2024-07-12 17:14:05.448418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.817 [2024-07-12 17:14:05.448446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.817 qpair failed and we were unable to recover it. 00:25:05.817 [2024-07-12 17:14:05.458280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.817 [2024-07-12 17:14:05.458374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.817 [2024-07-12 17:14:05.458403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.817 [2024-07-12 17:14:05.458418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.817 [2024-07-12 17:14:05.458431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.817 [2024-07-12 17:14:05.458460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.817 qpair failed and we were unable to recover it. 
00:25:05.817 [2024-07-12 17:14:05.468317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.817 [2024-07-12 17:14:05.468420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.817 [2024-07-12 17:14:05.468446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.817 [2024-07-12 17:14:05.468460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.817 [2024-07-12 17:14:05.468474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.817 [2024-07-12 17:14:05.468503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.817 qpair failed and we were unable to recover it. 00:25:05.817 [2024-07-12 17:14:05.478331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.817 [2024-07-12 17:14:05.478432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.817 [2024-07-12 17:14:05.478458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.817 [2024-07-12 17:14:05.478473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.817 [2024-07-12 17:14:05.478485] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.817 [2024-07-12 17:14:05.478514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.817 qpair failed and we were unable to recover it. 00:25:05.817 [2024-07-12 17:14:05.488353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.817 [2024-07-12 17:14:05.488448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.817 [2024-07-12 17:14:05.488472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.817 [2024-07-12 17:14:05.488488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.817 [2024-07-12 17:14:05.488500] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.817 [2024-07-12 17:14:05.488528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.817 qpair failed and we were unable to recover it. 
00:25:05.817 [2024-07-12 17:14:05.498386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.817 [2024-07-12 17:14:05.498472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.817 [2024-07-12 17:14:05.498496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.817 [2024-07-12 17:14:05.498511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.817 [2024-07-12 17:14:05.498528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.817 [2024-07-12 17:14:05.498558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.817 qpair failed and we were unable to recover it. 00:25:05.817 [2024-07-12 17:14:05.508448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.817 [2024-07-12 17:14:05.508565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.817 [2024-07-12 17:14:05.508591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.817 [2024-07-12 17:14:05.508605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.817 [2024-07-12 17:14:05.508618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:05.817 [2024-07-12 17:14:05.508648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.817 qpair failed and we were unable to recover it. 00:25:06.076 [2024-07-12 17:14:05.518448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.076 [2024-07-12 17:14:05.518532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.076 [2024-07-12 17:14:05.518559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.076 [2024-07-12 17:14:05.518574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.076 [2024-07-12 17:14:05.518586] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:06.076 [2024-07-12 17:14:05.518615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.076 qpair failed and we were unable to recover it. 
00:25:06.076 [2024-07-12 17:14:05.528529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.076 [2024-07-12 17:14:05.528623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.076 [2024-07-12 17:14:05.528653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.076 [2024-07-12 17:14:05.528669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.076 [2024-07-12 17:14:05.528681] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.076 [2024-07-12 17:14:05.528712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.076 qpair failed and we were unable to recover it. 00:25:06.076 [2024-07-12 17:14:05.538556] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.076 [2024-07-12 17:14:05.538642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.076 [2024-07-12 17:14:05.538668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.076 [2024-07-12 17:14:05.538683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.076 [2024-07-12 17:14:05.538695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.076 [2024-07-12 17:14:05.538748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.076 qpair failed and we were unable to recover it. 00:25:06.076 [2024-07-12 17:14:05.548548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.076 [2024-07-12 17:14:05.548645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.076 [2024-07-12 17:14:05.548671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.076 [2024-07-12 17:14:05.548686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.076 [2024-07-12 17:14:05.548698] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.076 [2024-07-12 17:14:05.548728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.076 qpair failed and we were unable to recover it. 
00:25:06.076 [2024-07-12 17:14:05.558554] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.076 [2024-07-12 17:14:05.558641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.076 [2024-07-12 17:14:05.558666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.076 [2024-07-12 17:14:05.558680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.076 [2024-07-12 17:14:05.558692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.076 [2024-07-12 17:14:05.558722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.076 qpair failed and we were unable to recover it. 00:25:06.076 [2024-07-12 17:14:05.568599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.076 [2024-07-12 17:14:05.568744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.076 [2024-07-12 17:14:05.568785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.076 [2024-07-12 17:14:05.568801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.076 [2024-07-12 17:14:05.568814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.076 [2024-07-12 17:14:05.568844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.076 qpair failed and we were unable to recover it. 00:25:06.076 [2024-07-12 17:14:05.578601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.076 [2024-07-12 17:14:05.578688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.076 [2024-07-12 17:14:05.578714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.076 [2024-07-12 17:14:05.578728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.076 [2024-07-12 17:14:05.578768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.076 [2024-07-12 17:14:05.578801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.076 qpair failed and we were unable to recover it. 
00:25:06.076 [2024-07-12 17:14:05.588664] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.076 [2024-07-12 17:14:05.588782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.076 [2024-07-12 17:14:05.588808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.076 [2024-07-12 17:14:05.588829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.076 [2024-07-12 17:14:05.588852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.076 [2024-07-12 17:14:05.588883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.076 qpair failed and we were unable to recover it. 00:25:06.076 [2024-07-12 17:14:05.598653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.076 [2024-07-12 17:14:05.598785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.076 [2024-07-12 17:14:05.598811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.077 [2024-07-12 17:14:05.598826] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.077 [2024-07-12 17:14:05.598839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.077 [2024-07-12 17:14:05.598869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.077 qpair failed and we were unable to recover it. 00:25:06.077 [2024-07-12 17:14:05.608771] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.077 [2024-07-12 17:14:05.608866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.077 [2024-07-12 17:14:05.608892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.077 [2024-07-12 17:14:05.608907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.077 [2024-07-12 17:14:05.608921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.077 [2024-07-12 17:14:05.608952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.077 qpair failed and we were unable to recover it. 
00:25:06.077 [2024-07-12 17:14:05.618710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.077 [2024-07-12 17:14:05.618860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.077 [2024-07-12 17:14:05.618887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.077 [2024-07-12 17:14:05.618902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.077 [2024-07-12 17:14:05.618915] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.077 [2024-07-12 17:14:05.618946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.077 qpair failed and we were unable to recover it. 00:25:06.077 [2024-07-12 17:14:05.628774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.077 [2024-07-12 17:14:05.628873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.077 [2024-07-12 17:14:05.628899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.077 [2024-07-12 17:14:05.628915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.077 [2024-07-12 17:14:05.628927] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.077 [2024-07-12 17:14:05.628958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.077 qpair failed and we were unable to recover it. 00:25:06.077 [2024-07-12 17:14:05.638785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.077 [2024-07-12 17:14:05.638880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.077 [2024-07-12 17:14:05.638905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.077 [2024-07-12 17:14:05.638920] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.077 [2024-07-12 17:14:05.638934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.077 [2024-07-12 17:14:05.638965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.077 qpair failed and we were unable to recover it. 
00:25:06.077 [2024-07-12 17:14:05.648819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.077 [2024-07-12 17:14:05.648913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.077 [2024-07-12 17:14:05.648939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.077 [2024-07-12 17:14:05.648954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.077 [2024-07-12 17:14:05.648966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.077 [2024-07-12 17:14:05.648998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.077 qpair failed and we were unable to recover it. 00:25:06.077 [2024-07-12 17:14:05.658861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.077 [2024-07-12 17:14:05.658949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.077 [2024-07-12 17:14:05.658975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.077 [2024-07-12 17:14:05.658991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.077 [2024-07-12 17:14:05.659004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.077 [2024-07-12 17:14:05.659049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.077 qpair failed and we were unable to recover it. 00:25:06.077 [2024-07-12 17:14:05.668886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.077 [2024-07-12 17:14:05.668981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.077 [2024-07-12 17:14:05.669006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.077 [2024-07-12 17:14:05.669036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.077 [2024-07-12 17:14:05.669049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.077 [2024-07-12 17:14:05.669079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.077 qpair failed and we were unable to recover it. 
00:25:06.077 [2024-07-12 17:14:05.678920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.077 [2024-07-12 17:14:05.679035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.077 [2024-07-12 17:14:05.679059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.077 [2024-07-12 17:14:05.679079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.077 [2024-07-12 17:14:05.679092] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.077 [2024-07-12 17:14:05.679121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.077 qpair failed and we were unable to recover it. 00:25:06.077 [2024-07-12 17:14:05.688936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.077 [2024-07-12 17:14:05.689043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.077 [2024-07-12 17:14:05.689068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.077 [2024-07-12 17:14:05.689083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.077 [2024-07-12 17:14:05.689096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.077 [2024-07-12 17:14:05.689125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.077 qpair failed and we were unable to recover it. 00:25:06.077 [2024-07-12 17:14:05.699011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.077 [2024-07-12 17:14:05.699160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.077 [2024-07-12 17:14:05.699187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.077 [2024-07-12 17:14:05.699210] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.077 [2024-07-12 17:14:05.699223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.077 [2024-07-12 17:14:05.699253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.077 qpair failed and we were unable to recover it. 
00:25:06.077 [2024-07-12 17:14:05.709007] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.077 [2024-07-12 17:14:05.709119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.077 [2024-07-12 17:14:05.709143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.077 [2024-07-12 17:14:05.709157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.077 [2024-07-12 17:14:05.709186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.077 [2024-07-12 17:14:05.709216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.077 qpair failed and we were unable to recover it. 00:25:06.077 [2024-07-12 17:14:05.719060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.077 [2024-07-12 17:14:05.719156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.077 [2024-07-12 17:14:05.719180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.077 [2024-07-12 17:14:05.719195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.077 [2024-07-12 17:14:05.719207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.077 [2024-07-12 17:14:05.719237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.077 qpair failed and we were unable to recover it. 00:25:06.077 [2024-07-12 17:14:05.729101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.077 [2024-07-12 17:14:05.729200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.077 [2024-07-12 17:14:05.729225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.077 [2024-07-12 17:14:05.729239] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.077 [2024-07-12 17:14:05.729252] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.077 [2024-07-12 17:14:05.729281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.077 qpair failed and we were unable to recover it. 
00:25:06.077 [2024-07-12 17:14:05.739079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.077 [2024-07-12 17:14:05.739211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.077 [2024-07-12 17:14:05.739238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.077 [2024-07-12 17:14:05.739254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.077 [2024-07-12 17:14:05.739266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.077 [2024-07-12 17:14:05.739306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.078 qpair failed and we were unable to recover it. 00:25:06.078 [2024-07-12 17:14:05.749144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.078 [2024-07-12 17:14:05.749237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.078 [2024-07-12 17:14:05.749261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.078 [2024-07-12 17:14:05.749276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.078 [2024-07-12 17:14:05.749289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.078 [2024-07-12 17:14:05.749319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.078 qpair failed and we were unable to recover it. 00:25:06.078 [2024-07-12 17:14:05.759099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.078 [2024-07-12 17:14:05.759192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.078 [2024-07-12 17:14:05.759217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.078 [2024-07-12 17:14:05.759233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.078 [2024-07-12 17:14:05.759246] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.078 [2024-07-12 17:14:05.759275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.078 qpair failed and we were unable to recover it. 
00:25:06.078 [2024-07-12 17:14:05.769115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.078 [2024-07-12 17:14:05.769206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.078 [2024-07-12 17:14:05.769240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.078 [2024-07-12 17:14:05.769257] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.078 [2024-07-12 17:14:05.769270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.078 [2024-07-12 17:14:05.769301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.078 qpair failed and we were unable to recover it. 00:25:06.336 [2024-07-12 17:14:05.779233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.336 [2024-07-12 17:14:05.779321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.336 [2024-07-12 17:14:05.779346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.336 [2024-07-12 17:14:05.779361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.336 [2024-07-12 17:14:05.779374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.336 [2024-07-12 17:14:05.779403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.336 qpair failed and we were unable to recover it. 00:25:06.336 [2024-07-12 17:14:05.789229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.337 [2024-07-12 17:14:05.789331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.337 [2024-07-12 17:14:05.789355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.337 [2024-07-12 17:14:05.789370] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.337 [2024-07-12 17:14:05.789382] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.337 [2024-07-12 17:14:05.789412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.337 qpair failed and we were unable to recover it. 
00:25:06.337 [2024-07-12 17:14:05.799309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.337 [2024-07-12 17:14:05.799443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.337 [2024-07-12 17:14:05.799469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.337 [2024-07-12 17:14:05.799484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.337 [2024-07-12 17:14:05.799496] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.337 [2024-07-12 17:14:05.799525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.337 qpair failed and we were unable to recover it. 00:25:06.337 [2024-07-12 17:14:05.809344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.337 [2024-07-12 17:14:05.809451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.337 [2024-07-12 17:14:05.809476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.337 [2024-07-12 17:14:05.809492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.337 [2024-07-12 17:14:05.809504] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.337 [2024-07-12 17:14:05.809542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.337 qpair failed and we were unable to recover it. 00:25:06.337 [2024-07-12 17:14:05.819285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.337 [2024-07-12 17:14:05.819398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.337 [2024-07-12 17:14:05.819422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.337 [2024-07-12 17:14:05.819436] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.337 [2024-07-12 17:14:05.819449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.337 [2024-07-12 17:14:05.819479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.337 qpair failed and we were unable to recover it. 
00:25:06.337 [2024-07-12 17:14:05.829306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.337 [2024-07-12 17:14:05.829428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.337 [2024-07-12 17:14:05.829454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.337 [2024-07-12 17:14:05.829470] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.337 [2024-07-12 17:14:05.829482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.337 [2024-07-12 17:14:05.829511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.337 qpair failed and we were unable to recover it. 00:25:06.337 [2024-07-12 17:14:05.839400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.337 [2024-07-12 17:14:05.839488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.337 [2024-07-12 17:14:05.839512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.337 [2024-07-12 17:14:05.839527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.337 [2024-07-12 17:14:05.839540] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.337 [2024-07-12 17:14:05.839568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.337 qpair failed and we were unable to recover it. 00:25:06.337 [2024-07-12 17:14:05.849372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.337 [2024-07-12 17:14:05.849458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.337 [2024-07-12 17:14:05.849482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.337 [2024-07-12 17:14:05.849497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.337 [2024-07-12 17:14:05.849510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.337 [2024-07-12 17:14:05.849539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.337 qpair failed and we were unable to recover it. 
00:25:06.337 [2024-07-12 17:14:05.859424] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.337 [2024-07-12 17:14:05.859512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.337 [2024-07-12 17:14:05.859542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.337 [2024-07-12 17:14:05.859558] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.337 [2024-07-12 17:14:05.859570] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.337 [2024-07-12 17:14:05.859601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.337 qpair failed and we were unable to recover it. 00:25:06.337 [2024-07-12 17:14:05.869460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.337 [2024-07-12 17:14:05.869551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.337 [2024-07-12 17:14:05.869575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.337 [2024-07-12 17:14:05.869590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.337 [2024-07-12 17:14:05.869602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.337 [2024-07-12 17:14:05.869631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.337 qpair failed and we were unable to recover it. 00:25:06.337 [2024-07-12 17:14:05.879461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.337 [2024-07-12 17:14:05.879554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.337 [2024-07-12 17:14:05.879578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.337 [2024-07-12 17:14:05.879592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.337 [2024-07-12 17:14:05.879605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.337 [2024-07-12 17:14:05.879633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.337 qpair failed and we were unable to recover it. 
00:25:06.337 [2024-07-12 17:14:05.889472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.337 [2024-07-12 17:14:05.889559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.337 [2024-07-12 17:14:05.889583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.337 [2024-07-12 17:14:05.889597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.337 [2024-07-12 17:14:05.889610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.337 [2024-07-12 17:14:05.889640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.337 qpair failed and we were unable to recover it. 00:25:06.337 [2024-07-12 17:14:05.899487] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.337 [2024-07-12 17:14:05.899610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.337 [2024-07-12 17:14:05.899636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.338 [2024-07-12 17:14:05.899651] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.338 [2024-07-12 17:14:05.899668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.338 [2024-07-12 17:14:05.899698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.338 qpair failed and we were unable to recover it. 00:25:06.338 [2024-07-12 17:14:05.909542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.338 [2024-07-12 17:14:05.909634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.338 [2024-07-12 17:14:05.909659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.338 [2024-07-12 17:14:05.909674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.338 [2024-07-12 17:14:05.909687] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.338 [2024-07-12 17:14:05.909717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.338 qpair failed and we were unable to recover it. 
00:25:06.338 [2024-07-12 17:14:05.919553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.338 [2024-07-12 17:14:05.919643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.338 [2024-07-12 17:14:05.919667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.338 [2024-07-12 17:14:05.919681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.338 [2024-07-12 17:14:05.919694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.338 [2024-07-12 17:14:05.919745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.338 qpair failed and we were unable to recover it. 00:25:06.338 [2024-07-12 17:14:05.929658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.338 [2024-07-12 17:14:05.929767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.338 [2024-07-12 17:14:05.929792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.338 [2024-07-12 17:14:05.929807] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.338 [2024-07-12 17:14:05.929820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.338 [2024-07-12 17:14:05.929850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.338 qpair failed and we were unable to recover it. 00:25:06.338 [2024-07-12 17:14:05.939676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.338 [2024-07-12 17:14:05.939807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.338 [2024-07-12 17:14:05.939833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.338 [2024-07-12 17:14:05.939849] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.338 [2024-07-12 17:14:05.939861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.338 [2024-07-12 17:14:05.939892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.338 qpair failed and we were unable to recover it. 
00:25:06.338 [2024-07-12 17:14:05.949763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.338 [2024-07-12 17:14:05.949867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.338 [2024-07-12 17:14:05.949892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.338 [2024-07-12 17:14:05.949917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.338 [2024-07-12 17:14:05.949930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.338 [2024-07-12 17:14:05.949961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.338 qpair failed and we were unable to recover it. 00:25:06.338 [2024-07-12 17:14:05.959723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.338 [2024-07-12 17:14:05.959823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.338 [2024-07-12 17:14:05.959847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.338 [2024-07-12 17:14:05.959862] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.338 [2024-07-12 17:14:05.959875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.338 [2024-07-12 17:14:05.959905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.338 qpair failed and we were unable to recover it. 00:25:06.338 [2024-07-12 17:14:05.969714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.338 [2024-07-12 17:14:05.969828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.338 [2024-07-12 17:14:05.969853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.338 [2024-07-12 17:14:05.969868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.338 [2024-07-12 17:14:05.969880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.338 [2024-07-12 17:14:05.969911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.338 qpair failed and we were unable to recover it. 
00:25:06.338 [2024-07-12 17:14:05.979835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.338 [2024-07-12 17:14:05.979978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.338 [2024-07-12 17:14:05.980005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.338 [2024-07-12 17:14:05.980020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.338 [2024-07-12 17:14:05.980054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.338 [2024-07-12 17:14:05.980085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.338 qpair failed and we were unable to recover it. 00:25:06.338 [2024-07-12 17:14:05.989817] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.338 [2024-07-12 17:14:05.989943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.338 [2024-07-12 17:14:05.989970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.338 [2024-07-12 17:14:05.989990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.338 [2024-07-12 17:14:05.990004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.338 [2024-07-12 17:14:05.990045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.338 qpair failed and we were unable to recover it. 00:25:06.338 [2024-07-12 17:14:05.999831] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.338 [2024-07-12 17:14:05.999933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.338 [2024-07-12 17:14:05.999960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.338 [2024-07-12 17:14:05.999975] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.338 [2024-07-12 17:14:05.999988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.338 [2024-07-12 17:14:06.000044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.338 qpair failed and we were unable to recover it. 
00:25:06.338 [2024-07-12 17:14:06.009858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.338 [2024-07-12 17:14:06.009954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.338 [2024-07-12 17:14:06.009979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.338 [2024-07-12 17:14:06.009994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.338 [2024-07-12 17:14:06.010007] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.338 [2024-07-12 17:14:06.010063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.338 qpair failed and we were unable to recover it. 00:25:06.338 [2024-07-12 17:14:06.019886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.338 [2024-07-12 17:14:06.019981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.338 [2024-07-12 17:14:06.020006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.338 [2024-07-12 17:14:06.020034] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.338 [2024-07-12 17:14:06.020047] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.338 [2024-07-12 17:14:06.020076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.338 qpair failed and we were unable to recover it. 00:25:06.338 [2024-07-12 17:14:06.029927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.596 [2024-07-12 17:14:06.030024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.596 [2024-07-12 17:14:06.030049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.596 [2024-07-12 17:14:06.030064] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.596 [2024-07-12 17:14:06.030077] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.596 [2024-07-12 17:14:06.030109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.596 qpair failed and we were unable to recover it. 
00:25:06.596 [2024-07-12 17:14:06.039998] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.596 [2024-07-12 17:14:06.040162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.597 [2024-07-12 17:14:06.040187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.597 [2024-07-12 17:14:06.040201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.597 [2024-07-12 17:14:06.040213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.597 [2024-07-12 17:14:06.040244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.597 qpair failed and we were unable to recover it. 00:25:06.597 [2024-07-12 17:14:06.049989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.597 [2024-07-12 17:14:06.050090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.597 [2024-07-12 17:14:06.050114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.597 [2024-07-12 17:14:06.050128] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.597 [2024-07-12 17:14:06.050140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.597 [2024-07-12 17:14:06.050171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.597 qpair failed and we were unable to recover it. 00:25:06.597 [2024-07-12 17:14:06.060006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.597 [2024-07-12 17:14:06.060108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.597 [2024-07-12 17:14:06.060133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.597 [2024-07-12 17:14:06.060148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.597 [2024-07-12 17:14:06.060160] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.597 [2024-07-12 17:14:06.060190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.597 qpair failed and we were unable to recover it. 
00:25:06.597 [2024-07-12 17:14:06.070074] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.597 [2024-07-12 17:14:06.070169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.597 [2024-07-12 17:14:06.070193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.597 [2024-07-12 17:14:06.070208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.597 [2024-07-12 17:14:06.070220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.597 [2024-07-12 17:14:06.070249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.597 qpair failed and we were unable to recover it. 00:25:06.597 [2024-07-12 17:14:06.080071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.597 [2024-07-12 17:14:06.080162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.597 [2024-07-12 17:14:06.080186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.597 [2024-07-12 17:14:06.080206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.597 [2024-07-12 17:14:06.080219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.597 [2024-07-12 17:14:06.080248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.597 qpair failed and we were unable to recover it. 00:25:06.597 [2024-07-12 17:14:06.090093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.597 [2024-07-12 17:14:06.090200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.597 [2024-07-12 17:14:06.090235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.597 [2024-07-12 17:14:06.090250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.597 [2024-07-12 17:14:06.090263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.597 [2024-07-12 17:14:06.090293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.597 qpair failed and we were unable to recover it. 
00:25:06.597 [2024-07-12 17:14:06.100151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.597 [2024-07-12 17:14:06.100246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.597 [2024-07-12 17:14:06.100270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.597 [2024-07-12 17:14:06.100284] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.597 [2024-07-12 17:14:06.100296] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.597 [2024-07-12 17:14:06.100326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.597 qpair failed and we were unable to recover it. 00:25:06.597 [2024-07-12 17:14:06.110149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.597 [2024-07-12 17:14:06.110244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.597 [2024-07-12 17:14:06.110270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.597 [2024-07-12 17:14:06.110284] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.597 [2024-07-12 17:14:06.110297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.597 [2024-07-12 17:14:06.110326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.597 qpair failed and we were unable to recover it. 00:25:06.597 [2024-07-12 17:14:06.120216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.597 [2024-07-12 17:14:06.120306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.597 [2024-07-12 17:14:06.120331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.597 [2024-07-12 17:14:06.120345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.597 [2024-07-12 17:14:06.120357] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.597 [2024-07-12 17:14:06.120388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.597 qpair failed and we were unable to recover it. 
00:25:06.597 [2024-07-12 17:14:06.130221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.597 [2024-07-12 17:14:06.130312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.597 [2024-07-12 17:14:06.130337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.597 [2024-07-12 17:14:06.130351] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.597 [2024-07-12 17:14:06.130363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.597 [2024-07-12 17:14:06.130393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.597 qpair failed and we were unable to recover it. 00:25:06.597 [2024-07-12 17:14:06.140241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.597 [2024-07-12 17:14:06.140339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.597 [2024-07-12 17:14:06.140363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.597 [2024-07-12 17:14:06.140377] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.597 [2024-07-12 17:14:06.140390] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.597 [2024-07-12 17:14:06.140420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.597 qpair failed and we were unable to recover it. 00:25:06.597 [2024-07-12 17:14:06.150292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.597 [2024-07-12 17:14:06.150401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.597 [2024-07-12 17:14:06.150426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.597 [2024-07-12 17:14:06.150440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.597 [2024-07-12 17:14:06.150453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.597 [2024-07-12 17:14:06.150483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.597 qpair failed and we were unable to recover it. 
00:25:06.597 [2024-07-12 17:14:06.160335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.597 [2024-07-12 17:14:06.160444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.597 [2024-07-12 17:14:06.160469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.597 [2024-07-12 17:14:06.160483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.597 [2024-07-12 17:14:06.160496] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.597 [2024-07-12 17:14:06.160525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.597 qpair failed and we were unable to recover it. 00:25:06.597 [2024-07-12 17:14:06.170330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.597 [2024-07-12 17:14:06.170463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.597 [2024-07-12 17:14:06.170494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.597 [2024-07-12 17:14:06.170510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.597 [2024-07-12 17:14:06.170522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.597 [2024-07-12 17:14:06.170551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.597 qpair failed and we were unable to recover it. 00:25:06.597 [2024-07-12 17:14:06.180346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.597 [2024-07-12 17:14:06.180438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.597 [2024-07-12 17:14:06.180462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.597 [2024-07-12 17:14:06.180476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.597 [2024-07-12 17:14:06.180489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.597 [2024-07-12 17:14:06.180518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.597 qpair failed and we were unable to recover it. 
00:25:06.597 [2024-07-12 17:14:06.190395] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.597 [2024-07-12 17:14:06.190495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.597 [2024-07-12 17:14:06.190520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.597 [2024-07-12 17:14:06.190535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.597 [2024-07-12 17:14:06.190547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.597 [2024-07-12 17:14:06.190579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.597 qpair failed and we were unable to recover it. 00:25:06.597 [2024-07-12 17:14:06.200412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.597 [2024-07-12 17:14:06.200520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.597 [2024-07-12 17:14:06.200545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.597 [2024-07-12 17:14:06.200560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.597 [2024-07-12 17:14:06.200572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.597 [2024-07-12 17:14:06.200602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.597 qpair failed and we were unable to recover it. 00:25:06.597 [2024-07-12 17:14:06.210489] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.597 [2024-07-12 17:14:06.210579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.597 [2024-07-12 17:14:06.210604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.597 [2024-07-12 17:14:06.210619] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.597 [2024-07-12 17:14:06.210631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.597 [2024-07-12 17:14:06.210666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.597 qpair failed and we were unable to recover it. 
00:25:06.597 [2024-07-12 17:14:06.220418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.597 [2024-07-12 17:14:06.220503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.597 [2024-07-12 17:14:06.220529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.597 [2024-07-12 17:14:06.220544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.597 [2024-07-12 17:14:06.220556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.597 [2024-07-12 17:14:06.220587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.597 qpair failed and we were unable to recover it. 00:25:06.597 [2024-07-12 17:14:06.230535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.597 [2024-07-12 17:14:06.230629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.597 [2024-07-12 17:14:06.230653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.597 [2024-07-12 17:14:06.230668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.597 [2024-07-12 17:14:06.230679] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.597 [2024-07-12 17:14:06.230708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.597 qpair failed and we were unable to recover it. 00:25:06.597 [2024-07-12 17:14:06.240487] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.597 [2024-07-12 17:14:06.240580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.598 [2024-07-12 17:14:06.240604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.598 [2024-07-12 17:14:06.240619] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.598 [2024-07-12 17:14:06.240630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.598 [2024-07-12 17:14:06.240661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.598 qpair failed and we were unable to recover it. 
00:25:06.598 [2024-07-12 17:14:06.250518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.598 [2024-07-12 17:14:06.250602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.598 [2024-07-12 17:14:06.250627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.598 [2024-07-12 17:14:06.250641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.598 [2024-07-12 17:14:06.250653] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.598 [2024-07-12 17:14:06.250682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.598 qpair failed and we were unable to recover it. 00:25:06.598 [2024-07-12 17:14:06.260563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.598 [2024-07-12 17:14:06.260668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.598 [2024-07-12 17:14:06.260699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.598 [2024-07-12 17:14:06.260715] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.598 [2024-07-12 17:14:06.260750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.598 [2024-07-12 17:14:06.260782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.598 qpair failed and we were unable to recover it. 00:25:06.598 [2024-07-12 17:14:06.270576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.598 [2024-07-12 17:14:06.270711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.598 [2024-07-12 17:14:06.270762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.598 [2024-07-12 17:14:06.270779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.598 [2024-07-12 17:14:06.270791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.598 [2024-07-12 17:14:06.270822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.598 qpair failed and we were unable to recover it. 
00:25:06.598 [2024-07-12 17:14:06.280574] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.598 [2024-07-12 17:14:06.280663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.598 [2024-07-12 17:14:06.280687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.598 [2024-07-12 17:14:06.280702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.598 [2024-07-12 17:14:06.280729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.598 [2024-07-12 17:14:06.280769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.598 qpair failed and we were unable to recover it. 00:25:06.856 [2024-07-12 17:14:06.290619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.856 [2024-07-12 17:14:06.290709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.856 [2024-07-12 17:14:06.290734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.856 [2024-07-12 17:14:06.290758] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.856 [2024-07-12 17:14:06.290771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.856 [2024-07-12 17:14:06.290802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.856 qpair failed and we were unable to recover it. 00:25:06.856 [2024-07-12 17:14:06.300704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.856 [2024-07-12 17:14:06.300816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.856 [2024-07-12 17:14:06.300842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.856 [2024-07-12 17:14:06.300857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.856 [2024-07-12 17:14:06.300874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.856 [2024-07-12 17:14:06.300904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.856 qpair failed and we were unable to recover it. 
00:25:06.856 [2024-07-12 17:14:06.310698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.856 [2024-07-12 17:14:06.310815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.856 [2024-07-12 17:14:06.310842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.856 [2024-07-12 17:14:06.310857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.856 [2024-07-12 17:14:06.310870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.856 [2024-07-12 17:14:06.310900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.856 qpair failed and we were unable to recover it. 00:25:06.856 [2024-07-12 17:14:06.320703] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.856 [2024-07-12 17:14:06.320816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.856 [2024-07-12 17:14:06.320841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.856 [2024-07-12 17:14:06.320856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.856 [2024-07-12 17:14:06.320868] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.856 [2024-07-12 17:14:06.320899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.856 qpair failed and we were unable to recover it. 00:25:06.856 [2024-07-12 17:14:06.330777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.856 [2024-07-12 17:14:06.330864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.856 [2024-07-12 17:14:06.330888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.856 [2024-07-12 17:14:06.330903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.856 [2024-07-12 17:14:06.330916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.856 [2024-07-12 17:14:06.330947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.856 qpair failed and we were unable to recover it. 
00:25:06.856 [2024-07-12 17:14:06.340844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.856 [2024-07-12 17:14:06.340948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.856 [2024-07-12 17:14:06.340975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.856 [2024-07-12 17:14:06.340990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.856 [2024-07-12 17:14:06.341003] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.856 [2024-07-12 17:14:06.341048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.856 qpair failed and we were unable to recover it. 00:25:06.856 [2024-07-12 17:14:06.350818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.856 [2024-07-12 17:14:06.350964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.856 [2024-07-12 17:14:06.350991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.856 [2024-07-12 17:14:06.351007] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.856 [2024-07-12 17:14:06.351020] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.856 [2024-07-12 17:14:06.351065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.856 qpair failed and we were unable to recover it. 00:25:06.856 [2024-07-12 17:14:06.360817] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.856 [2024-07-12 17:14:06.360964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.856 [2024-07-12 17:14:06.360990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.856 [2024-07-12 17:14:06.361005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.856 [2024-07-12 17:14:06.361018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.856 [2024-07-12 17:14:06.361048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.856 qpair failed and we were unable to recover it. 
00:25:06.856 [2024-07-12 17:14:06.370869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.856 [2024-07-12 17:14:06.370990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.856 [2024-07-12 17:14:06.371031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.856 [2024-07-12 17:14:06.371047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.856 [2024-07-12 17:14:06.371059] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.856 [2024-07-12 17:14:06.371089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.856 qpair failed and we were unable to recover it. 00:25:06.856 [2024-07-12 17:14:06.380900] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.856 [2024-07-12 17:14:06.380991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.856 [2024-07-12 17:14:06.381016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.856 [2024-07-12 17:14:06.381046] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.856 [2024-07-12 17:14:06.381058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.856 [2024-07-12 17:14:06.381087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.856 qpair failed and we were unable to recover it. 00:25:06.857 [2024-07-12 17:14:06.390928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.857 [2024-07-12 17:14:06.391041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.857 [2024-07-12 17:14:06.391066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.857 [2024-07-12 17:14:06.391080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.857 [2024-07-12 17:14:06.391097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.857 [2024-07-12 17:14:06.391128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.857 qpair failed and we were unable to recover it. 
00:25:06.857 [2024-07-12 17:14:06.400925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.857 [2024-07-12 17:14:06.401022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.857 [2024-07-12 17:14:06.401064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.857 [2024-07-12 17:14:06.401080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.857 [2024-07-12 17:14:06.401092] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.857 [2024-07-12 17:14:06.401122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.857 qpair failed and we were unable to recover it. 00:25:06.857 [2024-07-12 17:14:06.410944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.857 [2024-07-12 17:14:06.411038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.857 [2024-07-12 17:14:06.411065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.857 [2024-07-12 17:14:06.411079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.857 [2024-07-12 17:14:06.411091] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.857 [2024-07-12 17:14:06.411121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.857 qpair failed and we were unable to recover it. 00:25:06.857 [2024-07-12 17:14:06.421096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.857 [2024-07-12 17:14:06.421182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.857 [2024-07-12 17:14:06.421206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.857 [2024-07-12 17:14:06.421220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.857 [2024-07-12 17:14:06.421233] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.857 [2024-07-12 17:14:06.421262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.857 qpair failed and we were unable to recover it. 
00:25:06.857 [2024-07-12 17:14:06.431045] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.857 [2024-07-12 17:14:06.431156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.857 [2024-07-12 17:14:06.431181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.857 [2024-07-12 17:14:06.431195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.857 [2024-07-12 17:14:06.431207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.857 [2024-07-12 17:14:06.431236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.857 qpair failed and we were unable to recover it. 00:25:06.857 [2024-07-12 17:14:06.441127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.857 [2024-07-12 17:14:06.441225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.857 [2024-07-12 17:14:06.441249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.857 [2024-07-12 17:14:06.441263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.857 [2024-07-12 17:14:06.441275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.857 [2024-07-12 17:14:06.441305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.857 qpair failed and we were unable to recover it. 00:25:06.857 [2024-07-12 17:14:06.451072] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.857 [2024-07-12 17:14:06.451190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.857 [2024-07-12 17:14:06.451216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.857 [2024-07-12 17:14:06.451231] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.857 [2024-07-12 17:14:06.451243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.857 [2024-07-12 17:14:06.451273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.857 qpair failed and we were unable to recover it. 
00:25:06.857 [2024-07-12 17:14:06.461100] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.857 [2024-07-12 17:14:06.461193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.857 [2024-07-12 17:14:06.461218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.857 [2024-07-12 17:14:06.461233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.857 [2024-07-12 17:14:06.461245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.857 [2024-07-12 17:14:06.461274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.857 qpair failed and we were unable to recover it. 00:25:06.857 [2024-07-12 17:14:06.471131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.857 [2024-07-12 17:14:06.471224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.857 [2024-07-12 17:14:06.471248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.857 [2024-07-12 17:14:06.471262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.857 [2024-07-12 17:14:06.471274] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.857 [2024-07-12 17:14:06.471303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.857 qpair failed and we were unable to recover it. 00:25:06.857 [2024-07-12 17:14:06.481198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.857 [2024-07-12 17:14:06.481285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.857 [2024-07-12 17:14:06.481311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.857 [2024-07-12 17:14:06.481331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.857 [2024-07-12 17:14:06.481344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.857 [2024-07-12 17:14:06.481373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.857 qpair failed and we were unable to recover it. 
00:25:06.857 [2024-07-12 17:14:06.491177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.857 [2024-07-12 17:14:06.491276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.857 [2024-07-12 17:14:06.491302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.857 [2024-07-12 17:14:06.491317] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.857 [2024-07-12 17:14:06.491329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.857 [2024-07-12 17:14:06.491359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.857 qpair failed and we were unable to recover it. 00:25:06.857 [2024-07-12 17:14:06.501240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.857 [2024-07-12 17:14:06.501361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.857 [2024-07-12 17:14:06.501386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.857 [2024-07-12 17:14:06.501401] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.857 [2024-07-12 17:14:06.501414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.857 [2024-07-12 17:14:06.501443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.857 qpair failed and we were unable to recover it. 00:25:06.857 [2024-07-12 17:14:06.511280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.857 [2024-07-12 17:14:06.511372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.857 [2024-07-12 17:14:06.511396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.857 [2024-07-12 17:14:06.511410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.857 [2024-07-12 17:14:06.511423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.857 [2024-07-12 17:14:06.511452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.857 qpair failed and we were unable to recover it. 
00:25:06.857 [2024-07-12 17:14:06.521276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.857 [2024-07-12 17:14:06.521365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.858 [2024-07-12 17:14:06.521390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.858 [2024-07-12 17:14:06.521405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.858 [2024-07-12 17:14:06.521417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.858 [2024-07-12 17:14:06.521446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.858 qpair failed and we were unable to recover it. 00:25:06.858 [2024-07-12 17:14:06.531316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.858 [2024-07-12 17:14:06.531398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.858 [2024-07-12 17:14:06.531423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.858 [2024-07-12 17:14:06.531437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.858 [2024-07-12 17:14:06.531449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.858 [2024-07-12 17:14:06.531478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.858 qpair failed and we were unable to recover it. 00:25:06.858 [2024-07-12 17:14:06.541326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.858 [2024-07-12 17:14:06.541447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.858 [2024-07-12 17:14:06.541473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.858 [2024-07-12 17:14:06.541489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.858 [2024-07-12 17:14:06.541501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:06.858 [2024-07-12 17:14:06.541530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:06.858 qpair failed and we were unable to recover it. 
00:25:07.118 [2024-07-12 17:14:06.551353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.118 [2024-07-12 17:14:06.551444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.118 [2024-07-12 17:14:06.551468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.118 [2024-07-12 17:14:06.551483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.118 [2024-07-12 17:14:06.551495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.118 [2024-07-12 17:14:06.551525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.118 qpair failed and we were unable to recover it. 00:25:07.118 [2024-07-12 17:14:06.561389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.118 [2024-07-12 17:14:06.561473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.118 [2024-07-12 17:14:06.561498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.118 [2024-07-12 17:14:06.561512] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.118 [2024-07-12 17:14:06.561525] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.118 [2024-07-12 17:14:06.561554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.118 qpair failed and we were unable to recover it. 00:25:07.118 [2024-07-12 17:14:06.571418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.118 [2024-07-12 17:14:06.571505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.118 [2024-07-12 17:14:06.571533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.118 [2024-07-12 17:14:06.571549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.118 [2024-07-12 17:14:06.571562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.118 [2024-07-12 17:14:06.571591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.118 qpair failed and we were unable to recover it. 
00:25:07.118 [2024-07-12 17:14:06.581445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.118 [2024-07-12 17:14:06.581552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.118 [2024-07-12 17:14:06.581578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.118 [2024-07-12 17:14:06.581593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.118 [2024-07-12 17:14:06.581605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.118 [2024-07-12 17:14:06.581634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.118 qpair failed and we were unable to recover it. 00:25:07.118 [2024-07-12 17:14:06.591551] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.118 [2024-07-12 17:14:06.591680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.118 [2024-07-12 17:14:06.591707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.118 [2024-07-12 17:14:06.591722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.118 [2024-07-12 17:14:06.591734] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.118 [2024-07-12 17:14:06.591789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.118 qpair failed and we were unable to recover it. 00:25:07.118 [2024-07-12 17:14:06.601489] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.118 [2024-07-12 17:14:06.601574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.118 [2024-07-12 17:14:06.601598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.118 [2024-07-12 17:14:06.601612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.118 [2024-07-12 17:14:06.601625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.118 [2024-07-12 17:14:06.601654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.118 qpair failed and we were unable to recover it. 
00:25:07.118 [2024-07-12 17:14:06.611539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.118 [2024-07-12 17:14:06.611639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.118 [2024-07-12 17:14:06.611662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.118 [2024-07-12 17:14:06.611677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.118 [2024-07-12 17:14:06.611690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.118 [2024-07-12 17:14:06.611747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.118 qpair failed and we were unable to recover it. 00:25:07.118 [2024-07-12 17:14:06.621546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.118 [2024-07-12 17:14:06.621633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.118 [2024-07-12 17:14:06.621657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.118 [2024-07-12 17:14:06.621671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.118 [2024-07-12 17:14:06.621684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.118 [2024-07-12 17:14:06.621713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.118 qpair failed and we were unable to recover it. 00:25:07.118 [2024-07-12 17:14:06.631592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.118 [2024-07-12 17:14:06.631683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.118 [2024-07-12 17:14:06.631706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.118 [2024-07-12 17:14:06.631735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.118 [2024-07-12 17:14:06.631756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.118 [2024-07-12 17:14:06.631787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.118 qpair failed and we were unable to recover it. 
00:25:07.118 [2024-07-12 17:14:06.641617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.118 [2024-07-12 17:14:06.641744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.118 [2024-07-12 17:14:06.641769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.118 [2024-07-12 17:14:06.641784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.118 [2024-07-12 17:14:06.641797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.118 [2024-07-12 17:14:06.641828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.118 qpair failed and we were unable to recover it. 00:25:07.118 [2024-07-12 17:14:06.651613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.118 [2024-07-12 17:14:06.651704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.118 [2024-07-12 17:14:06.651752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.118 [2024-07-12 17:14:06.651769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.118 [2024-07-12 17:14:06.651782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.118 [2024-07-12 17:14:06.651813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.118 qpair failed and we were unable to recover it. 00:25:07.118 [2024-07-12 17:14:06.661643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.118 [2024-07-12 17:14:06.661747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.118 [2024-07-12 17:14:06.661778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.118 [2024-07-12 17:14:06.661794] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.118 [2024-07-12 17:14:06.661807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.118 [2024-07-12 17:14:06.661837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.118 qpair failed and we were unable to recover it. 
00:25:07.118 [2024-07-12 17:14:06.671707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.118 [2024-07-12 17:14:06.671875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.118 [2024-07-12 17:14:06.671902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.118 [2024-07-12 17:14:06.671918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.118 [2024-07-12 17:14:06.671930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.118 [2024-07-12 17:14:06.671960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.118 qpair failed and we were unable to recover it. 00:25:07.118 [2024-07-12 17:14:06.681749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.118 [2024-07-12 17:14:06.681842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.118 [2024-07-12 17:14:06.681869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.118 [2024-07-12 17:14:06.681884] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.118 [2024-07-12 17:14:06.681896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.119 [2024-07-12 17:14:06.681927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.119 qpair failed and we were unable to recover it. 00:25:07.119 [2024-07-12 17:14:06.691776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.119 [2024-07-12 17:14:06.691867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.119 [2024-07-12 17:14:06.691892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.119 [2024-07-12 17:14:06.691907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.119 [2024-07-12 17:14:06.691919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.119 [2024-07-12 17:14:06.691950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.119 qpair failed and we were unable to recover it. 
00:25:07.119 [2024-07-12 17:14:06.701859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.119 [2024-07-12 17:14:06.701950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.119 [2024-07-12 17:14:06.701975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.119 [2024-07-12 17:14:06.701990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.119 [2024-07-12 17:14:06.702002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.119 [2024-07-12 17:14:06.702053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.119 qpair failed and we were unable to recover it. 00:25:07.119 [2024-07-12 17:14:06.711822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.119 [2024-07-12 17:14:06.711922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.119 [2024-07-12 17:14:06.711946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.119 [2024-07-12 17:14:06.711961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.119 [2024-07-12 17:14:06.711974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.119 [2024-07-12 17:14:06.712004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.119 qpair failed and we were unable to recover it. 00:25:07.119 [2024-07-12 17:14:06.721865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.119 [2024-07-12 17:14:06.721959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.119 [2024-07-12 17:14:06.721984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.119 [2024-07-12 17:14:06.721999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.119 [2024-07-12 17:14:06.722011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.119 [2024-07-12 17:14:06.722041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.119 qpair failed and we were unable to recover it. 
00:25:07.119 [2024-07-12 17:14:06.731884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.119 [2024-07-12 17:14:06.731970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.119 [2024-07-12 17:14:06.731995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.119 [2024-07-12 17:14:06.732010] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.119 [2024-07-12 17:14:06.732023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.119 [2024-07-12 17:14:06.732068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.119 qpair failed and we were unable to recover it. 00:25:07.119 [2024-07-12 17:14:06.741966] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.119 [2024-07-12 17:14:06.742075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.119 [2024-07-12 17:14:06.742100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.119 [2024-07-12 17:14:06.742115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.119 [2024-07-12 17:14:06.742128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.119 [2024-07-12 17:14:06.742157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.119 qpair failed and we were unable to recover it. 00:25:07.119 [2024-07-12 17:14:06.751976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.119 [2024-07-12 17:14:06.752119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.119 [2024-07-12 17:14:06.752145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.119 [2024-07-12 17:14:06.752160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.119 [2024-07-12 17:14:06.752172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.119 [2024-07-12 17:14:06.752201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.119 qpair failed and we were unable to recover it. 
00:25:07.119 [2024-07-12 17:14:06.761949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.119 [2024-07-12 17:14:06.762056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.119 [2024-07-12 17:14:06.762081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.119 [2024-07-12 17:14:06.762096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.119 [2024-07-12 17:14:06.762107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.119 [2024-07-12 17:14:06.762137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.119 qpair failed and we were unable to recover it. 00:25:07.119 [2024-07-12 17:14:06.771979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.119 [2024-07-12 17:14:06.772082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.119 [2024-07-12 17:14:06.772105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.119 [2024-07-12 17:14:06.772120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.119 [2024-07-12 17:14:06.772132] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.119 [2024-07-12 17:14:06.772161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.119 qpair failed and we were unable to recover it. 00:25:07.119 [2024-07-12 17:14:06.782039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.119 [2024-07-12 17:14:06.782144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.119 [2024-07-12 17:14:06.782169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.119 [2024-07-12 17:14:06.782184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.119 [2024-07-12 17:14:06.782196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.119 [2024-07-12 17:14:06.782226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.119 qpair failed and we were unable to recover it. 
00:25:07.119 [2024-07-12 17:14:06.792057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.119 [2024-07-12 17:14:06.792171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.119 [2024-07-12 17:14:06.792196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.119 [2024-07-12 17:14:06.792211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.119 [2024-07-12 17:14:06.792228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.119 [2024-07-12 17:14:06.792258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.119 qpair failed and we were unable to recover it. 00:25:07.119 [2024-07-12 17:14:06.802085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.119 [2024-07-12 17:14:06.802177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.119 [2024-07-12 17:14:06.802202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.119 [2024-07-12 17:14:06.802216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.119 [2024-07-12 17:14:06.802228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.119 [2024-07-12 17:14:06.802258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.119 qpair failed and we were unable to recover it. 00:25:07.378 [2024-07-12 17:14:06.812130] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.378 [2024-07-12 17:14:06.812220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.378 [2024-07-12 17:14:06.812245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.378 [2024-07-12 17:14:06.812259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.378 [2024-07-12 17:14:06.812272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.378 [2024-07-12 17:14:06.812301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.378 qpair failed and we were unable to recover it. 
00:25:07.378 [2024-07-12 17:14:06.822130] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.379 [2024-07-12 17:14:06.822221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.379 [2024-07-12 17:14:06.822245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.379 [2024-07-12 17:14:06.822259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.379 [2024-07-12 17:14:06.822271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.379 [2024-07-12 17:14:06.822301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.379 qpair failed and we were unable to recover it. 00:25:07.379 [2024-07-12 17:14:06.832181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.379 [2024-07-12 17:14:06.832272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.379 [2024-07-12 17:14:06.832296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.379 [2024-07-12 17:14:06.832311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.379 [2024-07-12 17:14:06.832324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.379 [2024-07-12 17:14:06.832353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.379 qpair failed and we were unable to recover it. 00:25:07.379 [2024-07-12 17:14:06.842216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.379 [2024-07-12 17:14:06.842337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.379 [2024-07-12 17:14:06.842363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.379 [2024-07-12 17:14:06.842378] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.379 [2024-07-12 17:14:06.842390] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.379 [2024-07-12 17:14:06.842419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.379 qpair failed and we were unable to recover it. 
00:25:07.379 [2024-07-12 17:14:06.852238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.379 [2024-07-12 17:14:06.852326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.379 [2024-07-12 17:14:06.852350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.379 [2024-07-12 17:14:06.852365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.379 [2024-07-12 17:14:06.852377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.379 [2024-07-12 17:14:06.852406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.379 qpair failed and we were unable to recover it. 00:25:07.379 [2024-07-12 17:14:06.862258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.379 [2024-07-12 17:14:06.862343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.379 [2024-07-12 17:14:06.862367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.379 [2024-07-12 17:14:06.862381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.379 [2024-07-12 17:14:06.862393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.379 [2024-07-12 17:14:06.862422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.379 qpair failed and we were unable to recover it. 00:25:07.379 [2024-07-12 17:14:06.872303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.379 [2024-07-12 17:14:06.872395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.379 [2024-07-12 17:14:06.872418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.379 [2024-07-12 17:14:06.872432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.379 [2024-07-12 17:14:06.872444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.379 [2024-07-12 17:14:06.872474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.379 qpair failed and we were unable to recover it. 
00:25:07.379 [2024-07-12 17:14:06.882311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.379 [2024-07-12 17:14:06.882402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.379 [2024-07-12 17:14:06.882426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.379 [2024-07-12 17:14:06.882448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.379 [2024-07-12 17:14:06.882461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.379 [2024-07-12 17:14:06.882491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.379 qpair failed and we were unable to recover it. 00:25:07.379 [2024-07-12 17:14:06.892295] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.379 [2024-07-12 17:14:06.892387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.379 [2024-07-12 17:14:06.892411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.379 [2024-07-12 17:14:06.892426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.379 [2024-07-12 17:14:06.892438] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.379 [2024-07-12 17:14:06.892467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.379 qpair failed and we were unable to recover it. 00:25:07.379 [2024-07-12 17:14:06.902361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.379 [2024-07-12 17:14:06.902476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.379 [2024-07-12 17:14:06.902501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.379 [2024-07-12 17:14:06.902517] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.379 [2024-07-12 17:14:06.902529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.379 [2024-07-12 17:14:06.902558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.379 qpair failed and we were unable to recover it. 
00:25:07.379 [2024-07-12 17:14:06.912361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.379 [2024-07-12 17:14:06.912463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.379 [2024-07-12 17:14:06.912488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.379 [2024-07-12 17:14:06.912503] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.379 [2024-07-12 17:14:06.912515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.379 [2024-07-12 17:14:06.912544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.379 qpair failed and we were unable to recover it. 00:25:07.379 [2024-07-12 17:14:06.922412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.379 [2024-07-12 17:14:06.922553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.379 [2024-07-12 17:14:06.922579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.379 [2024-07-12 17:14:06.922594] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.379 [2024-07-12 17:14:06.922606] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.379 [2024-07-12 17:14:06.922644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.379 qpair failed and we were unable to recover it. 00:25:07.379 [2024-07-12 17:14:06.932524] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.379 [2024-07-12 17:14:06.932629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.379 [2024-07-12 17:14:06.932654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.379 [2024-07-12 17:14:06.932668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.379 [2024-07-12 17:14:06.932680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.379 [2024-07-12 17:14:06.932710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.379 qpair failed and we were unable to recover it. 
00:25:07.379 [2024-07-12 17:14:06.942517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.379 [2024-07-12 17:14:06.942609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.379 [2024-07-12 17:14:06.942634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.379 [2024-07-12 17:14:06.942649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.379 [2024-07-12 17:14:06.942661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.379 [2024-07-12 17:14:06.942690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.379 qpair failed and we were unable to recover it. 00:25:07.379 [2024-07-12 17:14:06.952537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.379 [2024-07-12 17:14:06.952638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.379 [2024-07-12 17:14:06.952662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.379 [2024-07-12 17:14:06.952676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.379 [2024-07-12 17:14:06.952688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.379 [2024-07-12 17:14:06.952751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.379 qpair failed and we were unable to recover it. 00:25:07.379 [2024-07-12 17:14:06.962559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.380 [2024-07-12 17:14:06.962655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.380 [2024-07-12 17:14:06.962680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.380 [2024-07-12 17:14:06.962694] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.380 [2024-07-12 17:14:06.962707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.380 [2024-07-12 17:14:06.962760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.380 qpair failed and we were unable to recover it. 
00:25:07.380 [2024-07-12 17:14:06.972543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.380 [2024-07-12 17:14:06.972637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.380 [2024-07-12 17:14:06.972666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.380 [2024-07-12 17:14:06.972681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.380 [2024-07-12 17:14:06.972694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.380 [2024-07-12 17:14:06.972723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.380 qpair failed and we were unable to recover it. 00:25:07.380 [2024-07-12 17:14:06.982581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.380 [2024-07-12 17:14:06.982698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.380 [2024-07-12 17:14:06.982753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.380 [2024-07-12 17:14:06.982772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.380 [2024-07-12 17:14:06.982785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.380 [2024-07-12 17:14:06.982829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.380 qpair failed and we were unable to recover it. 00:25:07.380 [2024-07-12 17:14:06.992664] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.380 [2024-07-12 17:14:06.992773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.380 [2024-07-12 17:14:06.992799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.380 [2024-07-12 17:14:06.992813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.380 [2024-07-12 17:14:06.992826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.380 [2024-07-12 17:14:06.992857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.380 qpair failed and we were unable to recover it. 
00:25:07.380 [2024-07-12 17:14:07.002635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.380 [2024-07-12 17:14:07.002761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.380 [2024-07-12 17:14:07.002786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.380 [2024-07-12 17:14:07.002801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.380 [2024-07-12 17:14:07.002813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.380 [2024-07-12 17:14:07.002844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.380 qpair failed and we were unable to recover it. 00:25:07.380 [2024-07-12 17:14:07.012631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.380 [2024-07-12 17:14:07.012732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.380 [2024-07-12 17:14:07.012780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.380 [2024-07-12 17:14:07.012795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.380 [2024-07-12 17:14:07.012808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.380 [2024-07-12 17:14:07.012844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.380 qpair failed and we were unable to recover it. 00:25:07.380 [2024-07-12 17:14:07.022715] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.380 [2024-07-12 17:14:07.022866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.380 [2024-07-12 17:14:07.022893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.380 [2024-07-12 17:14:07.022908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.380 [2024-07-12 17:14:07.022922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.380 [2024-07-12 17:14:07.022951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.380 qpair failed and we were unable to recover it. 
00:25:07.380 [2024-07-12 17:14:07.032788] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.380 [2024-07-12 17:14:07.032890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.380 [2024-07-12 17:14:07.032917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.380 [2024-07-12 17:14:07.032932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.380 [2024-07-12 17:14:07.032945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.380 [2024-07-12 17:14:07.032976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.380 qpair failed and we were unable to recover it. 00:25:07.380 [2024-07-12 17:14:07.042790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.380 [2024-07-12 17:14:07.042909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.380 [2024-07-12 17:14:07.042936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.380 [2024-07-12 17:14:07.042951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.380 [2024-07-12 17:14:07.042963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.380 [2024-07-12 17:14:07.042993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.380 qpair failed and we were unable to recover it. 00:25:07.380 [2024-07-12 17:14:07.052868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.380 [2024-07-12 17:14:07.052958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.380 [2024-07-12 17:14:07.052985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.380 [2024-07-12 17:14:07.053001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.380 [2024-07-12 17:14:07.053013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.380 [2024-07-12 17:14:07.053058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.380 qpair failed and we were unable to recover it. 
00:25:07.380 [2024-07-12 17:14:07.062829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.380 [2024-07-12 17:14:07.062925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.380 [2024-07-12 17:14:07.062956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.380 [2024-07-12 17:14:07.062973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.380 [2024-07-12 17:14:07.062985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.380 [2024-07-12 17:14:07.063015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.380 qpair failed and we were unable to recover it. 00:25:07.639 [2024-07-12 17:14:07.072843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.639 [2024-07-12 17:14:07.072975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.639 [2024-07-12 17:14:07.073002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.639 [2024-07-12 17:14:07.073017] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.639 [2024-07-12 17:14:07.073029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.639 [2024-07-12 17:14:07.073070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.639 qpair failed and we were unable to recover it. 00:25:07.639 [2024-07-12 17:14:07.082861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.639 [2024-07-12 17:14:07.082959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.639 [2024-07-12 17:14:07.082987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.639 [2024-07-12 17:14:07.083002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.639 [2024-07-12 17:14:07.083014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.639 [2024-07-12 17:14:07.083060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.639 qpair failed and we were unable to recover it. 
00:25:07.639 [2024-07-12 17:14:07.092910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.639 [2024-07-12 17:14:07.093007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.639 [2024-07-12 17:14:07.093032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.639 [2024-07-12 17:14:07.093047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.639 [2024-07-12 17:14:07.093075] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.639 [2024-07-12 17:14:07.093105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.639 qpair failed and we were unable to recover it. 00:25:07.639 [2024-07-12 17:14:07.102906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.639 [2024-07-12 17:14:07.103005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.639 [2024-07-12 17:14:07.103046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.639 [2024-07-12 17:14:07.103061] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.639 [2024-07-12 17:14:07.103074] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.639 [2024-07-12 17:14:07.103117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.639 qpair failed and we were unable to recover it. 00:25:07.639 [2024-07-12 17:14:07.113048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.639 [2024-07-12 17:14:07.113152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.639 [2024-07-12 17:14:07.113175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.639 [2024-07-12 17:14:07.113200] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.639 [2024-07-12 17:14:07.113211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.639 [2024-07-12 17:14:07.113241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.639 qpair failed and we were unable to recover it. 
00:25:07.639 [2024-07-12 17:14:07.122998] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.640 [2024-07-12 17:14:07.123146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.640 [2024-07-12 17:14:07.123171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.640 [2024-07-12 17:14:07.123186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.640 [2024-07-12 17:14:07.123198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.640 [2024-07-12 17:14:07.123237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.640 qpair failed and we were unable to recover it. 00:25:07.640 [2024-07-12 17:14:07.133075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.640 [2024-07-12 17:14:07.133170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.640 [2024-07-12 17:14:07.133194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.640 [2024-07-12 17:14:07.133209] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.640 [2024-07-12 17:14:07.133221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.640 [2024-07-12 17:14:07.133251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.640 qpair failed and we were unable to recover it. 00:25:07.640 [2024-07-12 17:14:07.143039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.640 [2024-07-12 17:14:07.143147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.640 [2024-07-12 17:14:07.143182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.640 [2024-07-12 17:14:07.143196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.640 [2024-07-12 17:14:07.143208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.640 [2024-07-12 17:14:07.143237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.640 qpair failed and we were unable to recover it. 
00:25:07.640 [2024-07-12 17:14:07.153101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.640 [2024-07-12 17:14:07.153224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.640 [2024-07-12 17:14:07.153250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.640 [2024-07-12 17:14:07.153265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.640 [2024-07-12 17:14:07.153277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.640 [2024-07-12 17:14:07.153306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.640 qpair failed and we were unable to recover it. 00:25:07.640 [2024-07-12 17:14:07.163125] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.640 [2024-07-12 17:14:07.163250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.640 [2024-07-12 17:14:07.163276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.640 [2024-07-12 17:14:07.163291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.640 [2024-07-12 17:14:07.163303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.640 [2024-07-12 17:14:07.163332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.640 qpair failed and we were unable to recover it. 00:25:07.640 [2024-07-12 17:14:07.173143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.640 [2024-07-12 17:14:07.173232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.640 [2024-07-12 17:14:07.173257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.640 [2024-07-12 17:14:07.173272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.640 [2024-07-12 17:14:07.173284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.640 [2024-07-12 17:14:07.173314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.640 qpair failed and we were unable to recover it. 
00:25:07.640 [2024-07-12 17:14:07.183179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.640 [2024-07-12 17:14:07.183273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.640 [2024-07-12 17:14:07.183297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.640 [2024-07-12 17:14:07.183311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.640 [2024-07-12 17:14:07.183323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.640 [2024-07-12 17:14:07.183351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.640 qpair failed and we were unable to recover it. 00:25:07.640 [2024-07-12 17:14:07.193300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.640 [2024-07-12 17:14:07.193445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.640 [2024-07-12 17:14:07.193469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.640 [2024-07-12 17:14:07.193483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.640 [2024-07-12 17:14:07.193511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.640 [2024-07-12 17:14:07.193540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.640 qpair failed and we were unable to recover it. 00:25:07.640 [2024-07-12 17:14:07.203315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.640 [2024-07-12 17:14:07.203411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.640 [2024-07-12 17:14:07.203435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.640 [2024-07-12 17:14:07.203449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.640 [2024-07-12 17:14:07.203465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.640 [2024-07-12 17:14:07.203494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.640 qpair failed and we were unable to recover it. 
00:25:07.640 [2024-07-12 17:14:07.213263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.640 [2024-07-12 17:14:07.213354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.640 [2024-07-12 17:14:07.213379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.640 [2024-07-12 17:14:07.213394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.640 [2024-07-12 17:14:07.213406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.640 [2024-07-12 17:14:07.213436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.640 qpair failed and we were unable to recover it. 00:25:07.640 [2024-07-12 17:14:07.223245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.640 [2024-07-12 17:14:07.223337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.640 [2024-07-12 17:14:07.223361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.640 [2024-07-12 17:14:07.223376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.640 [2024-07-12 17:14:07.223388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.640 [2024-07-12 17:14:07.223417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.640 qpair failed and we were unable to recover it. 00:25:07.640 [2024-07-12 17:14:07.233360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.640 [2024-07-12 17:14:07.233486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.640 [2024-07-12 17:14:07.233511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.640 [2024-07-12 17:14:07.233526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.640 [2024-07-12 17:14:07.233538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.640 [2024-07-12 17:14:07.233568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.640 qpair failed and we were unable to recover it. 
00:25:07.640 [2024-07-12 17:14:07.243361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.640 [2024-07-12 17:14:07.243496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.640 [2024-07-12 17:14:07.243521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.640 [2024-07-12 17:14:07.243536] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.640 [2024-07-12 17:14:07.243548] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.640 [2024-07-12 17:14:07.243577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.641 qpair failed and we were unable to recover it. 00:25:07.641 [2024-07-12 17:14:07.253369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.641 [2024-07-12 17:14:07.253456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.641 [2024-07-12 17:14:07.253481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.641 [2024-07-12 17:14:07.253495] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.641 [2024-07-12 17:14:07.253508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.641 [2024-07-12 17:14:07.253537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.641 qpair failed and we were unable to recover it. 00:25:07.641 [2024-07-12 17:14:07.263459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.641 [2024-07-12 17:14:07.263544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.641 [2024-07-12 17:14:07.263569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.641 [2024-07-12 17:14:07.263583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.641 [2024-07-12 17:14:07.263595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.641 [2024-07-12 17:14:07.263624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.641 qpair failed and we were unable to recover it. 
00:25:07.641 [2024-07-12 17:14:07.273443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.641 [2024-07-12 17:14:07.273539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.641 [2024-07-12 17:14:07.273562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.641 [2024-07-12 17:14:07.273577] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.641 [2024-07-12 17:14:07.273589] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.641 [2024-07-12 17:14:07.273619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.641 qpair failed and we were unable to recover it. 00:25:07.641 [2024-07-12 17:14:07.283472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.641 [2024-07-12 17:14:07.283748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.641 [2024-07-12 17:14:07.283776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.641 [2024-07-12 17:14:07.283798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.641 [2024-07-12 17:14:07.283812] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.641 [2024-07-12 17:14:07.283843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.641 qpair failed and we were unable to recover it. 00:25:07.641 [2024-07-12 17:14:07.293479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.641 [2024-07-12 17:14:07.293571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.641 [2024-07-12 17:14:07.293595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.641 [2024-07-12 17:14:07.293609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.641 [2024-07-12 17:14:07.293622] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.641 [2024-07-12 17:14:07.293652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.641 qpair failed and we were unable to recover it. 
00:25:07.641 [2024-07-12 17:14:07.303504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.641 [2024-07-12 17:14:07.303598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.641 [2024-07-12 17:14:07.303621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.641 [2024-07-12 17:14:07.303635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.641 [2024-07-12 17:14:07.303648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.641 [2024-07-12 17:14:07.303677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.641 qpair failed and we were unable to recover it. 00:25:07.641 [2024-07-12 17:14:07.313544] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.641 [2024-07-12 17:14:07.313638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.641 [2024-07-12 17:14:07.313662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.641 [2024-07-12 17:14:07.313677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.641 [2024-07-12 17:14:07.313689] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.641 [2024-07-12 17:14:07.313733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.641 qpair failed and we were unable to recover it. 00:25:07.641 [2024-07-12 17:14:07.323558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.641 [2024-07-12 17:14:07.323645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.641 [2024-07-12 17:14:07.323669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.641 [2024-07-12 17:14:07.323684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.641 [2024-07-12 17:14:07.323696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.641 [2024-07-12 17:14:07.323748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.641 qpair failed and we were unable to recover it. 
00:25:07.899 [2024-07-12 17:14:07.333619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.899 [2024-07-12 17:14:07.333706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.899 [2024-07-12 17:14:07.333731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.899 [2024-07-12 17:14:07.333755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.899 [2024-07-12 17:14:07.333769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.899 [2024-07-12 17:14:07.333800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.899 qpair failed and we were unable to recover it. 00:25:07.899 [2024-07-12 17:14:07.343595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.899 [2024-07-12 17:14:07.343681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.899 [2024-07-12 17:14:07.343706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.899 [2024-07-12 17:14:07.343746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.899 [2024-07-12 17:14:07.343763] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.899 [2024-07-12 17:14:07.343795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.899 qpair failed and we were unable to recover it. 00:25:07.899 [2024-07-12 17:14:07.353727] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.899 [2024-07-12 17:14:07.353835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.899 [2024-07-12 17:14:07.353859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.899 [2024-07-12 17:14:07.353874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.899 [2024-07-12 17:14:07.353887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.899 [2024-07-12 17:14:07.353918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.899 qpair failed and we were unable to recover it. 
00:25:07.899 [2024-07-12 17:14:07.363650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.899 [2024-07-12 17:14:07.363778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.899 [2024-07-12 17:14:07.363803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.899 [2024-07-12 17:14:07.363819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.899 [2024-07-12 17:14:07.363832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.899 [2024-07-12 17:14:07.363862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.899 qpair failed and we were unable to recover it. 00:25:07.900 [2024-07-12 17:14:07.373690] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.900 [2024-07-12 17:14:07.373805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.900 [2024-07-12 17:14:07.373838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.900 [2024-07-12 17:14:07.373861] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.900 [2024-07-12 17:14:07.373874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.900 [2024-07-12 17:14:07.373906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.900 qpair failed and we were unable to recover it. 00:25:07.900 [2024-07-12 17:14:07.383773] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.900 [2024-07-12 17:14:07.383899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.900 [2024-07-12 17:14:07.383925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.900 [2024-07-12 17:14:07.383940] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.900 [2024-07-12 17:14:07.383953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.900 [2024-07-12 17:14:07.383984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.900 qpair failed and we were unable to recover it. 
00:25:07.900 [2024-07-12 17:14:07.393805] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.900 [2024-07-12 17:14:07.393904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.900 [2024-07-12 17:14:07.393928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.900 [2024-07-12 17:14:07.393943] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.900 [2024-07-12 17:14:07.393957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.900 [2024-07-12 17:14:07.393987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.900 qpair failed and we were unable to recover it. 00:25:07.900 [2024-07-12 17:14:07.404066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.900 [2024-07-12 17:14:07.404169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.900 [2024-07-12 17:14:07.404193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.900 [2024-07-12 17:14:07.404208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.900 [2024-07-12 17:14:07.404220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.900 [2024-07-12 17:14:07.404250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.900 qpair failed and we were unable to recover it. 00:25:07.900 [2024-07-12 17:14:07.413866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.900 [2024-07-12 17:14:07.413966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.900 [2024-07-12 17:14:07.413990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.900 [2024-07-12 17:14:07.414005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.900 [2024-07-12 17:14:07.414017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.900 [2024-07-12 17:14:07.414063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.900 qpair failed and we were unable to recover it. 
00:25:07.900 [2024-07-12 17:14:07.423926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.900 [2024-07-12 17:14:07.424086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.900 [2024-07-12 17:14:07.424111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.900 [2024-07-12 17:14:07.424126] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.900 [2024-07-12 17:14:07.424138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.900 [2024-07-12 17:14:07.424168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.900 qpair failed and we were unable to recover it. 00:25:07.900 [2024-07-12 17:14:07.433945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.900 [2024-07-12 17:14:07.434057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.900 [2024-07-12 17:14:07.434082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.900 [2024-07-12 17:14:07.434096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.900 [2024-07-12 17:14:07.434109] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.900 [2024-07-12 17:14:07.434138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.900 qpair failed and we were unable to recover it. 00:25:07.900 [2024-07-12 17:14:07.443965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.900 [2024-07-12 17:14:07.444067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.900 [2024-07-12 17:14:07.444092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.900 [2024-07-12 17:14:07.444106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.900 [2024-07-12 17:14:07.444118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.900 [2024-07-12 17:14:07.444148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.900 qpair failed and we were unable to recover it. 
00:25:07.900 [2024-07-12 17:14:07.454015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.900 [2024-07-12 17:14:07.454123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.900 [2024-07-12 17:14:07.454147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.900 [2024-07-12 17:14:07.454162] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.900 [2024-07-12 17:14:07.454174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.900 [2024-07-12 17:14:07.454204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.900 qpair failed and we were unable to recover it. 00:25:07.900 [2024-07-12 17:14:07.464036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.900 [2024-07-12 17:14:07.464127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.900 [2024-07-12 17:14:07.464155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.900 [2024-07-12 17:14:07.464170] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.900 [2024-07-12 17:14:07.464182] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.900 [2024-07-12 17:14:07.464212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.900 qpair failed and we were unable to recover it. 00:25:07.900 [2024-07-12 17:14:07.473986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.900 [2024-07-12 17:14:07.474088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.900 [2024-07-12 17:14:07.474129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.900 [2024-07-12 17:14:07.474144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.900 [2024-07-12 17:14:07.474156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.900 [2024-07-12 17:14:07.474187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.900 qpair failed and we were unable to recover it. 
00:25:07.900 [2024-07-12 17:14:07.484052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.900 [2024-07-12 17:14:07.484148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.900 [2024-07-12 17:14:07.484172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.900 [2024-07-12 17:14:07.484186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.900 [2024-07-12 17:14:07.484199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.900 [2024-07-12 17:14:07.484228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.900 qpair failed and we were unable to recover it. 00:25:07.900 [2024-07-12 17:14:07.494048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.900 [2024-07-12 17:14:07.494140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.900 [2024-07-12 17:14:07.494164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.900 [2024-07-12 17:14:07.494179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.900 [2024-07-12 17:14:07.494191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.900 [2024-07-12 17:14:07.494220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.900 qpair failed and we were unable to recover it. 00:25:07.900 [2024-07-12 17:14:07.504089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.900 [2024-07-12 17:14:07.504182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.900 [2024-07-12 17:14:07.504206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.900 [2024-07-12 17:14:07.504221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.900 [2024-07-12 17:14:07.504233] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.900 [2024-07-12 17:14:07.504268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.900 qpair failed and we were unable to recover it. 
00:25:07.900 [2024-07-12 17:14:07.514108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.900 [2024-07-12 17:14:07.514203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.901 [2024-07-12 17:14:07.514228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.901 [2024-07-12 17:14:07.514243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.901 [2024-07-12 17:14:07.514255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.901 [2024-07-12 17:14:07.514284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.901 qpair failed and we were unable to recover it. 00:25:07.901 [2024-07-12 17:14:07.524130] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.901 [2024-07-12 17:14:07.524224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.901 [2024-07-12 17:14:07.524251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.901 [2024-07-12 17:14:07.524266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.901 [2024-07-12 17:14:07.524278] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.901 [2024-07-12 17:14:07.524307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.901 qpair failed and we were unable to recover it. 00:25:07.901 [2024-07-12 17:14:07.534136] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.901 [2024-07-12 17:14:07.534224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.901 [2024-07-12 17:14:07.534248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.901 [2024-07-12 17:14:07.534262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.901 [2024-07-12 17:14:07.534275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.901 [2024-07-12 17:14:07.534304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.901 qpair failed and we were unable to recover it. 
00:25:07.901 [2024-07-12 17:14:07.544246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.901 [2024-07-12 17:14:07.544335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.901 [2024-07-12 17:14:07.544359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.901 [2024-07-12 17:14:07.544373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.901 [2024-07-12 17:14:07.544385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.901 [2024-07-12 17:14:07.544421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.901 qpair failed and we were unable to recover it. 00:25:07.901 [2024-07-12 17:14:07.554248] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.901 [2024-07-12 17:14:07.554339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.901 [2024-07-12 17:14:07.554367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.901 [2024-07-12 17:14:07.554382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.901 [2024-07-12 17:14:07.554394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.901 [2024-07-12 17:14:07.554424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.901 qpair failed and we were unable to recover it. 00:25:07.901 [2024-07-12 17:14:07.564243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.901 [2024-07-12 17:14:07.564336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.901 [2024-07-12 17:14:07.564359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.901 [2024-07-12 17:14:07.564374] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.901 [2024-07-12 17:14:07.564386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.901 [2024-07-12 17:14:07.564415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.901 qpair failed and we were unable to recover it. 
00:25:07.901 [2024-07-12 17:14:07.574260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.901 [2024-07-12 17:14:07.574345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.901 [2024-07-12 17:14:07.574369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.901 [2024-07-12 17:14:07.574383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.901 [2024-07-12 17:14:07.574395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.901 [2024-07-12 17:14:07.574424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.901 qpair failed and we were unable to recover it. 00:25:07.901 [2024-07-12 17:14:07.584276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.901 [2024-07-12 17:14:07.584361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.901 [2024-07-12 17:14:07.584385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.901 [2024-07-12 17:14:07.584399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.901 [2024-07-12 17:14:07.584411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:07.901 [2024-07-12 17:14:07.584440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:07.901 qpair failed and we were unable to recover it. 00:25:08.160 [2024-07-12 17:14:07.594339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.160 [2024-07-12 17:14:07.594441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.160 [2024-07-12 17:14:07.594467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.160 [2024-07-12 17:14:07.594482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.160 [2024-07-12 17:14:07.594499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.160 [2024-07-12 17:14:07.594529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.160 qpair failed and we were unable to recover it. 
00:25:08.160 [2024-07-12 17:14:07.604351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.160 [2024-07-12 17:14:07.604477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.160 [2024-07-12 17:14:07.604503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.160 [2024-07-12 17:14:07.604518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.160 [2024-07-12 17:14:07.604530] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.160 [2024-07-12 17:14:07.604559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.160 qpair failed and we were unable to recover it. 00:25:08.160 [2024-07-12 17:14:07.614364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.160 [2024-07-12 17:14:07.614455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.160 [2024-07-12 17:14:07.614481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.160 [2024-07-12 17:14:07.614496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.160 [2024-07-12 17:14:07.614509] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.160 [2024-07-12 17:14:07.614539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.160 qpair failed and we were unable to recover it. 00:25:08.160 [2024-07-12 17:14:07.624391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.160 [2024-07-12 17:14:07.624509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.160 [2024-07-12 17:14:07.624536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.160 [2024-07-12 17:14:07.624551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.160 [2024-07-12 17:14:07.624563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.160 [2024-07-12 17:14:07.624592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.160 qpair failed and we were unable to recover it. 
00:25:08.160 [2024-07-12 17:14:07.634478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.160 [2024-07-12 17:14:07.634578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.160 [2024-07-12 17:14:07.634602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.160 [2024-07-12 17:14:07.634617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.160 [2024-07-12 17:14:07.634629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.160 [2024-07-12 17:14:07.634663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.160 qpair failed and we were unable to recover it. 00:25:08.160 [2024-07-12 17:14:07.644438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.160 [2024-07-12 17:14:07.644592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.160 [2024-07-12 17:14:07.644617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.160 [2024-07-12 17:14:07.644632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.160 [2024-07-12 17:14:07.644645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.160 [2024-07-12 17:14:07.644674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.160 qpair failed and we were unable to recover it. 00:25:08.160 [2024-07-12 17:14:07.654552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.160 [2024-07-12 17:14:07.654645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.160 [2024-07-12 17:14:07.654670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.160 [2024-07-12 17:14:07.654684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.160 [2024-07-12 17:14:07.654697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.160 [2024-07-12 17:14:07.654726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.160 qpair failed and we were unable to recover it. 
00:25:08.160 [2024-07-12 17:14:07.664545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.160 [2024-07-12 17:14:07.664662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.160 [2024-07-12 17:14:07.664686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.160 [2024-07-12 17:14:07.664701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.160 [2024-07-12 17:14:07.664728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.160 [2024-07-12 17:14:07.664767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.160 qpair failed and we were unable to recover it. 00:25:08.160 [2024-07-12 17:14:07.674575] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.160 [2024-07-12 17:14:07.674673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.160 [2024-07-12 17:14:07.674697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.160 [2024-07-12 17:14:07.674711] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.160 [2024-07-12 17:14:07.674723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.160 [2024-07-12 17:14:07.674775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.160 qpair failed and we were unable to recover it. 00:25:08.160 [2024-07-12 17:14:07.684604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.160 [2024-07-12 17:14:07.684728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.160 [2024-07-12 17:14:07.684778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.160 [2024-07-12 17:14:07.684802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.160 [2024-07-12 17:14:07.684816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.160 [2024-07-12 17:14:07.684847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.160 qpair failed and we were unable to recover it. 
00:25:08.160 [2024-07-12 17:14:07.694660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.160 [2024-07-12 17:14:07.694781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.160 [2024-07-12 17:14:07.694806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.160 [2024-07-12 17:14:07.694821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.160 [2024-07-12 17:14:07.694834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.160 [2024-07-12 17:14:07.694873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.160 qpair failed and we were unable to recover it. 00:25:08.160 [2024-07-12 17:14:07.704642] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.160 [2024-07-12 17:14:07.704759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.160 [2024-07-12 17:14:07.704784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.160 [2024-07-12 17:14:07.704799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.160 [2024-07-12 17:14:07.704812] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.160 [2024-07-12 17:14:07.704843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.160 qpair failed and we were unable to recover it. 00:25:08.160 [2024-07-12 17:14:07.714667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.160 [2024-07-12 17:14:07.714787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.161 [2024-07-12 17:14:07.714812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.161 [2024-07-12 17:14:07.714826] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.161 [2024-07-12 17:14:07.714839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.161 [2024-07-12 17:14:07.714870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.161 qpair failed and we were unable to recover it. 
00:25:08.161 [2024-07-12 17:14:07.724690] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.161 [2024-07-12 17:14:07.724807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.161 [2024-07-12 17:14:07.724832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.161 [2024-07-12 17:14:07.724847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.161 [2024-07-12 17:14:07.724859] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.161 [2024-07-12 17:14:07.724890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.161 qpair failed and we were unable to recover it. 00:25:08.161 [2024-07-12 17:14:07.734734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.161 [2024-07-12 17:14:07.734847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.161 [2024-07-12 17:14:07.734872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.161 [2024-07-12 17:14:07.734887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.161 [2024-07-12 17:14:07.734899] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.161 [2024-07-12 17:14:07.734929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.161 qpair failed and we were unable to recover it. 00:25:08.161 [2024-07-12 17:14:07.744774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.161 [2024-07-12 17:14:07.744869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.161 [2024-07-12 17:14:07.744894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.161 [2024-07-12 17:14:07.744909] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.161 [2024-07-12 17:14:07.744921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.161 [2024-07-12 17:14:07.744952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.161 qpair failed and we were unable to recover it. 
00:25:08.161 [2024-07-12 17:14:07.754807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.161 [2024-07-12 17:14:07.754902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.161 [2024-07-12 17:14:07.754926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.161 [2024-07-12 17:14:07.754941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.161 [2024-07-12 17:14:07.754953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.161 [2024-07-12 17:14:07.754984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.161 qpair failed and we were unable to recover it. 00:25:08.161 [2024-07-12 17:14:07.764915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.161 [2024-07-12 17:14:07.765053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.161 [2024-07-12 17:14:07.765077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.161 [2024-07-12 17:14:07.765091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.161 [2024-07-12 17:14:07.765103] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.161 [2024-07-12 17:14:07.765133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.161 qpair failed and we were unable to recover it. 00:25:08.161 [2024-07-12 17:14:07.774826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.161 [2024-07-12 17:14:07.774916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.161 [2024-07-12 17:14:07.774941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.161 [2024-07-12 17:14:07.774961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.161 [2024-07-12 17:14:07.774975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.161 [2024-07-12 17:14:07.775006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.161 qpair failed and we were unable to recover it. 
00:25:08.161 [2024-07-12 17:14:07.784884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.161 [2024-07-12 17:14:07.784985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.161 [2024-07-12 17:14:07.785010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.161 [2024-07-12 17:14:07.785025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.161 [2024-07-12 17:14:07.785053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.161 [2024-07-12 17:14:07.785082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.161 qpair failed and we were unable to recover it. 00:25:08.161 [2024-07-12 17:14:07.794959] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.161 [2024-07-12 17:14:07.795062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.161 [2024-07-12 17:14:07.795086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.161 [2024-07-12 17:14:07.795101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.161 [2024-07-12 17:14:07.795113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.161 [2024-07-12 17:14:07.795142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.161 qpair failed and we were unable to recover it. 00:25:08.161 [2024-07-12 17:14:07.804930] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.161 [2024-07-12 17:14:07.805021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.161 [2024-07-12 17:14:07.805060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.161 [2024-07-12 17:14:07.805075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.161 [2024-07-12 17:14:07.805087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.161 [2024-07-12 17:14:07.805117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.161 qpair failed and we were unable to recover it. 
00:25:08.161 [2024-07-12 17:14:07.814964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.161 [2024-07-12 17:14:07.815055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.161 [2024-07-12 17:14:07.815079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.161 [2024-07-12 17:14:07.815109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.161 [2024-07-12 17:14:07.815122] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.161 [2024-07-12 17:14:07.815152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.161 qpair failed and we were unable to recover it. 00:25:08.161 [2024-07-12 17:14:07.825053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.161 [2024-07-12 17:14:07.825146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.161 [2024-07-12 17:14:07.825170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.161 [2024-07-12 17:14:07.825184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.161 [2024-07-12 17:14:07.825197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.161 [2024-07-12 17:14:07.825226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.161 qpair failed and we were unable to recover it. 00:25:08.161 [2024-07-12 17:14:07.835053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.161 [2024-07-12 17:14:07.835160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.161 [2024-07-12 17:14:07.835185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.161 [2024-07-12 17:14:07.835200] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.161 [2024-07-12 17:14:07.835212] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.161 [2024-07-12 17:14:07.835242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.161 qpair failed and we were unable to recover it. 
00:25:08.161 [2024-07-12 17:14:07.845080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.161 [2024-07-12 17:14:07.845171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.161 [2024-07-12 17:14:07.845197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.161 [2024-07-12 17:14:07.845211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.161 [2024-07-12 17:14:07.845223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.161 [2024-07-12 17:14:07.845253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.161 qpair failed and we were unable to recover it. 00:25:08.420 [2024-07-12 17:14:07.855066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.420 [2024-07-12 17:14:07.855196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.420 [2024-07-12 17:14:07.855221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.420 [2024-07-12 17:14:07.855236] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.420 [2024-07-12 17:14:07.855249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.420 [2024-07-12 17:14:07.855279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.420 qpair failed and we were unable to recover it. 00:25:08.420 [2024-07-12 17:14:07.865127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.420 [2024-07-12 17:14:07.865216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.420 [2024-07-12 17:14:07.865244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.420 [2024-07-12 17:14:07.865260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.420 [2024-07-12 17:14:07.865273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.420 [2024-07-12 17:14:07.865302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.420 qpair failed and we were unable to recover it. 
00:25:08.421 [2024-07-12 17:14:07.875138] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.421 [2024-07-12 17:14:07.875239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.421 [2024-07-12 17:14:07.875264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.421 [2024-07-12 17:14:07.875279] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.421 [2024-07-12 17:14:07.875291] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.421 [2024-07-12 17:14:07.875321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.421 qpair failed and we were unable to recover it. 00:25:08.421 [2024-07-12 17:14:07.885159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.421 [2024-07-12 17:14:07.885265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.421 [2024-07-12 17:14:07.885289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.421 [2024-07-12 17:14:07.885303] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.421 [2024-07-12 17:14:07.885315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.421 [2024-07-12 17:14:07.885345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.421 qpair failed and we were unable to recover it. 00:25:08.421 [2024-07-12 17:14:07.895171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.421 [2024-07-12 17:14:07.895260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.421 [2024-07-12 17:14:07.895285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.421 [2024-07-12 17:14:07.895300] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.421 [2024-07-12 17:14:07.895312] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.421 [2024-07-12 17:14:07.895352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.421 qpair failed and we were unable to recover it. 
00:25:08.421 [2024-07-12 17:14:07.905221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.421 [2024-07-12 17:14:07.905353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.421 [2024-07-12 17:14:07.905380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.421 [2024-07-12 17:14:07.905395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.421 [2024-07-12 17:14:07.905407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.421 [2024-07-12 17:14:07.905443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.421 qpair failed and we were unable to recover it. 00:25:08.421 [2024-07-12 17:14:07.915304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.421 [2024-07-12 17:14:07.915406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.421 [2024-07-12 17:14:07.915430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.421 [2024-07-12 17:14:07.915445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.421 [2024-07-12 17:14:07.915458] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.421 [2024-07-12 17:14:07.915487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.421 qpair failed and we were unable to recover it. 00:25:08.421 [2024-07-12 17:14:07.925253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.421 [2024-07-12 17:14:07.925341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.421 [2024-07-12 17:14:07.925365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.421 [2024-07-12 17:14:07.925380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.421 [2024-07-12 17:14:07.925392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.421 [2024-07-12 17:14:07.925421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.421 qpair failed and we were unable to recover it. 
00:25:08.421 [2024-07-12 17:14:07.935324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.421 [2024-07-12 17:14:07.935418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.421 [2024-07-12 17:14:07.935442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.421 [2024-07-12 17:14:07.935456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.421 [2024-07-12 17:14:07.935469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.421 [2024-07-12 17:14:07.935498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.421 qpair failed and we were unable to recover it. 00:25:08.421 [2024-07-12 17:14:07.945319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.421 [2024-07-12 17:14:07.945404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.421 [2024-07-12 17:14:07.945429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.421 [2024-07-12 17:14:07.945444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.421 [2024-07-12 17:14:07.945455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.421 [2024-07-12 17:14:07.945485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.421 qpair failed and we were unable to recover it. 00:25:08.421 [2024-07-12 17:14:07.955415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.421 [2024-07-12 17:14:07.955560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.421 [2024-07-12 17:14:07.955591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.421 [2024-07-12 17:14:07.955607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.421 [2024-07-12 17:14:07.955620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.421 [2024-07-12 17:14:07.955650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.421 qpair failed and we were unable to recover it. 
00:25:08.421 [2024-07-12 17:14:07.965370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.421 [2024-07-12 17:14:07.965461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.421 [2024-07-12 17:14:07.965485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.421 [2024-07-12 17:14:07.965500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.421 [2024-07-12 17:14:07.965513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.421 [2024-07-12 17:14:07.965542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.421 qpair failed and we were unable to recover it. 00:25:08.421 [2024-07-12 17:14:07.975363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.421 [2024-07-12 17:14:07.975458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.421 [2024-07-12 17:14:07.975482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.421 [2024-07-12 17:14:07.975497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.421 [2024-07-12 17:14:07.975509] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.421 [2024-07-12 17:14:07.975539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.421 qpair failed and we were unable to recover it. 00:25:08.421 [2024-07-12 17:14:07.985432] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.421 [2024-07-12 17:14:07.985520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.421 [2024-07-12 17:14:07.985544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.421 [2024-07-12 17:14:07.985558] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.421 [2024-07-12 17:14:07.985571] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.421 [2024-07-12 17:14:07.985600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.421 qpair failed and we were unable to recover it. 
00:25:08.421 [2024-07-12 17:14:07.995511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.421 [2024-07-12 17:14:07.995605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.421 [2024-07-12 17:14:07.995628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.421 [2024-07-12 17:14:07.995642] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.421 [2024-07-12 17:14:07.995660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.421 [2024-07-12 17:14:07.995689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.421 qpair failed and we were unable to recover it. 00:25:08.421 [2024-07-12 17:14:08.005495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.421 [2024-07-12 17:14:08.005584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.422 [2024-07-12 17:14:08.005608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.422 [2024-07-12 17:14:08.005622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.422 [2024-07-12 17:14:08.005634] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.422 [2024-07-12 17:14:08.005664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.422 qpair failed and we were unable to recover it. 00:25:08.422 [2024-07-12 17:14:08.015542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.422 [2024-07-12 17:14:08.015677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.422 [2024-07-12 17:14:08.015701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.422 [2024-07-12 17:14:08.015731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.422 [2024-07-12 17:14:08.015753] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.422 [2024-07-12 17:14:08.015785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.422 qpair failed and we were unable to recover it. 
00:25:08.422 [2024-07-12 17:14:08.025514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.422 [2024-07-12 17:14:08.025608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.422 [2024-07-12 17:14:08.025632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.422 [2024-07-12 17:14:08.025646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.422 [2024-07-12 17:14:08.025658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.422 [2024-07-12 17:14:08.025687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.422 qpair failed and we were unable to recover it. 00:25:08.422 [2024-07-12 17:14:08.035548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.422 [2024-07-12 17:14:08.035640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.422 [2024-07-12 17:14:08.035665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.422 [2024-07-12 17:14:08.035680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.422 [2024-07-12 17:14:08.035693] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.422 [2024-07-12 17:14:08.035723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.422 qpair failed and we were unable to recover it. 00:25:08.422 [2024-07-12 17:14:08.045641] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.422 [2024-07-12 17:14:08.045768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.422 [2024-07-12 17:14:08.045794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.422 [2024-07-12 17:14:08.045809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.422 [2024-07-12 17:14:08.045822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.422 [2024-07-12 17:14:08.045853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.422 qpair failed and we were unable to recover it. 
00:25:08.422 [2024-07-12 17:14:08.055728] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.422 [2024-07-12 17:14:08.055844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.422 [2024-07-12 17:14:08.055871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.422 [2024-07-12 17:14:08.055886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.422 [2024-07-12 17:14:08.055899] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.422 [2024-07-12 17:14:08.055930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.422 qpair failed and we were unable to recover it. 00:25:08.422 [2024-07-12 17:14:08.065615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.422 [2024-07-12 17:14:08.065705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.422 [2024-07-12 17:14:08.065731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.422 [2024-07-12 17:14:08.065768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.422 [2024-07-12 17:14:08.065783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.422 [2024-07-12 17:14:08.065814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.422 qpair failed and we were unable to recover it. 00:25:08.422 [2024-07-12 17:14:08.075661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.422 [2024-07-12 17:14:08.075776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.422 [2024-07-12 17:14:08.075801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.422 [2024-07-12 17:14:08.075816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.422 [2024-07-12 17:14:08.075828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.422 [2024-07-12 17:14:08.075859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.422 qpair failed and we were unable to recover it. 
00:25:08.422 [2024-07-12 17:14:08.085692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.422 [2024-07-12 17:14:08.085806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.422 [2024-07-12 17:14:08.085832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.422 [2024-07-12 17:14:08.085847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.422 [2024-07-12 17:14:08.085865] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.422 [2024-07-12 17:14:08.085896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.422 qpair failed and we were unable to recover it. 00:25:08.422 [2024-07-12 17:14:08.095761] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.422 [2024-07-12 17:14:08.095868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.422 [2024-07-12 17:14:08.095893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.422 [2024-07-12 17:14:08.095907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.422 [2024-07-12 17:14:08.095920] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.422 [2024-07-12 17:14:08.095951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.422 qpair failed and we were unable to recover it. 00:25:08.422 [2024-07-12 17:14:08.105814] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.422 [2024-07-12 17:14:08.105925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.422 [2024-07-12 17:14:08.105950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.422 [2024-07-12 17:14:08.105965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.422 [2024-07-12 17:14:08.105977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.422 [2024-07-12 17:14:08.106019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.422 qpair failed and we were unable to recover it. 
00:25:08.681 [2024-07-12 17:14:08.115855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.681 [2024-07-12 17:14:08.115956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.681 [2024-07-12 17:14:08.115991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.681 [2024-07-12 17:14:08.116006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.681 [2024-07-12 17:14:08.116019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.681 [2024-07-12 17:14:08.116065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.681 qpair failed and we were unable to recover it. 00:25:08.681 [2024-07-12 17:14:08.125824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.681 [2024-07-12 17:14:08.125971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.681 [2024-07-12 17:14:08.125999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.681 [2024-07-12 17:14:08.126014] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.681 [2024-07-12 17:14:08.126027] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.681 [2024-07-12 17:14:08.126074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.682 qpair failed and we were unable to recover it. 00:25:08.682 [2024-07-12 17:14:08.135830] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.682 [2024-07-12 17:14:08.135921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.682 [2024-07-12 17:14:08.135948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.682 [2024-07-12 17:14:08.135963] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.682 [2024-07-12 17:14:08.135976] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.682 [2024-07-12 17:14:08.136007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.682 qpair failed and we were unable to recover it. 
00:25:08.682 [2024-07-12 17:14:08.145936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.682 [2024-07-12 17:14:08.146054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.682 [2024-07-12 17:14:08.146078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.682 [2024-07-12 17:14:08.146093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.682 [2024-07-12 17:14:08.146106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.682 [2024-07-12 17:14:08.146135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.682 qpair failed and we were unable to recover it. 00:25:08.682 [2024-07-12 17:14:08.155993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.682 [2024-07-12 17:14:08.156119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.682 [2024-07-12 17:14:08.156146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.682 [2024-07-12 17:14:08.156160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.682 [2024-07-12 17:14:08.156172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.682 [2024-07-12 17:14:08.156201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.682 qpair failed and we were unable to recover it. 00:25:08.682 [2024-07-12 17:14:08.165947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.682 [2024-07-12 17:14:08.166072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.682 [2024-07-12 17:14:08.166097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.682 [2024-07-12 17:14:08.166112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.682 [2024-07-12 17:14:08.166124] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.682 [2024-07-12 17:14:08.166154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.682 qpair failed and we were unable to recover it. 
00:25:08.682 [2024-07-12 17:14:08.175962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.682 [2024-07-12 17:14:08.176065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.682 [2024-07-12 17:14:08.176088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.682 [2024-07-12 17:14:08.176108] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.682 [2024-07-12 17:14:08.176129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.682 [2024-07-12 17:14:08.176159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.682 qpair failed and we were unable to recover it. 00:25:08.682 [2024-07-12 17:14:08.186049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.682 [2024-07-12 17:14:08.186156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.682 [2024-07-12 17:14:08.186180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.682 [2024-07-12 17:14:08.186194] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.682 [2024-07-12 17:14:08.186206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.682 [2024-07-12 17:14:08.186235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.682 qpair failed and we were unable to recover it. 00:25:08.682 [2024-07-12 17:14:08.196038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.682 [2024-07-12 17:14:08.196149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.682 [2024-07-12 17:14:08.196174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.682 [2024-07-12 17:14:08.196189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.682 [2024-07-12 17:14:08.196202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.682 [2024-07-12 17:14:08.196233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.682 qpair failed and we were unable to recover it. 
00:25:08.682 [2024-07-12 17:14:08.206068] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.682 [2024-07-12 17:14:08.206156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.682 [2024-07-12 17:14:08.206180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.682 [2024-07-12 17:14:08.206194] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.682 [2024-07-12 17:14:08.206206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.682 [2024-07-12 17:14:08.206236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.682 qpair failed and we were unable to recover it. 00:25:08.682 [2024-07-12 17:14:08.216076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.682 [2024-07-12 17:14:08.216163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.682 [2024-07-12 17:14:08.216187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.682 [2024-07-12 17:14:08.216202] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.682 [2024-07-12 17:14:08.216214] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.682 [2024-07-12 17:14:08.216243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.682 qpair failed and we were unable to recover it. 00:25:08.682 [2024-07-12 17:14:08.226105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.682 [2024-07-12 17:14:08.226218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.682 [2024-07-12 17:14:08.226243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.682 [2024-07-12 17:14:08.226258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.682 [2024-07-12 17:14:08.226270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.682 [2024-07-12 17:14:08.226300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.682 qpair failed and we were unable to recover it. 
00:25:08.682 [2024-07-12 17:14:08.236131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.682 [2024-07-12 17:14:08.236222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.682 [2024-07-12 17:14:08.236245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.682 [2024-07-12 17:14:08.236259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.682 [2024-07-12 17:14:08.236271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.682 [2024-07-12 17:14:08.236300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.682 qpair failed and we were unable to recover it. 00:25:08.682 [2024-07-12 17:14:08.246169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.682 [2024-07-12 17:14:08.246261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.682 [2024-07-12 17:14:08.246285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.682 [2024-07-12 17:14:08.246300] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.682 [2024-07-12 17:14:08.246312] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.682 [2024-07-12 17:14:08.246341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.682 qpair failed and we were unable to recover it. 00:25:08.682 [2024-07-12 17:14:08.256205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.682 [2024-07-12 17:14:08.256344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.682 [2024-07-12 17:14:08.256370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.682 [2024-07-12 17:14:08.256385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.682 [2024-07-12 17:14:08.256397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.682 [2024-07-12 17:14:08.256426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.682 qpair failed and we were unable to recover it. 
00:25:08.682 [2024-07-12 17:14:08.266223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.683 [2024-07-12 17:14:08.266335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.683 [2024-07-12 17:14:08.266365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.683 [2024-07-12 17:14:08.266380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.683 [2024-07-12 17:14:08.266393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.683 [2024-07-12 17:14:08.266422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.683 qpair failed and we were unable to recover it. 00:25:08.683 [2024-07-12 17:14:08.276247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.683 [2024-07-12 17:14:08.276342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.683 [2024-07-12 17:14:08.276366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.683 [2024-07-12 17:14:08.276380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.683 [2024-07-12 17:14:08.276392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.683 [2024-07-12 17:14:08.276421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.683 qpair failed and we were unable to recover it. 00:25:08.683 [2024-07-12 17:14:08.286278] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.683 [2024-07-12 17:14:08.286376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.683 [2024-07-12 17:14:08.286402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.683 [2024-07-12 17:14:08.286416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.683 [2024-07-12 17:14:08.286428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.683 [2024-07-12 17:14:08.286457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.683 qpair failed and we were unable to recover it. 
00:25:08.683 [2024-07-12 17:14:08.296279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.683 [2024-07-12 17:14:08.296366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.683 [2024-07-12 17:14:08.296390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.683 [2024-07-12 17:14:08.296404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.683 [2024-07-12 17:14:08.296417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.683 [2024-07-12 17:14:08.296445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.683 qpair failed and we were unable to recover it. 00:25:08.683 [2024-07-12 17:14:08.306325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.683 [2024-07-12 17:14:08.306418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.683 [2024-07-12 17:14:08.306442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.683 [2024-07-12 17:14:08.306457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.683 [2024-07-12 17:14:08.306469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.683 [2024-07-12 17:14:08.306505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.683 qpair failed and we were unable to recover it. 00:25:08.683 [2024-07-12 17:14:08.316358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.683 [2024-07-12 17:14:08.316452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.683 [2024-07-12 17:14:08.316477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.683 [2024-07-12 17:14:08.316491] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.683 [2024-07-12 17:14:08.316503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.683 [2024-07-12 17:14:08.316532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.683 qpair failed and we were unable to recover it. 
00:25:08.683 [2024-07-12 17:14:08.326349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.683 [2024-07-12 17:14:08.326478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.683 [2024-07-12 17:14:08.326504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.683 [2024-07-12 17:14:08.326519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.683 [2024-07-12 17:14:08.326530] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.683 [2024-07-12 17:14:08.326560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.683 qpair failed and we were unable to recover it. 00:25:08.683 [2024-07-12 17:14:08.336411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.683 [2024-07-12 17:14:08.336527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.683 [2024-07-12 17:14:08.336552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.683 [2024-07-12 17:14:08.336567] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.683 [2024-07-12 17:14:08.336579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.683 [2024-07-12 17:14:08.336609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.683 qpair failed and we were unable to recover it. 00:25:08.683 [2024-07-12 17:14:08.346398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.683 [2024-07-12 17:14:08.346493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.683 [2024-07-12 17:14:08.346519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.683 [2024-07-12 17:14:08.346534] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.683 [2024-07-12 17:14:08.346545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.683 [2024-07-12 17:14:08.346574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.683 qpair failed and we were unable to recover it. 
00:25:08.683 [2024-07-12 17:14:08.356486] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.683 [2024-07-12 17:14:08.356590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.683 [2024-07-12 17:14:08.356620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.683 [2024-07-12 17:14:08.356635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.683 [2024-07-12 17:14:08.356648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.683 [2024-07-12 17:14:08.356677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.683 qpair failed and we were unable to recover it. 00:25:08.683 [2024-07-12 17:14:08.366553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.683 [2024-07-12 17:14:08.366652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.683 [2024-07-12 17:14:08.366677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.683 [2024-07-12 17:14:08.366691] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.683 [2024-07-12 17:14:08.366703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.683 [2024-07-12 17:14:08.366755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.683 qpair failed and we were unable to recover it. 00:25:08.942 [2024-07-12 17:14:08.376573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.942 [2024-07-12 17:14:08.376686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.942 [2024-07-12 17:14:08.376712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.942 [2024-07-12 17:14:08.376750] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.943 [2024-07-12 17:14:08.376764] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.943 [2024-07-12 17:14:08.376795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.943 qpair failed and we were unable to recover it. 
00:25:08.943 [2024-07-12 17:14:08.386501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.943 [2024-07-12 17:14:08.386598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.943 [2024-07-12 17:14:08.386623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.943 [2024-07-12 17:14:08.386637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.943 [2024-07-12 17:14:08.386649] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.943 [2024-07-12 17:14:08.386678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.943 qpair failed and we were unable to recover it. 00:25:08.943 [2024-07-12 17:14:08.396626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.943 [2024-07-12 17:14:08.396722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.943 [2024-07-12 17:14:08.396754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.943 [2024-07-12 17:14:08.396770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.943 [2024-07-12 17:14:08.396787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.943 [2024-07-12 17:14:08.396819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.943 qpair failed and we were unable to recover it. 00:25:08.943 [2024-07-12 17:14:08.406567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.943 [2024-07-12 17:14:08.406658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.943 [2024-07-12 17:14:08.406682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.943 [2024-07-12 17:14:08.406697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.943 [2024-07-12 17:14:08.406709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.943 [2024-07-12 17:14:08.406763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.943 qpair failed and we were unable to recover it. 
00:25:08.943 [2024-07-12 17:14:08.416650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.943 [2024-07-12 17:14:08.416772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.943 [2024-07-12 17:14:08.416798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.943 [2024-07-12 17:14:08.416814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.943 [2024-07-12 17:14:08.416827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.943 [2024-07-12 17:14:08.416856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.943 qpair failed and we were unable to recover it. 00:25:08.943 [2024-07-12 17:14:08.426669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.943 [2024-07-12 17:14:08.426784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.943 [2024-07-12 17:14:08.426810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.943 [2024-07-12 17:14:08.426825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.943 [2024-07-12 17:14:08.426838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.943 [2024-07-12 17:14:08.426868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.943 qpair failed and we were unable to recover it. 00:25:08.943 [2024-07-12 17:14:08.436734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.943 [2024-07-12 17:14:08.436867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.943 [2024-07-12 17:14:08.436894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.943 [2024-07-12 17:14:08.436909] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.943 [2024-07-12 17:14:08.436922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.943 [2024-07-12 17:14:08.436952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.943 qpair failed and we were unable to recover it. 
00:25:08.943 [2024-07-12 17:14:08.446722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.943 [2024-07-12 17:14:08.446874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.943 [2024-07-12 17:14:08.446901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.943 [2024-07-12 17:14:08.446916] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.943 [2024-07-12 17:14:08.446928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.943 [2024-07-12 17:14:08.446958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.943 qpair failed and we were unable to recover it. 00:25:08.943 [2024-07-12 17:14:08.456769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.943 [2024-07-12 17:14:08.456863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.943 [2024-07-12 17:14:08.456888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.943 [2024-07-12 17:14:08.456903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.943 [2024-07-12 17:14:08.456916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.943 [2024-07-12 17:14:08.456946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.943 qpair failed and we were unable to recover it. 00:25:08.943 [2024-07-12 17:14:08.466763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.943 [2024-07-12 17:14:08.466852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.943 [2024-07-12 17:14:08.466876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.943 [2024-07-12 17:14:08.466891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.943 [2024-07-12 17:14:08.466904] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.943 [2024-07-12 17:14:08.466934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.943 qpair failed and we were unable to recover it. 
00:25:08.943 [2024-07-12 17:14:08.476801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.943 [2024-07-12 17:14:08.476902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.943 [2024-07-12 17:14:08.476926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.943 [2024-07-12 17:14:08.476942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.943 [2024-07-12 17:14:08.476954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.943 [2024-07-12 17:14:08.476985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.943 qpair failed and we were unable to recover it. 00:25:08.943 [2024-07-12 17:14:08.486826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.943 [2024-07-12 17:14:08.486925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.943 [2024-07-12 17:14:08.486951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.943 [2024-07-12 17:14:08.486966] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.943 [2024-07-12 17:14:08.486983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.943 [2024-07-12 17:14:08.487028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.943 qpair failed and we were unable to recover it. 00:25:08.943 [2024-07-12 17:14:08.496846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.943 [2024-07-12 17:14:08.496944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.943 [2024-07-12 17:14:08.496970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.943 [2024-07-12 17:14:08.496985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.943 [2024-07-12 17:14:08.496997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.943 [2024-07-12 17:14:08.497028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.943 qpair failed and we were unable to recover it. 
00:25:08.943 [2024-07-12 17:14:08.506951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.943 [2024-07-12 17:14:08.507050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.943 [2024-07-12 17:14:08.507074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.943 [2024-07-12 17:14:08.507088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.943 [2024-07-12 17:14:08.507101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.943 [2024-07-12 17:14:08.507130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.943 qpair failed and we were unable to recover it. 00:25:08.944 [2024-07-12 17:14:08.516924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.944 [2024-07-12 17:14:08.517035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.944 [2024-07-12 17:14:08.517059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.944 [2024-07-12 17:14:08.517073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.944 [2024-07-12 17:14:08.517086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.944 [2024-07-12 17:14:08.517115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.944 qpair failed and we were unable to recover it. 00:25:08.944 [2024-07-12 17:14:08.526956] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.944 [2024-07-12 17:14:08.527073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.944 [2024-07-12 17:14:08.527099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.944 [2024-07-12 17:14:08.527114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.944 [2024-07-12 17:14:08.527126] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.944 [2024-07-12 17:14:08.527156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.944 qpair failed and we were unable to recover it. 
00:25:08.944 [2024-07-12 17:14:08.536969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.944 [2024-07-12 17:14:08.537076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.944 [2024-07-12 17:14:08.537101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.944 [2024-07-12 17:14:08.537115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.944 [2024-07-12 17:14:08.537128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.944 [2024-07-12 17:14:08.537157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.944 qpair failed and we were unable to recover it. 00:25:08.944 [2024-07-12 17:14:08.546990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.944 [2024-07-12 17:14:08.547091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.944 [2024-07-12 17:14:08.547117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.944 [2024-07-12 17:14:08.547132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.944 [2024-07-12 17:14:08.547144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.944 [2024-07-12 17:14:08.547173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.944 qpair failed and we were unable to recover it. 00:25:08.944 [2024-07-12 17:14:08.557021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.944 [2024-07-12 17:14:08.557139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.944 [2024-07-12 17:14:08.557164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.944 [2024-07-12 17:14:08.557179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.944 [2024-07-12 17:14:08.557191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.944 [2024-07-12 17:14:08.557221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.944 qpair failed and we were unable to recover it. 
00:25:08.944 [2024-07-12 17:14:08.567038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.944 [2024-07-12 17:14:08.567129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.944 [2024-07-12 17:14:08.567153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.944 [2024-07-12 17:14:08.567167] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.944 [2024-07-12 17:14:08.567179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.944 [2024-07-12 17:14:08.567209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.944 qpair failed and we were unable to recover it. 00:25:08.944 [2024-07-12 17:14:08.577133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.944 [2024-07-12 17:14:08.577234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.944 [2024-07-12 17:14:08.577259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.944 [2024-07-12 17:14:08.577280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.944 [2024-07-12 17:14:08.577293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.944 [2024-07-12 17:14:08.577323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.944 qpair failed and we were unable to recover it. 00:25:08.944 [2024-07-12 17:14:08.587097] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.944 [2024-07-12 17:14:08.587184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.944 [2024-07-12 17:14:08.587208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.944 [2024-07-12 17:14:08.587222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.944 [2024-07-12 17:14:08.587234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.944 [2024-07-12 17:14:08.587263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.944 qpair failed and we were unable to recover it. 
00:25:08.944 [2024-07-12 17:14:08.597183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.944 [2024-07-12 17:14:08.597282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.944 [2024-07-12 17:14:08.597305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.944 [2024-07-12 17:14:08.597320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.944 [2024-07-12 17:14:08.597332] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.944 [2024-07-12 17:14:08.597361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.944 qpair failed and we were unable to recover it. 00:25:08.944 [2024-07-12 17:14:08.607164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.944 [2024-07-12 17:14:08.607265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.944 [2024-07-12 17:14:08.607289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.944 [2024-07-12 17:14:08.607303] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.944 [2024-07-12 17:14:08.607315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.944 [2024-07-12 17:14:08.607344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.944 qpair failed and we were unable to recover it. 00:25:08.944 [2024-07-12 17:14:08.617195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.944 [2024-07-12 17:14:08.617282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.944 [2024-07-12 17:14:08.617306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.944 [2024-07-12 17:14:08.617320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.944 [2024-07-12 17:14:08.617332] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.944 [2024-07-12 17:14:08.617361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.944 qpair failed and we were unable to recover it. 
00:25:08.944 [2024-07-12 17:14:08.627241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.944 [2024-07-12 17:14:08.627332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.944 [2024-07-12 17:14:08.627355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.944 [2024-07-12 17:14:08.627369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.944 [2024-07-12 17:14:08.627382] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:08.944 [2024-07-12 17:14:08.627410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:08.944 qpair failed and we were unable to recover it. 00:25:09.203 [2024-07-12 17:14:08.637258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.203 [2024-07-12 17:14:08.637349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.203 [2024-07-12 17:14:08.637374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.203 [2024-07-12 17:14:08.637388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.203 [2024-07-12 17:14:08.637401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.203 [2024-07-12 17:14:08.637430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.203 qpair failed and we were unable to recover it. 00:25:09.203 [2024-07-12 17:14:08.647269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.203 [2024-07-12 17:14:08.647389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.203 [2024-07-12 17:14:08.647414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.203 [2024-07-12 17:14:08.647429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.204 [2024-07-12 17:14:08.647441] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.204 [2024-07-12 17:14:08.647470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.204 qpair failed and we were unable to recover it. 
00:25:09.204 [2024-07-12 17:14:08.657342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.204 [2024-07-12 17:14:08.657443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.204 [2024-07-12 17:14:08.657468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.204 [2024-07-12 17:14:08.657483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.204 [2024-07-12 17:14:08.657495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.204 [2024-07-12 17:14:08.657524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.204 qpair failed and we were unable to recover it. 00:25:09.204 [2024-07-12 17:14:08.667335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.204 [2024-07-12 17:14:08.667423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.204 [2024-07-12 17:14:08.667452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.204 [2024-07-12 17:14:08.667467] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.204 [2024-07-12 17:14:08.667480] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.204 [2024-07-12 17:14:08.667509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.204 qpair failed and we were unable to recover it. 00:25:09.204 [2024-07-12 17:14:08.677364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.204 [2024-07-12 17:14:08.677458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.204 [2024-07-12 17:14:08.677481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.204 [2024-07-12 17:14:08.677496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.204 [2024-07-12 17:14:08.677508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.204 [2024-07-12 17:14:08.677538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.204 qpair failed and we were unable to recover it. 
00:25:09.204 [2024-07-12 17:14:08.687418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.204 [2024-07-12 17:14:08.687528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.204 [2024-07-12 17:14:08.687555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.204 [2024-07-12 17:14:08.687571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.204 [2024-07-12 17:14:08.687583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.204 [2024-07-12 17:14:08.687613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.204 qpair failed and we were unable to recover it. 00:25:09.204 [2024-07-12 17:14:08.697381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.204 [2024-07-12 17:14:08.697464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.204 [2024-07-12 17:14:08.697488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.204 [2024-07-12 17:14:08.697502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.204 [2024-07-12 17:14:08.697514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.204 [2024-07-12 17:14:08.697543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.204 qpair failed and we were unable to recover it. 00:25:09.204 [2024-07-12 17:14:08.707438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.204 [2024-07-12 17:14:08.707521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.204 [2024-07-12 17:14:08.707546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.204 [2024-07-12 17:14:08.707561] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.204 [2024-07-12 17:14:08.707573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.204 [2024-07-12 17:14:08.707608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.204 qpair failed and we were unable to recover it. 
00:25:09.204 [2024-07-12 17:14:08.717476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.204 [2024-07-12 17:14:08.717584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.204 [2024-07-12 17:14:08.717610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.204 [2024-07-12 17:14:08.717625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.204 [2024-07-12 17:14:08.717637] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.204 [2024-07-12 17:14:08.717666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.204 qpair failed and we were unable to recover it. 00:25:09.204 [2024-07-12 17:14:08.727460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.204 [2024-07-12 17:14:08.727550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.204 [2024-07-12 17:14:08.727574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.204 [2024-07-12 17:14:08.727589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.204 [2024-07-12 17:14:08.727601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.204 [2024-07-12 17:14:08.727630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.204 qpair failed and we were unable to recover it. 00:25:09.204 [2024-07-12 17:14:08.737484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.204 [2024-07-12 17:14:08.737593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.204 [2024-07-12 17:14:08.737618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.204 [2024-07-12 17:14:08.737633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.204 [2024-07-12 17:14:08.737645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.204 [2024-07-12 17:14:08.737674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.204 qpair failed and we were unable to recover it. 
00:25:09.204 [2024-07-12 17:14:08.747546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.204 [2024-07-12 17:14:08.747631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.204 [2024-07-12 17:14:08.747655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.204 [2024-07-12 17:14:08.747669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.204 [2024-07-12 17:14:08.747681] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.204 [2024-07-12 17:14:08.747710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.204 qpair failed and we were unable to recover it. 00:25:09.204 [2024-07-12 17:14:08.757589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.204 [2024-07-12 17:14:08.757699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.204 [2024-07-12 17:14:08.757752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.204 [2024-07-12 17:14:08.757770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.204 [2024-07-12 17:14:08.757783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.204 [2024-07-12 17:14:08.757814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.204 qpair failed and we were unable to recover it. 00:25:09.204 [2024-07-12 17:14:08.767611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.204 [2024-07-12 17:14:08.767699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.204 [2024-07-12 17:14:08.767744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.204 [2024-07-12 17:14:08.767762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.204 [2024-07-12 17:14:08.767775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.204 [2024-07-12 17:14:08.767806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.204 qpair failed and we were unable to recover it. 
00:25:09.204 [2024-07-12 17:14:08.777601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.204 [2024-07-12 17:14:08.777692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.204 [2024-07-12 17:14:08.777716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.204 [2024-07-12 17:14:08.777755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.204 [2024-07-12 17:14:08.777768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.204 [2024-07-12 17:14:08.777799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.204 qpair failed and we were unable to recover it. 00:25:09.204 [2024-07-12 17:14:08.787671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.204 [2024-07-12 17:14:08.787795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.204 [2024-07-12 17:14:08.787821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.204 [2024-07-12 17:14:08.787837] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.204 [2024-07-12 17:14:08.787849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.204 [2024-07-12 17:14:08.787880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.204 qpair failed and we were unable to recover it. 00:25:09.204 [2024-07-12 17:14:08.797660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.204 [2024-07-12 17:14:08.797811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.204 [2024-07-12 17:14:08.797838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.204 [2024-07-12 17:14:08.797853] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.204 [2024-07-12 17:14:08.797865] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.204 [2024-07-12 17:14:08.797901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.204 qpair failed and we were unable to recover it. 
00:25:09.204 [2024-07-12 17:14:08.807746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.204 [2024-07-12 17:14:08.807888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.204 [2024-07-12 17:14:08.807915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.204 [2024-07-12 17:14:08.807930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.204 [2024-07-12 17:14:08.807942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.204 [2024-07-12 17:14:08.807973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.204 qpair failed and we were unable to recover it. 00:25:09.204 [2024-07-12 17:14:08.817793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.204 [2024-07-12 17:14:08.817890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.204 [2024-07-12 17:14:08.817915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.204 [2024-07-12 17:14:08.817930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.204 [2024-07-12 17:14:08.817942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.204 [2024-07-12 17:14:08.817973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.204 qpair failed and we were unable to recover it. 00:25:09.204 [2024-07-12 17:14:08.827760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.204 [2024-07-12 17:14:08.827846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.204 [2024-07-12 17:14:08.827872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.204 [2024-07-12 17:14:08.827888] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.204 [2024-07-12 17:14:08.827900] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.204 [2024-07-12 17:14:08.827931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.204 qpair failed and we were unable to recover it. 
00:25:09.204 [2024-07-12 17:14:08.837810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.204 [2024-07-12 17:14:08.837904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.204 [2024-07-12 17:14:08.837929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.204 [2024-07-12 17:14:08.837944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.204 [2024-07-12 17:14:08.837956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.204 [2024-07-12 17:14:08.837986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.204 qpair failed and we were unable to recover it. 00:25:09.204 [2024-07-12 17:14:08.847811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.204 [2024-07-12 17:14:08.847919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.204 [2024-07-12 17:14:08.847944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.204 [2024-07-12 17:14:08.847959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.204 [2024-07-12 17:14:08.847971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.205 [2024-07-12 17:14:08.848002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.205 qpair failed and we were unable to recover it. 00:25:09.205 [2024-07-12 17:14:08.857833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.205 [2024-07-12 17:14:08.857924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.205 [2024-07-12 17:14:08.857949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.205 [2024-07-12 17:14:08.857964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.205 [2024-07-12 17:14:08.857976] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.205 [2024-07-12 17:14:08.858007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.205 qpair failed and we were unable to recover it. 
00:25:09.205 [2024-07-12 17:14:08.867957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.205 [2024-07-12 17:14:08.868079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.205 [2024-07-12 17:14:08.868105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.205 [2024-07-12 17:14:08.868120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.205 [2024-07-12 17:14:08.868132] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.205 [2024-07-12 17:14:08.868161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.205 qpair failed and we were unable to recover it. 00:25:09.205 [2024-07-12 17:14:08.877968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.205 [2024-07-12 17:14:08.878076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.205 [2024-07-12 17:14:08.878100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.205 [2024-07-12 17:14:08.878115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.205 [2024-07-12 17:14:08.878126] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.205 [2024-07-12 17:14:08.878155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.205 qpair failed and we were unable to recover it. 00:25:09.205 [2024-07-12 17:14:08.887953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.205 [2024-07-12 17:14:08.888074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.205 [2024-07-12 17:14:08.888099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.205 [2024-07-12 17:14:08.888114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.205 [2024-07-12 17:14:08.888131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.205 [2024-07-12 17:14:08.888161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.205 qpair failed and we were unable to recover it. 
00:25:09.464 [2024-07-12 17:14:08.897964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.465 [2024-07-12 17:14:08.898073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.465 [2024-07-12 17:14:08.898098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.465 [2024-07-12 17:14:08.898112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.465 [2024-07-12 17:14:08.898125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.465 [2024-07-12 17:14:08.898155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.465 qpair failed and we were unable to recover it. 00:25:09.465 [2024-07-12 17:14:08.908022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.465 [2024-07-12 17:14:08.908121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.465 [2024-07-12 17:14:08.908146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.465 [2024-07-12 17:14:08.908160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.465 [2024-07-12 17:14:08.908172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.465 [2024-07-12 17:14:08.908202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.465 qpair failed and we were unable to recover it. 00:25:09.465 [2024-07-12 17:14:08.918054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.465 [2024-07-12 17:14:08.918201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.465 [2024-07-12 17:14:08.918227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.465 [2024-07-12 17:14:08.918241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.465 [2024-07-12 17:14:08.918254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.465 [2024-07-12 17:14:08.918283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.465 qpair failed and we were unable to recover it. 
00:25:09.465 [2024-07-12 17:14:08.928070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.465 [2024-07-12 17:14:08.928178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.465 [2024-07-12 17:14:08.928203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.465 [2024-07-12 17:14:08.928218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.465 [2024-07-12 17:14:08.928230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.465 [2024-07-12 17:14:08.928259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.465 qpair failed and we were unable to recover it. 00:25:09.465 [2024-07-12 17:14:08.938096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.465 [2024-07-12 17:14:08.938186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.465 [2024-07-12 17:14:08.938210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.465 [2024-07-12 17:14:08.938224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.465 [2024-07-12 17:14:08.938236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.465 [2024-07-12 17:14:08.938265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.465 qpair failed and we were unable to recover it. 00:25:09.465 [2024-07-12 17:14:08.948229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.465 [2024-07-12 17:14:08.948316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.465 [2024-07-12 17:14:08.948340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.465 [2024-07-12 17:14:08.948355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.465 [2024-07-12 17:14:08.948367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.465 [2024-07-12 17:14:08.948396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.465 qpair failed and we were unable to recover it. 
00:25:09.465 [2024-07-12 17:14:08.958170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.465 [2024-07-12 17:14:08.958277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.465 [2024-07-12 17:14:08.958301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.465 [2024-07-12 17:14:08.958315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.465 [2024-07-12 17:14:08.958327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.465 [2024-07-12 17:14:08.958355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.465 qpair failed and we were unable to recover it. 00:25:09.465 [2024-07-12 17:14:08.968178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.465 [2024-07-12 17:14:08.968270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.465 [2024-07-12 17:14:08.968294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.465 [2024-07-12 17:14:08.968308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.465 [2024-07-12 17:14:08.968319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.465 [2024-07-12 17:14:08.968348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.465 qpair failed and we were unable to recover it. 00:25:09.465 [2024-07-12 17:14:08.978198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.465 [2024-07-12 17:14:08.978287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.465 [2024-07-12 17:14:08.978311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.465 [2024-07-12 17:14:08.978331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.465 [2024-07-12 17:14:08.978344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.465 [2024-07-12 17:14:08.978373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.465 qpair failed and we were unable to recover it. 
00:25:09.465 [2024-07-12 17:14:08.988253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.465 [2024-07-12 17:14:08.988362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.465 [2024-07-12 17:14:08.988388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.465 [2024-07-12 17:14:08.988403] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.465 [2024-07-12 17:14:08.988415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.465 [2024-07-12 17:14:08.988443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.465 qpair failed and we were unable to recover it. 00:25:09.465 [2024-07-12 17:14:08.998303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.465 [2024-07-12 17:14:08.998425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.465 [2024-07-12 17:14:08.998451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.465 [2024-07-12 17:14:08.998465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.465 [2024-07-12 17:14:08.998478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.465 [2024-07-12 17:14:08.998507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.465 qpair failed and we were unable to recover it. 00:25:09.465 [2024-07-12 17:14:09.008316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.465 [2024-07-12 17:14:09.008453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.465 [2024-07-12 17:14:09.008479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.465 [2024-07-12 17:14:09.008494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.465 [2024-07-12 17:14:09.008506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.465 [2024-07-12 17:14:09.008537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.465 qpair failed and we were unable to recover it. 
00:25:09.465 [2024-07-12 17:14:09.018280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.465 [2024-07-12 17:14:09.018399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.465 [2024-07-12 17:14:09.018425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.465 [2024-07-12 17:14:09.018440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.465 [2024-07-12 17:14:09.018452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.465 [2024-07-12 17:14:09.018481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.465 qpair failed and we were unable to recover it. 00:25:09.465 [2024-07-12 17:14:09.028354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.465 [2024-07-12 17:14:09.028439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.465 [2024-07-12 17:14:09.028463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.465 [2024-07-12 17:14:09.028478] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.465 [2024-07-12 17:14:09.028490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.465 [2024-07-12 17:14:09.028519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.465 qpair failed and we were unable to recover it. 00:25:09.465 [2024-07-12 17:14:09.038365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.465 [2024-07-12 17:14:09.038467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.465 [2024-07-12 17:14:09.038492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.466 [2024-07-12 17:14:09.038507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.466 [2024-07-12 17:14:09.038519] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.466 [2024-07-12 17:14:09.038549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.466 qpair failed and we were unable to recover it. 
00:25:09.466 [2024-07-12 17:14:09.048405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.466 [2024-07-12 17:14:09.048495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.466 [2024-07-12 17:14:09.048520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.466 [2024-07-12 17:14:09.048535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.466 [2024-07-12 17:14:09.048547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.466 [2024-07-12 17:14:09.048576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.466 qpair failed and we were unable to recover it. 00:25:09.466 [2024-07-12 17:14:09.058398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.466 [2024-07-12 17:14:09.058535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.466 [2024-07-12 17:14:09.058560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.466 [2024-07-12 17:14:09.058575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.466 [2024-07-12 17:14:09.058587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.466 [2024-07-12 17:14:09.058616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.466 qpair failed and we were unable to recover it. 00:25:09.466 [2024-07-12 17:14:09.068432] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.466 [2024-07-12 17:14:09.068520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.466 [2024-07-12 17:14:09.068544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.466 [2024-07-12 17:14:09.068563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.466 [2024-07-12 17:14:09.068576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.466 [2024-07-12 17:14:09.068605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.466 qpair failed and we were unable to recover it. 
00:25:09.466 [2024-07-12 17:14:09.078539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.466 [2024-07-12 17:14:09.078629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.466 [2024-07-12 17:14:09.078653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.466 [2024-07-12 17:14:09.078668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.466 [2024-07-12 17:14:09.078680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.466 [2024-07-12 17:14:09.078709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.466 qpair failed and we were unable to recover it. 00:25:09.466 [2024-07-12 17:14:09.088576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.466 [2024-07-12 17:14:09.088664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.466 [2024-07-12 17:14:09.088691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.466 [2024-07-12 17:14:09.088706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.466 [2024-07-12 17:14:09.088733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.466 [2024-07-12 17:14:09.088773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.466 qpair failed and we were unable to recover it. 00:25:09.466 [2024-07-12 17:14:09.098611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.466 [2024-07-12 17:14:09.098698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.466 [2024-07-12 17:14:09.098721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.466 [2024-07-12 17:14:09.098759] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.466 [2024-07-12 17:14:09.098776] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.466 [2024-07-12 17:14:09.098813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.466 qpair failed and we were unable to recover it. 
00:25:09.466 [2024-07-12 17:14:09.108579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.466 [2024-07-12 17:14:09.108664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.466 [2024-07-12 17:14:09.108688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.466 [2024-07-12 17:14:09.108703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.466 [2024-07-12 17:14:09.108715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.466 [2024-07-12 17:14:09.108772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.466 qpair failed and we were unable to recover it. 00:25:09.466 [2024-07-12 17:14:09.118603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.466 [2024-07-12 17:14:09.118697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.466 [2024-07-12 17:14:09.118743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.466 [2024-07-12 17:14:09.118762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.466 [2024-07-12 17:14:09.118775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.466 [2024-07-12 17:14:09.118806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.466 qpair failed and we were unable to recover it. 00:25:09.466 [2024-07-12 17:14:09.128630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.466 [2024-07-12 17:14:09.128735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.466 [2024-07-12 17:14:09.128773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.466 [2024-07-12 17:14:09.128788] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.466 [2024-07-12 17:14:09.128802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.466 [2024-07-12 17:14:09.128834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.466 qpair failed and we were unable to recover it. 
00:25:09.466 [2024-07-12 17:14:09.138676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.466 [2024-07-12 17:14:09.138800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.466 [2024-07-12 17:14:09.138828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.466 [2024-07-12 17:14:09.138844] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.466 [2024-07-12 17:14:09.138856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.466 [2024-07-12 17:14:09.138887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.466 qpair failed and we were unable to recover it. 00:25:09.466 [2024-07-12 17:14:09.148698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.466 [2024-07-12 17:14:09.148868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.466 [2024-07-12 17:14:09.148895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.466 [2024-07-12 17:14:09.148910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.466 [2024-07-12 17:14:09.148922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.466 [2024-07-12 17:14:09.148953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.466 qpair failed and we were unable to recover it. 00:25:09.725 [2024-07-12 17:14:09.158820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.725 [2024-07-12 17:14:09.158944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.725 [2024-07-12 17:14:09.158976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.725 [2024-07-12 17:14:09.158993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.725 [2024-07-12 17:14:09.159005] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.725 [2024-07-12 17:14:09.159051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.725 qpair failed and we were unable to recover it. 
00:25:09.725 [2024-07-12 17:14:09.168819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.725 [2024-07-12 17:14:09.168917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.725 [2024-07-12 17:14:09.168942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.725 [2024-07-12 17:14:09.168957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.725 [2024-07-12 17:14:09.168970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.725 [2024-07-12 17:14:09.169001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.725 qpair failed and we were unable to recover it. 00:25:09.725 [2024-07-12 17:14:09.178801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.725 [2024-07-12 17:14:09.178898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.725 [2024-07-12 17:14:09.178925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.725 [2024-07-12 17:14:09.178941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.725 [2024-07-12 17:14:09.178953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.725 [2024-07-12 17:14:09.178983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.725 qpair failed and we were unable to recover it. 00:25:09.725 [2024-07-12 17:14:09.188795] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.725 [2024-07-12 17:14:09.188911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.725 [2024-07-12 17:14:09.188938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.725 [2024-07-12 17:14:09.188953] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.725 [2024-07-12 17:14:09.188966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.725 [2024-07-12 17:14:09.188996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.725 qpair failed and we were unable to recover it. 
00:25:09.726 [2024-07-12 17:14:09.198839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.726 [2024-07-12 17:14:09.198945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.726 [2024-07-12 17:14:09.198970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.726 [2024-07-12 17:14:09.198984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.726 [2024-07-12 17:14:09.198996] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.726 [2024-07-12 17:14:09.199031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.726 qpair failed and we were unable to recover it. 00:25:09.726 [2024-07-12 17:14:09.208943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.726 [2024-07-12 17:14:09.209059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.726 [2024-07-12 17:14:09.209085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.726 [2024-07-12 17:14:09.209099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.726 [2024-07-12 17:14:09.209111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.726 [2024-07-12 17:14:09.209141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.726 qpair failed and we were unable to recover it. 00:25:09.726 [2024-07-12 17:14:09.218999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.726 [2024-07-12 17:14:09.219134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.726 [2024-07-12 17:14:09.219167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.726 [2024-07-12 17:14:09.219182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.726 [2024-07-12 17:14:09.219195] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.726 [2024-07-12 17:14:09.219224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.726 qpair failed and we were unable to recover it. 
00:25:09.726 [2024-07-12 17:14:09.228957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.726 [2024-07-12 17:14:09.229059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.726 [2024-07-12 17:14:09.229085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.726 [2024-07-12 17:14:09.229100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.726 [2024-07-12 17:14:09.229112] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.726 [2024-07-12 17:14:09.229141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.726 qpair failed and we were unable to recover it. 00:25:09.726 [2024-07-12 17:14:09.239006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.726 [2024-07-12 17:14:09.239115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.726 [2024-07-12 17:14:09.239140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.726 [2024-07-12 17:14:09.239155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.726 [2024-07-12 17:14:09.239167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.726 [2024-07-12 17:14:09.239197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.726 qpair failed and we were unable to recover it. 00:25:09.726 [2024-07-12 17:14:09.248980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.726 [2024-07-12 17:14:09.249066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.726 [2024-07-12 17:14:09.249097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.726 [2024-07-12 17:14:09.249128] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.726 [2024-07-12 17:14:09.249140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.726 [2024-07-12 17:14:09.249170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.726 qpair failed and we were unable to recover it. 
00:25:09.726 [2024-07-12 17:14:09.259052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.726 [2024-07-12 17:14:09.259145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.726 [2024-07-12 17:14:09.259168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.726 [2024-07-12 17:14:09.259182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.726 [2024-07-12 17:14:09.259194] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.726 [2024-07-12 17:14:09.259223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.726 qpair failed and we were unable to recover it. 00:25:09.726 [2024-07-12 17:14:09.269121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.726 [2024-07-12 17:14:09.269216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.726 [2024-07-12 17:14:09.269240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.726 [2024-07-12 17:14:09.269254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.726 [2024-07-12 17:14:09.269266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.726 [2024-07-12 17:14:09.269296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.726 qpair failed and we were unable to recover it. 00:25:09.726 [2024-07-12 17:14:09.279092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.726 [2024-07-12 17:14:09.279192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.726 [2024-07-12 17:14:09.279216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.726 [2024-07-12 17:14:09.279230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.726 [2024-07-12 17:14:09.279242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.726 [2024-07-12 17:14:09.279272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.726 qpair failed and we were unable to recover it. 
00:25:09.726 [2024-07-12 17:14:09.289125] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.726 [2024-07-12 17:14:09.289219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.726 [2024-07-12 17:14:09.289244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.726 [2024-07-12 17:14:09.289259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.726 [2024-07-12 17:14:09.289277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.726 [2024-07-12 17:14:09.289306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.726 qpair failed and we were unable to recover it. 00:25:09.726 [2024-07-12 17:14:09.299110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.726 [2024-07-12 17:14:09.299196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.726 [2024-07-12 17:14:09.299221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.726 [2024-07-12 17:14:09.299235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.726 [2024-07-12 17:14:09.299248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.726 [2024-07-12 17:14:09.299278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.726 qpair failed and we were unable to recover it. 00:25:09.726 [2024-07-12 17:14:09.309148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.726 [2024-07-12 17:14:09.309236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.726 [2024-07-12 17:14:09.309259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.726 [2024-07-12 17:14:09.309274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.726 [2024-07-12 17:14:09.309286] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.726 [2024-07-12 17:14:09.309315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.726 qpair failed and we were unable to recover it. 
00:25:09.726 [2024-07-12 17:14:09.319189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.726 [2024-07-12 17:14:09.319294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.726 [2024-07-12 17:14:09.319319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.726 [2024-07-12 17:14:09.319333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.726 [2024-07-12 17:14:09.319345] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.726 [2024-07-12 17:14:09.319375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.726 qpair failed and we were unable to recover it. 00:25:09.726 [2024-07-12 17:14:09.329215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.726 [2024-07-12 17:14:09.329333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.726 [2024-07-12 17:14:09.329359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.726 [2024-07-12 17:14:09.329374] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.726 [2024-07-12 17:14:09.329386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.726 [2024-07-12 17:14:09.329422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.726 qpair failed and we were unable to recover it. 00:25:09.726 [2024-07-12 17:14:09.339268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.726 [2024-07-12 17:14:09.339357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.726 [2024-07-12 17:14:09.339381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.727 [2024-07-12 17:14:09.339395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.727 [2024-07-12 17:14:09.339408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.727 [2024-07-12 17:14:09.339437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.727 qpair failed and we were unable to recover it. 
00:25:09.727 [2024-07-12 17:14:09.349302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.727 [2024-07-12 17:14:09.349398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.727 [2024-07-12 17:14:09.349423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.727 [2024-07-12 17:14:09.349437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.727 [2024-07-12 17:14:09.349450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.727 [2024-07-12 17:14:09.349479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.727 qpair failed and we were unable to recover it. 00:25:09.727 [2024-07-12 17:14:09.359363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.727 [2024-07-12 17:14:09.359454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.727 [2024-07-12 17:14:09.359478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.727 [2024-07-12 17:14:09.359493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.727 [2024-07-12 17:14:09.359505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.727 [2024-07-12 17:14:09.359534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.727 qpair failed and we were unable to recover it. 00:25:09.727 [2024-07-12 17:14:09.369387] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.727 [2024-07-12 17:14:09.369480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.727 [2024-07-12 17:14:09.369504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.727 [2024-07-12 17:14:09.369518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.727 [2024-07-12 17:14:09.369530] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.727 [2024-07-12 17:14:09.369560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.727 qpair failed and we were unable to recover it. 
00:25:09.727 [2024-07-12 17:14:09.379372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.727 [2024-07-12 17:14:09.379462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.727 [2024-07-12 17:14:09.379487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.727 [2024-07-12 17:14:09.379508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.727 [2024-07-12 17:14:09.379521] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.727 [2024-07-12 17:14:09.379550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.727 qpair failed and we were unable to recover it. 00:25:09.727 [2024-07-12 17:14:09.389444] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.727 [2024-07-12 17:14:09.389575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.727 [2024-07-12 17:14:09.389602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.727 [2024-07-12 17:14:09.389617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.727 [2024-07-12 17:14:09.389629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.727 [2024-07-12 17:14:09.389663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.727 qpair failed and we were unable to recover it. 00:25:09.727 [2024-07-12 17:14:09.399489] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.727 [2024-07-12 17:14:09.399581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.727 [2024-07-12 17:14:09.399606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.727 [2024-07-12 17:14:09.399621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.727 [2024-07-12 17:14:09.399634] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.727 [2024-07-12 17:14:09.399663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.727 qpair failed and we were unable to recover it. 
00:25:09.727 [2024-07-12 17:14:09.409487] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.727 [2024-07-12 17:14:09.409577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.727 [2024-07-12 17:14:09.409602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.727 [2024-07-12 17:14:09.409617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.727 [2024-07-12 17:14:09.409630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.727 [2024-07-12 17:14:09.409660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.727 qpair failed and we were unable to recover it. 00:25:09.985 [2024-07-12 17:14:09.419557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.985 [2024-07-12 17:14:09.419708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.985 [2024-07-12 17:14:09.419736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.985 [2024-07-12 17:14:09.419761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.985 [2024-07-12 17:14:09.419774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.985 [2024-07-12 17:14:09.419805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.986 qpair failed and we were unable to recover it. 00:25:09.986 [2024-07-12 17:14:09.429532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.986 [2024-07-12 17:14:09.429619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.986 [2024-07-12 17:14:09.429643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.986 [2024-07-12 17:14:09.429657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.986 [2024-07-12 17:14:09.429669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.986 [2024-07-12 17:14:09.429699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.986 qpair failed and we were unable to recover it. 
00:25:09.986 [2024-07-12 17:14:09.439609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.986 [2024-07-12 17:14:09.439702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.986 [2024-07-12 17:14:09.439750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.986 [2024-07-12 17:14:09.439768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.986 [2024-07-12 17:14:09.439780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.986 [2024-07-12 17:14:09.439812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.986 qpair failed and we were unable to recover it. 00:25:09.986 [2024-07-12 17:14:09.449609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.986 [2024-07-12 17:14:09.449764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.986 [2024-07-12 17:14:09.449790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.986 [2024-07-12 17:14:09.449811] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.986 [2024-07-12 17:14:09.449824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.986 [2024-07-12 17:14:09.449855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.986 qpair failed and we were unable to recover it. 00:25:09.986 [2024-07-12 17:14:09.459616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.986 [2024-07-12 17:14:09.459744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.986 [2024-07-12 17:14:09.459771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.986 [2024-07-12 17:14:09.459787] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.986 [2024-07-12 17:14:09.459799] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.986 [2024-07-12 17:14:09.459830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.986 qpair failed and we were unable to recover it. 
00:25:09.986 [2024-07-12 17:14:09.469611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.986 [2024-07-12 17:14:09.469699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.986 [2024-07-12 17:14:09.469744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.986 [2024-07-12 17:14:09.469767] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.986 [2024-07-12 17:14:09.469781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.986 [2024-07-12 17:14:09.469812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.986 qpair failed and we were unable to recover it. 00:25:09.986 [2024-07-12 17:14:09.479764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.986 [2024-07-12 17:14:09.479869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.986 [2024-07-12 17:14:09.479893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.986 [2024-07-12 17:14:09.479909] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.986 [2024-07-12 17:14:09.479921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.986 [2024-07-12 17:14:09.479951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.986 qpair failed and we were unable to recover it. 00:25:09.986 [2024-07-12 17:14:09.489759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.986 [2024-07-12 17:14:09.489858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.986 [2024-07-12 17:14:09.489883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.986 [2024-07-12 17:14:09.489898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.986 [2024-07-12 17:14:09.489910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.986 [2024-07-12 17:14:09.489941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.986 qpair failed and we were unable to recover it. 
00:25:09.986 [2024-07-12 17:14:09.499682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.986 [2024-07-12 17:14:09.499791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.986 [2024-07-12 17:14:09.499818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.986 [2024-07-12 17:14:09.499833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.986 [2024-07-12 17:14:09.499846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.986 [2024-07-12 17:14:09.499875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.986 qpair failed and we were unable to recover it. 00:25:09.986 [2024-07-12 17:14:09.509836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.986 [2024-07-12 17:14:09.509926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.986 [2024-07-12 17:14:09.509951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.986 [2024-07-12 17:14:09.509967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.986 [2024-07-12 17:14:09.509980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.986 [2024-07-12 17:14:09.510009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.986 qpair failed and we were unable to recover it. 00:25:09.986 [2024-07-12 17:14:09.519812] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.986 [2024-07-12 17:14:09.519930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.986 [2024-07-12 17:14:09.519955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.986 [2024-07-12 17:14:09.519970] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.986 [2024-07-12 17:14:09.519982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.986 [2024-07-12 17:14:09.520013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.986 qpair failed and we were unable to recover it. 
00:25:09.986 [2024-07-12 17:14:09.529792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.986 [2024-07-12 17:14:09.529886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.986 [2024-07-12 17:14:09.529911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.986 [2024-07-12 17:14:09.529926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.986 [2024-07-12 17:14:09.529939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.986 [2024-07-12 17:14:09.529970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.986 qpair failed and we were unable to recover it. 00:25:09.986 [2024-07-12 17:14:09.539817] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.986 [2024-07-12 17:14:09.539904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.986 [2024-07-12 17:14:09.539928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.986 [2024-07-12 17:14:09.539944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.986 [2024-07-12 17:14:09.539957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.986 [2024-07-12 17:14:09.539987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.986 qpair failed and we were unable to recover it. 00:25:09.986 [2024-07-12 17:14:09.549883] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.986 [2024-07-12 17:14:09.549975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.986 [2024-07-12 17:14:09.550000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.986 [2024-07-12 17:14:09.550015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.986 [2024-07-12 17:14:09.550028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.986 [2024-07-12 17:14:09.550059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.986 qpair failed and we were unable to recover it. 
00:25:09.986 [2024-07-12 17:14:09.559913] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.986 [2024-07-12 17:14:09.560011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.986 [2024-07-12 17:14:09.560053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.986 [2024-07-12 17:14:09.560069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.986 [2024-07-12 17:14:09.560082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.986 [2024-07-12 17:14:09.560112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.986 qpair failed and we were unable to recover it. 00:25:09.986 [2024-07-12 17:14:09.569920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.987 [2024-07-12 17:14:09.570033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.987 [2024-07-12 17:14:09.570058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.987 [2024-07-12 17:14:09.570072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.987 [2024-07-12 17:14:09.570085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.987 [2024-07-12 17:14:09.570115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.987 qpair failed and we were unable to recover it. 00:25:09.987 [2024-07-12 17:14:09.579963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.987 [2024-07-12 17:14:09.580065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.987 [2024-07-12 17:14:09.580089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.987 [2024-07-12 17:14:09.580104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.987 [2024-07-12 17:14:09.580116] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.987 [2024-07-12 17:14:09.580146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.987 qpair failed and we were unable to recover it. 
00:25:09.987 [2024-07-12 17:14:09.590018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.987 [2024-07-12 17:14:09.590125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.987 [2024-07-12 17:14:09.590149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.987 [2024-07-12 17:14:09.590165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.987 [2024-07-12 17:14:09.590178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.987 [2024-07-12 17:14:09.590207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.987 qpair failed and we were unable to recover it. 00:25:09.987 [2024-07-12 17:14:09.600101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.987 [2024-07-12 17:14:09.600196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.987 [2024-07-12 17:14:09.600221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.987 [2024-07-12 17:14:09.600236] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.987 [2024-07-12 17:14:09.600248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.987 [2024-07-12 17:14:09.600282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.987 qpair failed and we were unable to recover it. 00:25:09.987 [2024-07-12 17:14:09.610057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.987 [2024-07-12 17:14:09.610149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.987 [2024-07-12 17:14:09.610174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.987 [2024-07-12 17:14:09.610188] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.987 [2024-07-12 17:14:09.610200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.987 [2024-07-12 17:14:09.610229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.987 qpair failed and we were unable to recover it. 
00:25:09.987 [2024-07-12 17:14:09.620073] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.987 [2024-07-12 17:14:09.620161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.987 [2024-07-12 17:14:09.620184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.987 [2024-07-12 17:14:09.620199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.987 [2024-07-12 17:14:09.620211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.987 [2024-07-12 17:14:09.620241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.987 qpair failed and we were unable to recover it. 00:25:09.987 [2024-07-12 17:14:09.630090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.987 [2024-07-12 17:14:09.630184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.987 [2024-07-12 17:14:09.630208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.987 [2024-07-12 17:14:09.630222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.987 [2024-07-12 17:14:09.630235] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.987 [2024-07-12 17:14:09.630264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.987 qpair failed and we were unable to recover it. 00:25:09.987 [2024-07-12 17:14:09.640127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.987 [2024-07-12 17:14:09.640228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.987 [2024-07-12 17:14:09.640253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.987 [2024-07-12 17:14:09.640267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.987 [2024-07-12 17:14:09.640280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.987 [2024-07-12 17:14:09.640309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.987 qpair failed and we were unable to recover it. 
00:25:09.987 [2024-07-12 17:14:09.650142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.987 [2024-07-12 17:14:09.650246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.987 [2024-07-12 17:14:09.650276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.987 [2024-07-12 17:14:09.650292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.987 [2024-07-12 17:14:09.650304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.987 [2024-07-12 17:14:09.650334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.987 qpair failed and we were unable to recover it. 00:25:09.987 [2024-07-12 17:14:09.660189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.987 [2024-07-12 17:14:09.660285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.987 [2024-07-12 17:14:09.660309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.987 [2024-07-12 17:14:09.660325] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.987 [2024-07-12 17:14:09.660338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.987 [2024-07-12 17:14:09.660368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.987 qpair failed and we were unable to recover it. 00:25:09.987 [2024-07-12 17:14:09.670277] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.987 [2024-07-12 17:14:09.670366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.987 [2024-07-12 17:14:09.670391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.987 [2024-07-12 17:14:09.670405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.987 [2024-07-12 17:14:09.670417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:09.987 [2024-07-12 17:14:09.670447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:09.987 qpair failed and we were unable to recover it. 
00:25:10.246 [2024-07-12 17:14:09.680318] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.246 [2024-07-12 17:14:09.680416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.246 [2024-07-12 17:14:09.680441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.246 [2024-07-12 17:14:09.680456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.246 [2024-07-12 17:14:09.680468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.246 [2024-07-12 17:14:09.680498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.246 qpair failed and we were unable to recover it. 00:25:10.246 [2024-07-12 17:14:09.690283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.246 [2024-07-12 17:14:09.690400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.246 [2024-07-12 17:14:09.690425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.246 [2024-07-12 17:14:09.690440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.246 [2024-07-12 17:14:09.690457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.246 [2024-07-12 17:14:09.690488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.246 qpair failed and we were unable to recover it. 00:25:10.246 [2024-07-12 17:14:09.700302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.246 [2024-07-12 17:14:09.700389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.246 [2024-07-12 17:14:09.700414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.246 [2024-07-12 17:14:09.700428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.246 [2024-07-12 17:14:09.700441] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.246 [2024-07-12 17:14:09.700471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.246 qpair failed and we were unable to recover it. 
00:25:10.246 [2024-07-12 17:14:09.710338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.246 [2024-07-12 17:14:09.710430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.246 [2024-07-12 17:14:09.710455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.246 [2024-07-12 17:14:09.710470] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.246 [2024-07-12 17:14:09.710482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.246 [2024-07-12 17:14:09.710512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.246 qpair failed and we were unable to recover it. 00:25:10.246 [2024-07-12 17:14:09.720404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.246 [2024-07-12 17:14:09.720499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.246 [2024-07-12 17:14:09.720523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.246 [2024-07-12 17:14:09.720538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.246 [2024-07-12 17:14:09.720551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.246 [2024-07-12 17:14:09.720580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.246 qpair failed and we were unable to recover it. 00:25:10.246 [2024-07-12 17:14:09.730411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.246 [2024-07-12 17:14:09.730534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.246 [2024-07-12 17:14:09.730558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.246 [2024-07-12 17:14:09.730572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.246 [2024-07-12 17:14:09.730584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.246 [2024-07-12 17:14:09.730614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.246 qpair failed and we were unable to recover it. 
00:25:10.246 [2024-07-12 17:14:09.740476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.246 [2024-07-12 17:14:09.740618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.246 [2024-07-12 17:14:09.740643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.246 [2024-07-12 17:14:09.740657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.246 [2024-07-12 17:14:09.740670] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.246 [2024-07-12 17:14:09.740700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.246 qpair failed and we were unable to recover it. 00:25:10.246 [2024-07-12 17:14:09.750473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.246 [2024-07-12 17:14:09.750561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.246 [2024-07-12 17:14:09.750585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.247 [2024-07-12 17:14:09.750600] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.247 [2024-07-12 17:14:09.750613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.247 [2024-07-12 17:14:09.750642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.247 qpair failed and we were unable to recover it. 00:25:10.247 [2024-07-12 17:14:09.760555] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.247 [2024-07-12 17:14:09.760677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.247 [2024-07-12 17:14:09.760701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.247 [2024-07-12 17:14:09.760716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.247 [2024-07-12 17:14:09.760730] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.247 [2024-07-12 17:14:09.760784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.247 qpair failed and we were unable to recover it. 
00:25:10.247 [2024-07-12 17:14:09.770504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.247 [2024-07-12 17:14:09.770595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.247 [2024-07-12 17:14:09.770619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.247 [2024-07-12 17:14:09.770633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.247 [2024-07-12 17:14:09.770646] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.247 [2024-07-12 17:14:09.770675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.247 qpair failed and we were unable to recover it. 00:25:10.247 [2024-07-12 17:14:09.780498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.247 [2024-07-12 17:14:09.780581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.247 [2024-07-12 17:14:09.780606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.247 [2024-07-12 17:14:09.780620] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.247 [2024-07-12 17:14:09.780640] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.247 [2024-07-12 17:14:09.780671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.247 qpair failed and we were unable to recover it. 00:25:10.247 [2024-07-12 17:14:09.790555] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.247 [2024-07-12 17:14:09.790658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.247 [2024-07-12 17:14:09.790684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.247 [2024-07-12 17:14:09.790699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.247 [2024-07-12 17:14:09.790712] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.247 [2024-07-12 17:14:09.790768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.247 qpair failed and we were unable to recover it. 
00:25:10.247 [2024-07-12 17:14:09.800593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.247 [2024-07-12 17:14:09.800682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.247 [2024-07-12 17:14:09.800706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.247 [2024-07-12 17:14:09.800736] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.247 [2024-07-12 17:14:09.800762] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.247 [2024-07-12 17:14:09.800792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.247 qpair failed and we were unable to recover it. 00:25:10.247 [2024-07-12 17:14:09.810601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.247 [2024-07-12 17:14:09.810689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.247 [2024-07-12 17:14:09.810713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.247 [2024-07-12 17:14:09.810728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.247 [2024-07-12 17:14:09.810763] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.247 [2024-07-12 17:14:09.810796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.247 qpair failed and we were unable to recover it. 00:25:10.247 [2024-07-12 17:14:09.820643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.247 [2024-07-12 17:14:09.820789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.247 [2024-07-12 17:14:09.820815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.247 [2024-07-12 17:14:09.820831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.247 [2024-07-12 17:14:09.820844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.247 [2024-07-12 17:14:09.820875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.247 qpair failed and we were unable to recover it. 
00:25:10.247 [2024-07-12 17:14:09.830634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.247 [2024-07-12 17:14:09.830756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.247 [2024-07-12 17:14:09.830784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.247 [2024-07-12 17:14:09.830799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.247 [2024-07-12 17:14:09.830811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.247 [2024-07-12 17:14:09.830843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.247 qpair failed and we were unable to recover it. 00:25:10.247 [2024-07-12 17:14:09.840682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.247 [2024-07-12 17:14:09.840812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.247 [2024-07-12 17:14:09.840838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.247 [2024-07-12 17:14:09.840853] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.247 [2024-07-12 17:14:09.840866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.247 [2024-07-12 17:14:09.840897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.247 qpair failed and we were unable to recover it. 00:25:10.247 [2024-07-12 17:14:09.850856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.247 [2024-07-12 17:14:09.850950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.247 [2024-07-12 17:14:09.850977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.247 [2024-07-12 17:14:09.850992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.247 [2024-07-12 17:14:09.851004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.247 [2024-07-12 17:14:09.851050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.247 qpair failed and we were unable to recover it. 
00:25:10.247 [2024-07-12 17:14:09.860764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.247 [2024-07-12 17:14:09.860908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.247 [2024-07-12 17:14:09.860935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.247 [2024-07-12 17:14:09.860951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.247 [2024-07-12 17:14:09.860964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.247 [2024-07-12 17:14:09.860995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.247 qpair failed and we were unable to recover it. 00:25:10.247 [2024-07-12 17:14:09.870844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.247 [2024-07-12 17:14:09.870947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.247 [2024-07-12 17:14:09.870974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.247 [2024-07-12 17:14:09.870997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.247 [2024-07-12 17:14:09.871010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.247 [2024-07-12 17:14:09.871056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.247 qpair failed and we were unable to recover it. 00:25:10.247 [2024-07-12 17:14:09.880811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.247 [2024-07-12 17:14:09.880904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.247 [2024-07-12 17:14:09.880930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.247 [2024-07-12 17:14:09.880945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.247 [2024-07-12 17:14:09.880958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.247 [2024-07-12 17:14:09.880988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.247 qpair failed and we were unable to recover it. 
00:25:10.247 [2024-07-12 17:14:09.890820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.247 [2024-07-12 17:14:09.890915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.247 [2024-07-12 17:14:09.890942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.247 [2024-07-12 17:14:09.890957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.247 [2024-07-12 17:14:09.890970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.247 [2024-07-12 17:14:09.891000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.248 qpair failed and we were unable to recover it. 00:25:10.248 [2024-07-12 17:14:09.900862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.248 [2024-07-12 17:14:09.900952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.248 [2024-07-12 17:14:09.900977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.248 [2024-07-12 17:14:09.900992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.248 [2024-07-12 17:14:09.901005] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.248 [2024-07-12 17:14:09.901035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.248 qpair failed and we were unable to recover it. 00:25:10.248 [2024-07-12 17:14:09.910910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.248 [2024-07-12 17:14:09.911001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.248 [2024-07-12 17:14:09.911027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.248 [2024-07-12 17:14:09.911042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.248 [2024-07-12 17:14:09.911055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.248 [2024-07-12 17:14:09.911085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.248 qpair failed and we were unable to recover it. 
00:25:10.248 [2024-07-12 17:14:09.920939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.248 [2024-07-12 17:14:09.921055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.248 [2024-07-12 17:14:09.921081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.248 [2024-07-12 17:14:09.921097] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.248 [2024-07-12 17:14:09.921110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.248 [2024-07-12 17:14:09.921141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.248 qpair failed and we were unable to recover it. 00:25:10.248 [2024-07-12 17:14:09.930928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.248 [2024-07-12 17:14:09.931021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.248 [2024-07-12 17:14:09.931061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.248 [2024-07-12 17:14:09.931076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.248 [2024-07-12 17:14:09.931087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.248 [2024-07-12 17:14:09.931117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.248 qpair failed and we were unable to recover it. 00:25:10.506 [2024-07-12 17:14:09.941029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.506 [2024-07-12 17:14:09.941137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.506 [2024-07-12 17:14:09.941164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.506 [2024-07-12 17:14:09.941179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.506 [2024-07-12 17:14:09.941191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.506 [2024-07-12 17:14:09.941221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.506 qpair failed and we were unable to recover it. 
00:25:10.506 [2024-07-12 17:14:09.950997] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.506 [2024-07-12 17:14:09.951098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.506 [2024-07-12 17:14:09.951122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.507 [2024-07-12 17:14:09.951137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.507 [2024-07-12 17:14:09.951149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.507 [2024-07-12 17:14:09.951179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.507 qpair failed and we were unable to recover it. 00:25:10.507 [2024-07-12 17:14:09.961114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.507 [2024-07-12 17:14:09.961217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.507 [2024-07-12 17:14:09.961246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.507 [2024-07-12 17:14:09.961262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.507 [2024-07-12 17:14:09.961274] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.507 [2024-07-12 17:14:09.961304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.507 qpair failed and we were unable to recover it. 00:25:10.507 [2024-07-12 17:14:09.971100] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.507 [2024-07-12 17:14:09.971233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.507 [2024-07-12 17:14:09.971260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.507 [2024-07-12 17:14:09.971275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.507 [2024-07-12 17:14:09.971287] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.507 [2024-07-12 17:14:09.971317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.507 qpair failed and we were unable to recover it. 
00:25:10.507 [2024-07-12 17:14:09.981118] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.507 [2024-07-12 17:14:09.981215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.507 [2024-07-12 17:14:09.981241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.507 [2024-07-12 17:14:09.981256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.507 [2024-07-12 17:14:09.981268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.507 [2024-07-12 17:14:09.981298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.507 qpair failed and we were unable to recover it. 00:25:10.507 [2024-07-12 17:14:09.991160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.507 [2024-07-12 17:14:09.991245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.507 [2024-07-12 17:14:09.991269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.507 [2024-07-12 17:14:09.991283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.507 [2024-07-12 17:14:09.991295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.507 [2024-07-12 17:14:09.991335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.507 qpair failed and we were unable to recover it. 00:25:10.507 [2024-07-12 17:14:10.001169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.507 [2024-07-12 17:14:10.001299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.507 [2024-07-12 17:14:10.001327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.507 [2024-07-12 17:14:10.001342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.507 [2024-07-12 17:14:10.001355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.507 [2024-07-12 17:14:10.001393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.507 qpair failed and we were unable to recover it. 
00:25:10.507 [2024-07-12 17:14:10.011204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.507 [2024-07-12 17:14:10.011342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.507 [2024-07-12 17:14:10.011371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.507 [2024-07-12 17:14:10.011387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.507 [2024-07-12 17:14:10.011400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.507 [2024-07-12 17:14:10.011433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.507 qpair failed and we were unable to recover it. 00:25:10.507 [2024-07-12 17:14:10.021268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.507 [2024-07-12 17:14:10.021394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.507 [2024-07-12 17:14:10.021420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.507 [2024-07-12 17:14:10.021435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.507 [2024-07-12 17:14:10.021448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.507 [2024-07-12 17:14:10.021479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.507 qpair failed and we were unable to recover it. 00:25:10.507 [2024-07-12 17:14:10.031281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.507 [2024-07-12 17:14:10.031381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.507 [2024-07-12 17:14:10.031409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.507 [2024-07-12 17:14:10.031425] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.507 [2024-07-12 17:14:10.031438] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.507 [2024-07-12 17:14:10.031480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.507 qpair failed and we were unable to recover it. 
00:25:10.507 [2024-07-12 17:14:10.041405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.507 [2024-07-12 17:14:10.041557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.507 [2024-07-12 17:14:10.041583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.507 [2024-07-12 17:14:10.041597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.507 [2024-07-12 17:14:10.041611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.507 [2024-07-12 17:14:10.041643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.507 qpair failed and we were unable to recover it. 00:25:10.507 [2024-07-12 17:14:10.051359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.507 [2024-07-12 17:14:10.051452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.507 [2024-07-12 17:14:10.051486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.507 [2024-07-12 17:14:10.051503] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.507 [2024-07-12 17:14:10.051517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.507 [2024-07-12 17:14:10.051558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.507 qpair failed and we were unable to recover it. 00:25:10.507 [2024-07-12 17:14:10.061299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.507 [2024-07-12 17:14:10.061388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.507 [2024-07-12 17:14:10.061412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.507 [2024-07-12 17:14:10.061427] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.507 [2024-07-12 17:14:10.061439] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.507 [2024-07-12 17:14:10.061469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.507 qpair failed and we were unable to recover it. 
00:25:10.507 [2024-07-12 17:14:10.071367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.507 [2024-07-12 17:14:10.071463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.507 [2024-07-12 17:14:10.071488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.507 [2024-07-12 17:14:10.071503] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.507 [2024-07-12 17:14:10.071514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.507 [2024-07-12 17:14:10.071543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.507 qpair failed and we were unable to recover it. 00:25:10.507 [2024-07-12 17:14:10.081454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.507 [2024-07-12 17:14:10.081578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.507 [2024-07-12 17:14:10.081603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.507 [2024-07-12 17:14:10.081617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.507 [2024-07-12 17:14:10.081629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.507 [2024-07-12 17:14:10.081658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.507 qpair failed and we were unable to recover it. 00:25:10.507 [2024-07-12 17:14:10.091415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.507 [2024-07-12 17:14:10.091507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.507 [2024-07-12 17:14:10.091531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.507 [2024-07-12 17:14:10.091546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.508 [2024-07-12 17:14:10.091563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.508 [2024-07-12 17:14:10.091593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.508 qpair failed and we were unable to recover it. 
00:25:10.508 [2024-07-12 17:14:10.101424] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.508 [2024-07-12 17:14:10.101511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.508 [2024-07-12 17:14:10.101535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.508 [2024-07-12 17:14:10.101549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.508 [2024-07-12 17:14:10.101562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.508 [2024-07-12 17:14:10.101592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.508 qpair failed and we were unable to recover it. 00:25:10.508 [2024-07-12 17:14:10.111454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.508 [2024-07-12 17:14:10.111579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.508 [2024-07-12 17:14:10.111603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.508 [2024-07-12 17:14:10.111617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.508 [2024-07-12 17:14:10.111630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.508 [2024-07-12 17:14:10.111660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.508 qpair failed and we were unable to recover it. 00:25:10.508 [2024-07-12 17:14:10.121552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.508 [2024-07-12 17:14:10.121648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.508 [2024-07-12 17:14:10.121672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.508 [2024-07-12 17:14:10.121687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.508 [2024-07-12 17:14:10.121700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.508 [2024-07-12 17:14:10.121751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.508 qpair failed and we were unable to recover it. 
00:25:10.508 [2024-07-12 17:14:10.131513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.508 [2024-07-12 17:14:10.131613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.508 [2024-07-12 17:14:10.131636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.508 [2024-07-12 17:14:10.131651] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.508 [2024-07-12 17:14:10.131663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.508 [2024-07-12 17:14:10.131692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.508 qpair failed and we were unable to recover it. 00:25:10.508 [2024-07-12 17:14:10.141495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.508 [2024-07-12 17:14:10.141587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.508 [2024-07-12 17:14:10.141612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.508 [2024-07-12 17:14:10.141627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.508 [2024-07-12 17:14:10.141638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.508 [2024-07-12 17:14:10.141668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.508 qpair failed and we were unable to recover it. 00:25:10.508 [2024-07-12 17:14:10.151542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.508 [2024-07-12 17:14:10.151651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.508 [2024-07-12 17:14:10.151676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.508 [2024-07-12 17:14:10.151691] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.508 [2024-07-12 17:14:10.151703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.508 [2024-07-12 17:14:10.151760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.508 qpair failed and we were unable to recover it. 
00:25:10.508 [2024-07-12 17:14:10.161610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.508 [2024-07-12 17:14:10.161704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.508 [2024-07-12 17:14:10.161757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.508 [2024-07-12 17:14:10.161775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.508 [2024-07-12 17:14:10.161787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.508 [2024-07-12 17:14:10.161818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.508 qpair failed and we were unable to recover it. 00:25:10.508 [2024-07-12 17:14:10.171642] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.508 [2024-07-12 17:14:10.171764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.508 [2024-07-12 17:14:10.171790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.508 [2024-07-12 17:14:10.171806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.508 [2024-07-12 17:14:10.171819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.508 [2024-07-12 17:14:10.171849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.508 qpair failed and we were unable to recover it. 00:25:10.508 [2024-07-12 17:14:10.181678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.508 [2024-07-12 17:14:10.181810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.508 [2024-07-12 17:14:10.181844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.508 [2024-07-12 17:14:10.181860] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.508 [2024-07-12 17:14:10.181878] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.508 [2024-07-12 17:14:10.181910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.508 qpair failed and we were unable to recover it. 
00:25:10.508 [2024-07-12 17:14:10.191761] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.508 [2024-07-12 17:14:10.191852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.508 [2024-07-12 17:14:10.191877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.508 [2024-07-12 17:14:10.191898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.508 [2024-07-12 17:14:10.191911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.508 [2024-07-12 17:14:10.191941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.508 qpair failed and we were unable to recover it. 00:25:10.767 [2024-07-12 17:14:10.201798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.767 [2024-07-12 17:14:10.201905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.767 [2024-07-12 17:14:10.201930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.767 [2024-07-12 17:14:10.201955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.767 [2024-07-12 17:14:10.201967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.767 [2024-07-12 17:14:10.201998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.767 qpair failed and we were unable to recover it. 00:25:10.767 [2024-07-12 17:14:10.211813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.767 [2024-07-12 17:14:10.211903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.767 [2024-07-12 17:14:10.211929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.767 [2024-07-12 17:14:10.211944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.767 [2024-07-12 17:14:10.211957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.767 [2024-07-12 17:14:10.211987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.767 qpair failed and we were unable to recover it. 
00:25:10.767 [2024-07-12 17:14:10.221797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.767 [2024-07-12 17:14:10.221893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.767 [2024-07-12 17:14:10.221917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.767 [2024-07-12 17:14:10.221932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.767 [2024-07-12 17:14:10.221945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.767 [2024-07-12 17:14:10.221975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.767 qpair failed and we were unable to recover it. 00:25:10.767 [2024-07-12 17:14:10.231810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.767 [2024-07-12 17:14:10.231941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.767 [2024-07-12 17:14:10.231967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.767 [2024-07-12 17:14:10.231982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.767 [2024-07-12 17:14:10.231995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.767 [2024-07-12 17:14:10.232040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.767 qpair failed and we were unable to recover it. 00:25:10.767 [2024-07-12 17:14:10.241829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.767 [2024-07-12 17:14:10.241922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.767 [2024-07-12 17:14:10.241949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.767 [2024-07-12 17:14:10.241965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.767 [2024-07-12 17:14:10.241978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.767 [2024-07-12 17:14:10.242008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.767 qpair failed and we were unable to recover it. 
00:25:10.767 [2024-07-12 17:14:10.251902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.767 [2024-07-12 17:14:10.251998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.767 [2024-07-12 17:14:10.252050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.767 [2024-07-12 17:14:10.252065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.767 [2024-07-12 17:14:10.252078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.767 [2024-07-12 17:14:10.252107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.767 qpair failed and we were unable to recover it. 00:25:10.767 [2024-07-12 17:14:10.261889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.767 [2024-07-12 17:14:10.261979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.767 [2024-07-12 17:14:10.262005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.767 [2024-07-12 17:14:10.262020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.767 [2024-07-12 17:14:10.262048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.767 [2024-07-12 17:14:10.262078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.767 qpair failed and we were unable to recover it. 00:25:10.767 [2024-07-12 17:14:10.271909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.767 [2024-07-12 17:14:10.271998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.767 [2024-07-12 17:14:10.272039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.767 [2024-07-12 17:14:10.272060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.767 [2024-07-12 17:14:10.272074] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.767 [2024-07-12 17:14:10.272103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.767 qpair failed and we were unable to recover it. 
00:25:10.767 [2024-07-12 17:14:10.281948] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.767 [2024-07-12 17:14:10.282040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.767 [2024-07-12 17:14:10.282082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.767 [2024-07-12 17:14:10.282098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.767 [2024-07-12 17:14:10.282110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.767 [2024-07-12 17:14:10.282139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.767 qpair failed and we were unable to recover it. 00:25:10.767 [2024-07-12 17:14:10.291971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.767 [2024-07-12 17:14:10.292080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.767 [2024-07-12 17:14:10.292107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.767 [2024-07-12 17:14:10.292122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.767 [2024-07-12 17:14:10.292134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.767 [2024-07-12 17:14:10.292163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.767 qpair failed and we were unable to recover it. 00:25:10.767 [2024-07-12 17:14:10.301989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.767 [2024-07-12 17:14:10.302097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.767 [2024-07-12 17:14:10.302123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.767 [2024-07-12 17:14:10.302138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.767 [2024-07-12 17:14:10.302150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.767 [2024-07-12 17:14:10.302179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.767 qpair failed and we were unable to recover it. 
00:25:10.767 [2024-07-12 17:14:10.312047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.767 [2024-07-12 17:14:10.312138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.767 [2024-07-12 17:14:10.312165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.767 [2024-07-12 17:14:10.312179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.767 [2024-07-12 17:14:10.312192] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.767 [2024-07-12 17:14:10.312221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.767 qpair failed and we were unable to recover it. 00:25:10.767 [2024-07-12 17:14:10.322077] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.767 [2024-07-12 17:14:10.322169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.767 [2024-07-12 17:14:10.322195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.767 [2024-07-12 17:14:10.322209] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.767 [2024-07-12 17:14:10.322221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.767 [2024-07-12 17:14:10.322251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.767 qpair failed and we were unable to recover it. 00:25:10.767 [2024-07-12 17:14:10.332086] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.767 [2024-07-12 17:14:10.332177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.767 [2024-07-12 17:14:10.332203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.767 [2024-07-12 17:14:10.332217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.767 [2024-07-12 17:14:10.332228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.767 [2024-07-12 17:14:10.332258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.767 qpair failed and we were unable to recover it. 
00:25:10.768 [2024-07-12 17:14:10.342114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.768 [2024-07-12 17:14:10.342205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.768 [2024-07-12 17:14:10.342231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.768 [2024-07-12 17:14:10.342245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.768 [2024-07-12 17:14:10.342258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.768 [2024-07-12 17:14:10.342287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.768 qpair failed and we were unable to recover it. 00:25:10.768 [2024-07-12 17:14:10.352126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.768 [2024-07-12 17:14:10.352213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.768 [2024-07-12 17:14:10.352238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.768 [2024-07-12 17:14:10.352254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.768 [2024-07-12 17:14:10.352265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.768 [2024-07-12 17:14:10.352294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.768 qpair failed and we were unable to recover it. 00:25:10.768 [2024-07-12 17:14:10.362184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.768 [2024-07-12 17:14:10.362273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.768 [2024-07-12 17:14:10.362303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.768 [2024-07-12 17:14:10.362319] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.768 [2024-07-12 17:14:10.362332] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.768 [2024-07-12 17:14:10.362361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.768 qpair failed and we were unable to recover it. 
00:25:10.768 [2024-07-12 17:14:10.372205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.768 [2024-07-12 17:14:10.372298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.768 [2024-07-12 17:14:10.372324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.768 [2024-07-12 17:14:10.372338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.768 [2024-07-12 17:14:10.372351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.768 [2024-07-12 17:14:10.372380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.768 qpair failed and we were unable to recover it. 00:25:10.768 [2024-07-12 17:14:10.382264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.768 [2024-07-12 17:14:10.382369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.768 [2024-07-12 17:14:10.382395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.768 [2024-07-12 17:14:10.382410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.768 [2024-07-12 17:14:10.382422] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.768 [2024-07-12 17:14:10.382451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.768 qpair failed and we were unable to recover it. 00:25:10.768 [2024-07-12 17:14:10.392309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.768 [2024-07-12 17:14:10.392400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.768 [2024-07-12 17:14:10.392426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.768 [2024-07-12 17:14:10.392441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.768 [2024-07-12 17:14:10.392453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.768 [2024-07-12 17:14:10.392482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.768 qpair failed and we were unable to recover it. 
00:25:10.768 [2024-07-12 17:14:10.402353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.768 [2024-07-12 17:14:10.402451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.768 [2024-07-12 17:14:10.402475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.768 [2024-07-12 17:14:10.402490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.768 [2024-07-12 17:14:10.402509] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.768 [2024-07-12 17:14:10.402543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.768 qpair failed and we were unable to recover it. 00:25:10.768 [2024-07-12 17:14:10.412348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.768 [2024-07-12 17:14:10.412438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.768 [2024-07-12 17:14:10.412463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.768 [2024-07-12 17:14:10.412479] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.768 [2024-07-12 17:14:10.412491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.768 [2024-07-12 17:14:10.412520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.768 qpair failed and we were unable to recover it. 00:25:10.768 [2024-07-12 17:14:10.422373] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.768 [2024-07-12 17:14:10.422480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.768 [2024-07-12 17:14:10.422505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.768 [2024-07-12 17:14:10.422520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.768 [2024-07-12 17:14:10.422532] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab6c000b90 00:25:10.768 [2024-07-12 17:14:10.422561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:10.768 qpair failed and we were unable to recover it. 
00:25:10.768 [2024-07-12 17:14:10.432412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.768 [2024-07-12 17:14:10.432496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.768 [2024-07-12 17:14:10.432527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.768 [2024-07-12 17:14:10.432543] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.768 [2024-07-12 17:14:10.432555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab7c000b90 00:25:10.768 [2024-07-12 17:14:10.432585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:10.768 qpair failed and we were unable to recover it. 00:25:10.768 [2024-07-12 17:14:10.442424] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.768 [2024-07-12 17:14:10.442520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.768 [2024-07-12 17:14:10.442547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.768 [2024-07-12 17:14:10.442562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.768 [2024-07-12 17:14:10.442575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab7c000b90 00:25:10.768 [2024-07-12 17:14:10.442604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:10.768 qpair failed and we were unable to recover it. 00:25:10.768 [2024-07-12 17:14:10.452474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.768 [2024-07-12 17:14:10.452564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.768 [2024-07-12 17:14:10.452600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.768 [2024-07-12 17:14:10.452618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.768 [2024-07-12 17:14:10.452630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b68ea0 00:25:10.768 [2024-07-12 17:14:10.452660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:10.768 qpair failed and we were unable to recover it. 
00:25:11.027 [2024-07-12 17:14:10.462469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.027 [2024-07-12 17:14:10.462552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.027 [2024-07-12 17:14:10.462581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.027 [2024-07-12 17:14:10.462596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.027 [2024-07-12 17:14:10.462609] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b68ea0 00:25:11.027 [2024-07-12 17:14:10.462637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.027 qpair failed and we were unable to recover it. 00:25:11.027 [2024-07-12 17:14:10.462783] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:25:11.027 A controller has encountered a failure and is being reset. 00:25:11.027 [2024-07-12 17:14:10.472535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.027 [2024-07-12 17:14:10.472630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.027 [2024-07-12 17:14:10.472663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.027 [2024-07-12 17:14:10.472681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.027 [2024-07-12 17:14:10.472694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:11.027 [2024-07-12 17:14:10.472734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:11.027 qpair failed and we were unable to recover it. 00:25:11.027 [2024-07-12 17:14:10.482505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.027 [2024-07-12 17:14:10.482641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.027 [2024-07-12 17:14:10.482669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.027 [2024-07-12 17:14:10.482685] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.027 [2024-07-12 17:14:10.482698] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab74000b90 00:25:11.027 [2024-07-12 17:14:10.482728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:11.027 qpair failed and we were unable to recover it. 00:25:11.027 Controller properly reset. 
00:25:11.027 Initializing NVMe Controllers 00:25:11.027 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:11.027 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:11.027 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:25:11.027 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:25:11.027 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:25:11.027 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:25:11.027 Initialization complete. Launching workers. 00:25:11.027 Starting thread on core 1 00:25:11.027 Starting thread on core 2 00:25:11.027 Starting thread on core 3 00:25:11.027 Starting thread on core 0 00:25:11.027 17:14:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:25:11.027 00:25:11.027 real 0m11.542s 00:25:11.027 user 0m21.636s 00:25:11.027 sys 0m5.553s 00:25:11.027 17:14:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:11.027 17:14:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:11.027 ************************************ 00:25:11.027 END TEST nvmf_target_disconnect_tc2 00:25:11.027 ************************************ 00:25:11.027 17:14:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:25:11.027 17:14:10 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:25:11.027 17:14:10 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:25:11.027 17:14:10 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:25:11.027 17:14:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:11.027 17:14:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:25:11.027 17:14:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:11.027 17:14:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:25:11.027 17:14:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:11.027 17:14:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:11.027 rmmod nvme_tcp 00:25:11.027 rmmod nvme_fabrics 00:25:11.027 rmmod nvme_keyring 00:25:11.027 17:14:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:11.027 17:14:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:25:11.027 17:14:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:25:11.027 17:14:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1229636 ']' 00:25:11.027 17:14:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1229636 00:25:11.027 17:14:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 1229636 ']' 00:25:11.027 17:14:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 1229636 00:25:11.027 17:14:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:25:11.027 17:14:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:11.027 17:14:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps 
--no-headers -o comm= 1229636 00:25:11.027 17:14:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:25:11.027 17:14:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:25:11.027 17:14:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1229636' 00:25:11.027 killing process with pid 1229636 00:25:11.027 17:14:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 1229636 00:25:11.027 17:14:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 1229636 00:25:11.284 17:14:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:11.284 17:14:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:11.284 17:14:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:11.284 17:14:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:11.284 17:14:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:11.284 17:14:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.284 17:14:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:11.284 17:14:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:13.814 17:14:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:13.814 00:25:13.814 real 0m16.390s 00:25:13.814 user 0m48.085s 00:25:13.814 sys 0m7.481s 00:25:13.814 17:14:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:13.814 17:14:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:13.814 ************************************ 00:25:13.814 END TEST nvmf_target_disconnect 00:25:13.814 ************************************ 00:25:13.814 17:14:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:13.814 17:14:12 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:25:13.814 17:14:12 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:13.814 17:14:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:13.814 17:14:12 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:25:13.814 00:25:13.814 real 19m17.682s 00:25:13.814 user 45m24.956s 00:25:13.814 sys 5m4.488s 00:25:13.814 17:14:12 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:13.814 17:14:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:13.814 ************************************ 00:25:13.814 END TEST nvmf_tcp 00:25:13.814 ************************************ 00:25:13.814 17:14:13 -- common/autotest_common.sh@1142 -- # return 0 00:25:13.814 17:14:13 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:25:13.814 17:14:13 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:13.814 17:14:13 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:13.814 17:14:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:13.814 17:14:13 -- common/autotest_common.sh@10 -- # set +x 00:25:13.814 ************************************ 00:25:13.814 START TEST spdkcli_nvmf_tcp 00:25:13.814 ************************************ 00:25:13.814 17:14:13 
spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:13.814 * Looking for test storage... 00:25:13.814 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:25:13.814 17:14:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:25:13.814 17:14:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:25:13.814 17:14:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:25:13.814 17:14:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:13.814 17:14:13 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:25:13.814 17:14:13 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:13.814 17:14:13 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:13.814 17:14:13 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:13.814 17:14:13 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:13.814 17:14:13 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:13.814 17:14:13 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:13.814 17:14:13 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:13.814 17:14:13 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:13.814 17:14:13 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:13.814 17:14:13 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:13.814 17:14:13 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:13.814 17:14:13 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:25:13.814 17:14:13 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:13.814 17:14:13 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:13.814 17:14:13 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:13.814 17:14:13 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:13.814 17:14:13 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:13.814 17:14:13 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:13.814 17:14:13 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:13.814 17:14:13 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:13.814 17:14:13 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.814 17:14:13 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.814 17:14:13 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.814 17:14:13 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:25:13.814 17:14:13 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.814 17:14:13 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:25:13.814 17:14:13 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:13.814 17:14:13 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:13.814 17:14:13 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:13.814 17:14:13 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:13.814 17:14:13 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:13.814 17:14:13 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:13.814 17:14:13 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:13.814 17:14:13 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:13.814 17:14:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:25:13.814 17:14:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:25:13.814 17:14:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:25:13.815 17:14:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:25:13.815 17:14:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:13.815 17:14:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:13.815 17:14:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:25:13.815 17:14:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1230820 00:25:13.815 17:14:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:25:13.815 17:14:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1230820 00:25:13.815 17:14:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 1230820 ']' 00:25:13.815 17:14:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:13.815 17:14:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:13.815 17:14:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:13.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:13.815 17:14:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:13.815 17:14:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:13.815 [2024-07-12 17:14:13.167566] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:25:13.815 [2024-07-12 17:14:13.167651] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1230820 ] 00:25:13.815 EAL: No free 2048 kB hugepages reported on node 1 00:25:13.815 [2024-07-12 17:14:13.225973] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:13.815 [2024-07-12 17:14:13.336922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:13.815 [2024-07-12 17:14:13.336927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:13.815 17:14:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:13.815 17:14:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:25:13.815 17:14:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:25:13.815 17:14:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:13.815 17:14:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:13.815 17:14:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:25:13.815 17:14:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:25:13.815 17:14:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:25:13.815 17:14:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:13.815 17:14:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:13.815 17:14:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:25:13.815 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:25:13.815 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:25:13.815 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:25:13.815 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:25:13.815 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:25:13.815 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:25:13.815 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:13.815 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:25:13.815 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:25:13.815 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:13.815 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:13.815 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:25:13.815 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:13.815 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:13.815 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:25:13.815 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:13.815 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:13.815 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:13.815 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:13.815 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:25:13.815 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:25:13.815 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:13.815 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:25:13.815 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:13.815 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:25:13.815 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:25:13.815 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:25:13.815 ' 00:25:16.369 [2024-07-12 17:14:16.032886] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:17.739 [2024-07-12 17:14:17.265174] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:25:20.265 [2024-07-12 17:14:19.524110] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:25:22.162 [2024-07-12 17:14:21.470282] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:25:23.531 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:25:23.531 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:25:23.531 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:25:23.531 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:25:23.531 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:25:23.531 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:25:23.532 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:25:23.532 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:23.532 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:25:23.532 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:25:23.532 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:23.532 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:23.532 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:25:23.532 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:23.532 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:23.532 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:25:23.532 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:23.532 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:23.532 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:23.532 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:23.532 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:25:23.532 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:25:23.532 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:23.532 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:25:23.532 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:23.532 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:25:23.532 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:25:23.532 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:25:23.532 17:14:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:25:23.532 17:14:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:23.532 17:14:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:23.532 17:14:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:25:23.532 17:14:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:23.532 17:14:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:23.532 17:14:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:25:23.532 17:14:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:25:23.788 17:14:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:25:24.045 17:14:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:25:24.045 17:14:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:25:24.045 17:14:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:24.045 17:14:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:24.045 17:14:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:25:24.045 17:14:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:24.045 17:14:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:24.045 17:14:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:25:24.045 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:25:24.045 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:24.045 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:25:24.045 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:25:24.045 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:25:24.045 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:25:24.045 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:24.045 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:25:24.045 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:25:24.045 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:25:24.045 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:25:24.045 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:25:24.045 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:25:24.045 ' 00:25:29.302 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:25:29.302 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:25:29.302 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:29.302 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:25:29.302 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:25:29.302 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:25:29.302 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:25:29.302 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:29.302 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:25:29.302 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:25:29.302 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:25:29.302 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:25:29.302 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:25:29.302 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:25:29.302 17:14:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:25:29.302 17:14:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:29.302 17:14:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:29.302 17:14:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1230820 00:25:29.302 17:14:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1230820 ']' 00:25:29.302 17:14:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1230820 00:25:29.302 17:14:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:25:29.302 17:14:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:29.302 17:14:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1230820 00:25:29.302 17:14:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:29.302 17:14:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:29.302 17:14:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1230820' 00:25:29.302 killing process with pid 1230820 00:25:29.302 17:14:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 1230820 00:25:29.302 17:14:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 1230820 00:25:29.560 17:14:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:25:29.560 17:14:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:25:29.560 17:14:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1230820 ']' 00:25:29.560 17:14:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1230820 00:25:29.560 17:14:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1230820 ']' 00:25:29.560 17:14:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1230820 00:25:29.560 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1230820) - No such process 00:25:29.560 17:14:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 1230820 is not found' 00:25:29.560 Process with pid 1230820 is not found 00:25:29.560 17:14:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:25:29.560 17:14:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:25:29.560 17:14:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:25:29.560 00:25:29.560 real 0m16.007s 00:25:29.560 user 0m33.710s 00:25:29.560 sys 0m0.832s 00:25:29.560 17:14:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:29.560 17:14:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:29.560 ************************************ 00:25:29.560 END TEST spdkcli_nvmf_tcp 00:25:29.560 ************************************ 00:25:29.560 17:14:29 -- common/autotest_common.sh@1142 -- # return 0 00:25:29.560 17:14:29 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:29.560 17:14:29 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:29.560 17:14:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:29.560 17:14:29 -- common/autotest_common.sh@10 -- # set +x 00:25:29.560 ************************************ 00:25:29.560 START TEST nvmf_identify_passthru 00:25:29.560 ************************************ 00:25:29.561 17:14:29 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:29.561 * Looking for test storage... 00:25:29.561 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:29.561 17:14:29 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:29.561 17:14:29 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:25:29.561 17:14:29 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:29.561 17:14:29 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:29.561 17:14:29 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:29.561 17:14:29 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:29.561 17:14:29 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:29.561 17:14:29 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:29.561 17:14:29 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:29.561 17:14:29 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:29.561 17:14:29 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:29.561 17:14:29 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:29.561 17:14:29 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:29.561 17:14:29 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:25:29.561 17:14:29 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:29.561 17:14:29 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:29.561 17:14:29 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:29.561 17:14:29 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:29.561 17:14:29 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:29.561 17:14:29 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:29.561 17:14:29 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:29.561 17:14:29 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:29.561 17:14:29 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.561 17:14:29 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.561 17:14:29 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.561 17:14:29 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:25:29.561 17:14:29 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.561 17:14:29 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:25:29.561 17:14:29 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:29.561 17:14:29 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:29.561 17:14:29 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:29.561 17:14:29 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:29.561 17:14:29 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:29.561 17:14:29 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:29.561 17:14:29 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:29.561 17:14:29 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:29.561 17:14:29 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:29.561 17:14:29 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:29.561 17:14:29 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:29.561 17:14:29 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:29.561 17:14:29 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.561 17:14:29 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.561 17:14:29 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.561 17:14:29 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:25:29.561 17:14:29 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.561 17:14:29 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:25:29.561 17:14:29 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:29.561 17:14:29 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:29.561 17:14:29 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:29.561 17:14:29 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:29.561 17:14:29 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:29.561 17:14:29 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:29.561 17:14:29 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:29.561 17:14:29 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:29.561 17:14:29 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:29.561 17:14:29 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:29.561 17:14:29 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:25:29.561 17:14:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:32.090 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:32.090 17:14:31 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:25:32.090 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:32.090 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:32.090 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:32.090 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:32.090 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:32.090 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:25:32.090 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:32.090 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:25:32.090 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:25:32.090 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:25:32.090 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:25:32.090 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:25:32.090 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:25:32.090 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:32.090 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:32.090 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:32.090 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:32.090 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:32.090 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:32.090 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:32.090 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:32.090 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:32.090 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:32.090 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:32.090 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:32.090 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:32.090 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:32.090 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:32.090 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:32.090 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:32.090 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:32.090 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:25:32.090 Found 0000:84:00.0 (0x8086 - 0x159b) 00:25:32.090 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:32.090 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:32.090 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:25:32.090 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:32.090 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:32.090 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:32.090 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:25:32.090 Found 0000:84:00.1 (0x8086 - 0x159b) 00:25:32.090 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:32.090 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:32.090 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:32.090 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:32.090 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:32.090 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:32.090 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:32.090 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:32.090 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:32.090 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:32.090 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:32.091 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:32.091 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:32.091 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:32.091 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:32.091 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:25:32.091 Found net devices under 0000:84:00.0: cvl_0_0 00:25:32.091 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:32.091 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:32.091 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:32.091 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:32.091 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:32.091 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:32.091 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:32.091 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:32.091 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:25:32.091 Found net devices under 0000:84:00.1: cvl_0_1 00:25:32.091 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:32.091 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:32.091 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:25:32.091 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:32.091 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
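The nvmf_tcp_init trace that follows builds the loopback test topology: one E810 port (cvl_0_0) is moved into a private network namespace to act as the target side at 10.0.0.2, the other port (cvl_0_1) stays in the default namespace as the initiator side at 10.0.0.1, TCP port 4420 is admitted through iptables, and reachability is verified with ping in both directions. Condensed into plain commands, with interface names and addresses exactly as they appear in the trace, the setup amounts to roughly:

# Condensed restatement of the nvmf_tcp_init steps traced below; in the test these
# are executed by nvmf/common.sh, so this is only a readable summary of the same commands.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side (default netns)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                             # initiator -> target check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator check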
00:25:32.091 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:32.091 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:32.091 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:32.091 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:32.091 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:32.091 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:32.091 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:32.091 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:32.091 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:32.091 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:32.091 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:32.091 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:32.091 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:32.091 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:32.091 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:32.091 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:32.091 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:32.091 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:32.091 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:32.091 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:32.091 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:32.091 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:32.091 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:25:32.091 00:25:32.091 --- 10.0.0.2 ping statistics --- 00:25:32.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.091 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:25:32.091 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:32.091 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:32.091 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:25:32.091 00:25:32.091 --- 10.0.0.1 ping statistics --- 00:25:32.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.091 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:25:32.091 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:32.091 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:25:32.091 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:32.091 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:32.091 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:32.091 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:32.091 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:32.091 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:32.091 17:14:31 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:32.091 17:14:31 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:25:32.091 17:14:31 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:32.091 17:14:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:32.091 17:14:31 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:25:32.091 17:14:31 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:25:32.091 17:14:31 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:25:32.091 17:14:31 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:25:32.091 17:14:31 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:25:32.091 17:14:31 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:25:32.091 17:14:31 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:25:32.091 17:14:31 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:32.091 17:14:31 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:32.091 17:14:31 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:25:32.091 17:14:31 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:25:32.091 17:14:31 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:82:00.0 00:25:32.091 17:14:31 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:82:00.0 00:25:32.091 17:14:31 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:82:00.0 00:25:32.091 17:14:31 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:82:00.0 ']' 00:25:32.091 17:14:31 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' -i 0 00:25:32.091 17:14:31 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:25:32.091 17:14:31 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:25:32.091 EAL: No free 2048 kB hugepages reported on node 1 00:25:36.269 
17:14:35 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ9142051K1P0FGN 00:25:36.270 17:14:35 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' -i 0 00:25:36.270 17:14:35 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:25:36.270 17:14:35 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:25:36.270 EAL: No free 2048 kB hugepages reported on node 1 00:25:40.450 17:14:39 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:25:40.450 17:14:39 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:25:40.450 17:14:39 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:40.450 17:14:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:40.450 17:14:39 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:25:40.450 17:14:39 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:40.450 17:14:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:40.450 17:14:39 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1235460 00:25:40.450 17:14:39 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:40.450 17:14:39 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:40.450 17:14:39 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1235460 00:25:40.450 17:14:39 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 1235460 ']' 00:25:40.450 17:14:39 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:40.450 17:14:39 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:40.450 17:14:39 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:40.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:40.450 17:14:39 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:40.450 17:14:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:40.450 [2024-07-12 17:14:39.837331] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:25:40.450 [2024-07-12 17:14:39.837423] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:40.450 EAL: No free 2048 kB hugepages reported on node 1 00:25:40.450 [2024-07-12 17:14:39.901707] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:40.450 [2024-07-12 17:14:40.009639] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:40.450 [2024-07-12 17:14:40.009694] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
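Note: the nvmf_tcp_init sequence traced above can be reproduced by hand with the same iproute2/iptables commands the script logs. A condensed sketch follows; cvl_0_0 and cvl_0_1 are the two E810 port netdevs as named on this rig, and the 10.0.0.0/24 addressing matches the log.

# Isolate the target-side port in its own network namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# Address both ends: the initiator stays in the root namespace, the target lives in the netns
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the default NVMe/TCP port and confirm reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
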
00:25:40.450 [2024-07-12 17:14:40.009733] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:40.450 [2024-07-12 17:14:40.009754] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:40.450 [2024-07-12 17:14:40.009776] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:40.450 [2024-07-12 17:14:40.009829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:40.450 [2024-07-12 17:14:40.009879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:40.450 [2024-07-12 17:14:40.009928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:40.450 [2024-07-12 17:14:40.009931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.450 17:14:40 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:40.450 17:14:40 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:25:40.450 17:14:40 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:25:40.450 17:14:40 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.450 17:14:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:40.450 INFO: Log level set to 20 00:25:40.450 INFO: Requests: 00:25:40.450 { 00:25:40.450 "jsonrpc": "2.0", 00:25:40.450 "method": "nvmf_set_config", 00:25:40.450 "id": 1, 00:25:40.450 "params": { 00:25:40.450 "admin_cmd_passthru": { 00:25:40.450 "identify_ctrlr": true 00:25:40.450 } 00:25:40.450 } 00:25:40.450 } 00:25:40.450 00:25:40.450 INFO: response: 00:25:40.450 { 00:25:40.450 "jsonrpc": "2.0", 00:25:40.450 "id": 1, 00:25:40.450 "result": true 00:25:40.450 } 00:25:40.450 00:25:40.450 17:14:40 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.450 17:14:40 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:25:40.450 17:14:40 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.450 17:14:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:40.450 INFO: Setting log level to 20 00:25:40.450 INFO: Setting log level to 20 00:25:40.450 INFO: Log level set to 20 00:25:40.450 INFO: Log level set to 20 00:25:40.450 INFO: Requests: 00:25:40.450 { 00:25:40.450 "jsonrpc": "2.0", 00:25:40.450 "method": "framework_start_init", 00:25:40.450 "id": 1 00:25:40.450 } 00:25:40.450 00:25:40.450 INFO: Requests: 00:25:40.450 { 00:25:40.450 "jsonrpc": "2.0", 00:25:40.450 "method": "framework_start_init", 00:25:40.450 "id": 1 00:25:40.450 } 00:25:40.450 00:25:40.708 [2024-07-12 17:14:40.160116] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:25:40.708 INFO: response: 00:25:40.708 { 00:25:40.708 "jsonrpc": "2.0", 00:25:40.708 "id": 1, 00:25:40.708 "result": true 00:25:40.708 } 00:25:40.708 00:25:40.708 INFO: response: 00:25:40.708 { 00:25:40.708 "jsonrpc": "2.0", 00:25:40.708 "id": 1, 00:25:40.708 "result": true 00:25:40.708 } 00:25:40.708 00:25:40.708 17:14:40 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.708 17:14:40 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:40.708 17:14:40 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.708 17:14:40 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:25:40.708 INFO: Setting log level to 40 00:25:40.708 INFO: Setting log level to 40 00:25:40.708 INFO: Setting log level to 40 00:25:40.708 [2024-07-12 17:14:40.170242] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:40.708 17:14:40 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.708 17:14:40 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:25:40.708 17:14:40 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:40.708 17:14:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:40.708 17:14:40 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:82:00.0 00:25:40.708 17:14:40 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.708 17:14:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:43.982 Nvme0n1 00:25:43.982 17:14:43 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.982 17:14:43 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:25:43.982 17:14:43 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.982 17:14:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:43.982 17:14:43 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.982 17:14:43 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:43.982 17:14:43 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.982 17:14:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:43.982 17:14:43 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.982 17:14:43 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:43.982 17:14:43 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.982 17:14:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:43.982 [2024-07-12 17:14:43.066045] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:43.982 17:14:43 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.982 17:14:43 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:25:43.982 17:14:43 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.982 17:14:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:43.982 [ 00:25:43.982 { 00:25:43.982 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:43.982 "subtype": "Discovery", 00:25:43.982 "listen_addresses": [], 00:25:43.982 "allow_any_host": true, 00:25:43.982 "hosts": [] 00:25:43.982 }, 00:25:43.982 { 00:25:43.982 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:43.982 "subtype": "NVMe", 00:25:43.982 "listen_addresses": [ 00:25:43.982 { 00:25:43.982 "trtype": "TCP", 00:25:43.982 "adrfam": "IPv4", 00:25:43.982 "traddr": "10.0.0.2", 00:25:43.982 "trsvcid": "4420" 00:25:43.982 } 00:25:43.982 ], 00:25:43.982 "allow_any_host": true, 00:25:43.982 "hosts": [], 00:25:43.982 "serial_number": 
"SPDK00000000000001", 00:25:43.982 "model_number": "SPDK bdev Controller", 00:25:43.982 "max_namespaces": 1, 00:25:43.982 "min_cntlid": 1, 00:25:43.982 "max_cntlid": 65519, 00:25:43.982 "namespaces": [ 00:25:43.982 { 00:25:43.982 "nsid": 1, 00:25:43.982 "bdev_name": "Nvme0n1", 00:25:43.982 "name": "Nvme0n1", 00:25:43.982 "nguid": "55C89E403FFC4B3A94DA6038BAF4F65F", 00:25:43.982 "uuid": "55c89e40-3ffc-4b3a-94da-6038baf4f65f" 00:25:43.982 } 00:25:43.982 ] 00:25:43.982 } 00:25:43.982 ] 00:25:43.982 17:14:43 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.982 17:14:43 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:43.982 17:14:43 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:25:43.982 17:14:43 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:25:43.982 EAL: No free 2048 kB hugepages reported on node 1 00:25:43.982 17:14:43 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ9142051K1P0FGN 00:25:43.982 17:14:43 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:43.982 17:14:43 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:25:43.982 17:14:43 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:25:43.983 EAL: No free 2048 kB hugepages reported on node 1 00:25:43.983 17:14:43 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:25:43.983 17:14:43 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ9142051K1P0FGN '!=' BTLJ9142051K1P0FGN ']' 00:25:43.983 17:14:43 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:25:43.983 17:14:43 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:43.983 17:14:43 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.983 17:14:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:43.983 17:14:43 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.983 17:14:43 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:25:43.983 17:14:43 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:25:43.983 17:14:43 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:43.983 17:14:43 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:25:43.983 17:14:43 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:43.983 17:14:43 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:25:43.983 17:14:43 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:43.983 17:14:43 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:43.983 rmmod nvme_tcp 00:25:43.983 rmmod nvme_fabrics 00:25:43.983 rmmod nvme_keyring 00:25:43.983 17:14:43 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:43.983 17:14:43 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:25:43.983 17:14:43 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:25:43.983 17:14:43 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1235460 ']' 00:25:43.983 17:14:43 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1235460 00:25:43.983 17:14:43 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 1235460 ']' 00:25:43.983 17:14:43 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 1235460 00:25:43.983 17:14:43 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:25:43.983 17:14:43 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:43.983 17:14:43 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1235460 00:25:43.983 17:14:43 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:43.983 17:14:43 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:43.983 17:14:43 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1235460' 00:25:43.983 killing process with pid 1235460 00:25:43.983 17:14:43 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 1235460 00:25:43.983 17:14:43 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 1235460 00:25:45.878 17:14:45 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:45.878 17:14:45 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:45.878 17:14:45 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:45.878 17:14:45 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:45.878 17:14:45 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:45.878 17:14:45 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:45.878 17:14:45 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:45.878 17:14:45 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:47.777 17:14:47 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:47.777 00:25:47.777 real 0m18.029s 00:25:47.777 user 0m26.658s 00:25:47.777 sys 0m2.359s 00:25:47.777 17:14:47 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:47.777 17:14:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:47.777 ************************************ 00:25:47.777 END TEST nvmf_identify_passthru 00:25:47.777 ************************************ 00:25:47.777 17:14:47 -- common/autotest_common.sh@1142 -- # return 0 00:25:47.777 17:14:47 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:25:47.777 17:14:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:47.777 17:14:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:47.777 17:14:47 -- common/autotest_common.sh@10 -- # set +x 00:25:47.777 ************************************ 00:25:47.777 START TEST nvmf_dif 00:25:47.777 ************************************ 00:25:47.777 17:14:47 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:25:47.777 * Looking for test storage... 
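Note: the passthru target exercised above is assembled with a short JSON-RPC sequence; rpc_cmd in the log forwards each call to scripts/rpc.py against the target's default /var/tmp/spdk.sock. A minimal sketch of the same sequence, run from the SPDK repository root, would be:

# Enable identify passthrough before subsystem init, then let the framework start
scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr
scripts/rpc.py framework_start_init
# TCP transport, the local NVMe controller, and a single-namespace subsystem exposing it
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:82:00.0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The check itself then runs spdk_nvme_identify twice, once against the PCIe device and once over the fabric listener, and compares the Serial Number and Model Number fields; with --passthru-identify-ctrlr enabled they must match, which is what the '[' BTLJ9142051K1P0FGN '!=' BTLJ9142051K1P0FGN ']' comparison above verifies.
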
00:25:47.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:47.777 17:14:47 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:47.777 17:14:47 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:25:47.777 17:14:47 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:47.777 17:14:47 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:47.777 17:14:47 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:47.777 17:14:47 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:47.777 17:14:47 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:47.777 17:14:47 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:47.777 17:14:47 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:47.777 17:14:47 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:47.777 17:14:47 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:47.777 17:14:47 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:47.777 17:14:47 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:47.777 17:14:47 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:25:47.777 17:14:47 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:47.777 17:14:47 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:47.777 17:14:47 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:47.777 17:14:47 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:47.777 17:14:47 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:47.777 17:14:47 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:47.777 17:14:47 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:47.777 17:14:47 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:47.777 17:14:47 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.777 17:14:47 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.777 17:14:47 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.777 17:14:47 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:25:47.777 17:14:47 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.777 17:14:47 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:25:47.777 17:14:47 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:47.777 17:14:47 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:47.777 17:14:47 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:47.777 17:14:47 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:47.777 17:14:47 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:47.777 17:14:47 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:47.777 17:14:47 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:47.777 17:14:47 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:47.777 17:14:47 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:25:47.777 17:14:47 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:25:47.777 17:14:47 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:25:47.777 17:14:47 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:25:47.777 17:14:47 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:25:47.777 17:14:47 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:47.777 17:14:47 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:47.777 17:14:47 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:47.777 17:14:47 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:47.777 17:14:47 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:47.777 17:14:47 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:47.777 17:14:47 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:47.777 17:14:47 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:47.777 17:14:47 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:47.777 17:14:47 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:47.777 17:14:47 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:25:47.777 17:14:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:25:49.672 Found 0000:84:00.0 (0x8086 - 0x159b) 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:25:49.672 Found 0000:84:00.1 (0x8086 - 0x159b) 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:49.672 17:14:49 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:49.673 17:14:49 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:49.673 17:14:49 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:25:49.673 Found net devices under 0000:84:00.0: cvl_0_0 00:25:49.673 17:14:49 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:49.673 17:14:49 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:49.673 17:14:49 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:49.673 17:14:49 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:49.673 17:14:49 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:49.673 17:14:49 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:49.673 17:14:49 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:49.673 17:14:49 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:49.673 17:14:49 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:25:49.673 Found net devices under 0000:84:00.1: cvl_0_1 00:25:49.673 17:14:49 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:49.673 17:14:49 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:49.673 17:14:49 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:25:49.673 17:14:49 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:49.673 17:14:49 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:49.673 17:14:49 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:49.673 17:14:49 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:49.673 17:14:49 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:49.673 17:14:49 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:49.673 17:14:49 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:49.673 17:14:49 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:49.673 17:14:49 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:49.673 17:14:49 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:49.673 17:14:49 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:49.673 17:14:49 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:49.673 17:14:49 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:49.673 17:14:49 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:49.673 17:14:49 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:49.673 17:14:49 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:49.673 17:14:49 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:49.673 17:14:49 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:49.673 17:14:49 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:49.673 17:14:49 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:49.673 17:14:49 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:49.928 17:14:49 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:49.928 17:14:49 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:49.928 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:49.928 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.113 ms 00:25:49.928 00:25:49.928 --- 10.0.0.2 ping statistics --- 00:25:49.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.928 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:25:49.928 17:14:49 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:49.929 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:49.929 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:25:49.929 00:25:49.929 --- 10.0.0.1 ping statistics --- 00:25:49.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.929 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:25:49.929 17:14:49 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:49.929 17:14:49 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:25:49.929 17:14:49 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:25:49.929 17:14:49 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:50.860 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:50.860 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:50.860 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:50.860 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:50.860 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:50.860 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:50.860 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:50.860 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:50.860 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:50.860 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:50.860 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:50.860 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:50.860 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:50.860 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:50.860 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:50.860 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:50.860 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:51.118 17:14:50 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:51.118 17:14:50 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:51.118 17:14:50 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:51.118 17:14:50 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:51.118 17:14:50 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:51.118 17:14:50 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:51.118 17:14:50 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:25:51.118 17:14:50 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:25:51.118 17:14:50 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:51.118 17:14:50 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:51.118 17:14:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:51.118 17:14:50 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1238638 00:25:51.118 17:14:50 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:51.118 17:14:50 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1238638 00:25:51.118 17:14:50 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 1238638 ']' 00:25:51.118 17:14:50 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:51.118 17:14:50 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:51.118 17:14:50 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:51.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:51.118 17:14:50 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:51.118 17:14:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:51.118 [2024-07-12 17:14:50.776529] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:25:51.118 [2024-07-12 17:14:50.776599] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:51.118 EAL: No free 2048 kB hugepages reported on node 1 00:25:51.375 [2024-07-12 17:14:50.839625] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:51.375 [2024-07-12 17:14:50.952436] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:51.375 [2024-07-12 17:14:50.952501] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:51.375 [2024-07-12 17:14:50.952525] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:51.375 [2024-07-12 17:14:50.952537] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:51.375 [2024-07-12 17:14:50.952547] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
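Note: for the dif test the target is relaunched inside the namespace and the transport is created with DIF insert/strip enabled, matching the --dif-insert-or-strip appended to NVMF_TRANSPORT_OPTS at target/dif.sh@136 above. A compressed sketch of that startup from the SPDK repository root, with a simple stand-in loop where the harness would call its waitforlisten helper:

# Launch the target in the namespace; -e 0xFFFF enables all tracepoint groups
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
nvmfpid=$!
# Wait until the RPC socket answers (illustrative stand-in for waitforlisten)
until scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
# TCP transport with DIF insert/strip, as the next entries show via rpc_cmd
scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
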
00:25:51.375 [2024-07-12 17:14:50.952578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:51.375 17:14:51 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:51.375 17:14:51 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:25:51.375 17:14:51 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:51.375 17:14:51 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:51.375 17:14:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:51.632 17:14:51 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:51.632 17:14:51 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:25:51.632 17:14:51 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:25:51.632 17:14:51 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.632 17:14:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:51.632 [2024-07-12 17:14:51.080822] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:51.632 17:14:51 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.632 17:14:51 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:25:51.632 17:14:51 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:51.632 17:14:51 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:51.632 17:14:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:51.632 ************************************ 00:25:51.632 START TEST fio_dif_1_default 00:25:51.632 ************************************ 00:25:51.632 17:14:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:25:51.632 17:14:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:25:51.632 17:14:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:25:51.632 17:14:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:25:51.632 17:14:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:25:51.632 17:14:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:25:51.632 17:14:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:51.632 17:14:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.632 17:14:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:51.632 bdev_null0 00:25:51.632 17:14:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.632 17:14:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:51.632 17:14:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.632 17:14:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:51.632 17:14:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.632 17:14:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:51.632 17:14:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.632 17:14:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:51.632 17:14:51 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.633 17:14:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:51.633 17:14:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.633 17:14:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:51.633 [2024-07-12 17:14:51.137077] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:51.633 17:14:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.633 17:14:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:25:51.633 17:14:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:25:51.633 17:14:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:51.633 17:14:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:25:51.633 17:14:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:25:51.633 17:14:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:51.633 17:14:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:51.633 17:14:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:51.633 { 00:25:51.633 "params": { 00:25:51.633 "name": "Nvme$subsystem", 00:25:51.633 "trtype": "$TEST_TRANSPORT", 00:25:51.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:51.633 "adrfam": "ipv4", 00:25:51.633 "trsvcid": "$NVMF_PORT", 00:25:51.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:51.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:51.633 "hdgst": ${hdgst:-false}, 00:25:51.633 "ddgst": ${ddgst:-false} 00:25:51.633 }, 00:25:51.633 "method": "bdev_nvme_attach_controller" 00:25:51.633 } 00:25:51.633 EOF 00:25:51.633 )") 00:25:51.633 17:14:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:25:51.633 17:14:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:51.633 17:14:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:25:51.633 17:14:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:51.633 17:14:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:25:51.633 17:14:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:51.633 17:14:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:51.633 17:14:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:51.633 17:14:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:25:51.633 17:14:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:51.633 17:14:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:25:51.633 17:14:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:51.633 17:14:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:25:51.633 17:14:51 nvmf_dif.fio_dif_1_default 
-- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:51.633 17:14:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:25:51.633 17:14:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:25:51.633 17:14:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:51.633 17:14:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:25:51.633 17:14:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:25:51.633 17:14:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:51.633 "params": { 00:25:51.633 "name": "Nvme0", 00:25:51.633 "trtype": "tcp", 00:25:51.633 "traddr": "10.0.0.2", 00:25:51.633 "adrfam": "ipv4", 00:25:51.633 "trsvcid": "4420", 00:25:51.633 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:51.633 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:51.633 "hdgst": false, 00:25:51.633 "ddgst": false 00:25:51.633 }, 00:25:51.633 "method": "bdev_nvme_attach_controller" 00:25:51.633 }' 00:25:51.633 17:14:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:51.633 17:14:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:51.633 17:14:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:51.633 17:14:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:51.633 17:14:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:51.633 17:14:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:51.633 17:14:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:51.633 17:14:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:51.633 17:14:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:51.633 17:14:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:51.890 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:51.890 fio-3.35 00:25:51.890 Starting 1 thread 00:25:51.890 EAL: No free 2048 kB hugepages reported on node 1 00:26:04.158 00:26:04.158 filename0: (groupid=0, jobs=1): err= 0: pid=1238863: Fri Jul 12 17:15:02 2024 00:26:04.158 read: IOPS=98, BW=393KiB/s (402kB/s)(3936KiB/10014msec) 00:26:04.158 slat (nsec): min=3931, max=84934, avg=9664.28, stdev=4991.45 00:26:04.158 clat (usec): min=595, max=46234, avg=40674.83, stdev=3645.05 00:26:04.158 lat (usec): min=603, max=46249, avg=40684.49, stdev=3644.98 00:26:04.158 clat percentiles (usec): 00:26:04.158 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:26:04.158 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:26:04.158 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:04.158 | 99.00th=[41681], 99.50th=[41681], 99.90th=[46400], 99.95th=[46400], 00:26:04.159 | 99.99th=[46400] 00:26:04.159 bw ( KiB/s): min= 384, max= 416, per=99.73%, avg=392.00, stdev=14.22, samples=20 00:26:04.159 iops : min= 96, max= 104, avg=98.00, stdev= 3.55, samples=20 00:26:04.159 
lat (usec) : 750=0.81% 00:26:04.159 lat (msec) : 50=99.19% 00:26:04.159 cpu : usr=89.58%, sys=10.16%, ctx=18, majf=0, minf=188 00:26:04.159 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:04.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:04.159 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:04.159 issued rwts: total=984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:04.159 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:04.159 00:26:04.159 Run status group 0 (all jobs): 00:26:04.159 READ: bw=393KiB/s (402kB/s), 393KiB/s-393KiB/s (402kB/s-402kB/s), io=3936KiB (4030kB), run=10014-10014msec 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.159 00:26:04.159 real 0m11.245s 00:26:04.159 user 0m10.175s 00:26:04.159 sys 0m1.284s 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:04.159 ************************************ 00:26:04.159 END TEST fio_dif_1_default 00:26:04.159 ************************************ 00:26:04.159 17:15:02 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:04.159 17:15:02 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:26:04.159 17:15:02 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:04.159 17:15:02 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:04.159 17:15:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:04.159 ************************************ 00:26:04.159 START TEST fio_dif_1_multi_subsystems 00:26:04.159 ************************************ 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:26:04.159 
17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:04.159 bdev_null0 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:04.159 [2024-07-12 17:15:02.420059] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:04.159 bdev_null1 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:04.159 { 00:26:04.159 "params": { 00:26:04.159 "name": "Nvme$subsystem", 00:26:04.159 "trtype": "$TEST_TRANSPORT", 00:26:04.159 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:04.159 "adrfam": "ipv4", 00:26:04.159 "trsvcid": "$NVMF_PORT", 00:26:04.159 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:04.159 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:04.159 "hdgst": ${hdgst:-false}, 00:26:04.159 "ddgst": ${ddgst:-false} 00:26:04.159 }, 00:26:04.159 "method": "bdev_nvme_attach_controller" 00:26:04.159 } 00:26:04.159 EOF 00:26:04.159 )") 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:04.159 
17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:04.159 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:04.159 { 00:26:04.159 "params": { 00:26:04.159 "name": "Nvme$subsystem", 00:26:04.159 "trtype": "$TEST_TRANSPORT", 00:26:04.159 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:04.159 "adrfam": "ipv4", 00:26:04.159 "trsvcid": "$NVMF_PORT", 00:26:04.159 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:04.159 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:04.159 "hdgst": ${hdgst:-false}, 00:26:04.159 "ddgst": ${ddgst:-false} 00:26:04.159 }, 00:26:04.159 "method": "bdev_nvme_attach_controller" 00:26:04.159 } 00:26:04.159 EOF 00:26:04.159 )") 00:26:04.160 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:26:04.160 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:26:04.160 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:26:04.160 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
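The per-subsystem config fragments assembled above are merged by jq and handed to fio's SPDK bdev engine over /dev/fd/62, with the generated job file arriving on /dev/fd/61 and the plugin pulled in through LD_PRELOAD. A minimal standalone sketch of that invocation pattern, using the plugin and fio paths from the trace (bdev.json and jobs.fio are stand-ins for the generated configs, not files from the log):

#!/usr/bin/env bash
# Sketch only: drive fio with the SPDK bdev engine, feeding the merged bdev
# JSON on fd 62 and the fio job file on fd 61, as the fio_bdev wrapper does.
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
LD_PRELOAD="$plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 \
    62<bdev.json 61<jobs.fio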
00:26:04.160 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:26:04.160 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:04.160 "params": { 00:26:04.160 "name": "Nvme0", 00:26:04.160 "trtype": "tcp", 00:26:04.160 "traddr": "10.0.0.2", 00:26:04.160 "adrfam": "ipv4", 00:26:04.160 "trsvcid": "4420", 00:26:04.160 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:04.160 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:04.160 "hdgst": false, 00:26:04.160 "ddgst": false 00:26:04.160 }, 00:26:04.160 "method": "bdev_nvme_attach_controller" 00:26:04.160 },{ 00:26:04.160 "params": { 00:26:04.160 "name": "Nvme1", 00:26:04.160 "trtype": "tcp", 00:26:04.160 "traddr": "10.0.0.2", 00:26:04.160 "adrfam": "ipv4", 00:26:04.160 "trsvcid": "4420", 00:26:04.160 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:04.160 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:04.160 "hdgst": false, 00:26:04.160 "ddgst": false 00:26:04.160 }, 00:26:04.160 "method": "bdev_nvme_attach_controller" 00:26:04.160 }' 00:26:04.160 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:04.160 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:04.160 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:04.160 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:04.160 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:04.160 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:04.160 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:04.160 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:04.160 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:04.160 17:15:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:04.160 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:04.160 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:04.160 fio-3.35 00:26:04.160 Starting 2 threads 00:26:04.160 EAL: No free 2048 kB hugepages reported on node 1 00:26:14.121 00:26:14.121 filename0: (groupid=0, jobs=1): err= 0: pid=1240379: Fri Jul 12 17:15:13 2024 00:26:14.121 read: IOPS=141, BW=568KiB/s (581kB/s)(5696KiB/10034msec) 00:26:14.121 slat (nsec): min=8074, max=41520, avg=9717.39, stdev=2684.11 00:26:14.121 clat (usec): min=503, max=44249, avg=28153.50, stdev=19399.43 00:26:14.121 lat (usec): min=512, max=44278, avg=28163.21, stdev=19399.44 00:26:14.121 clat percentiles (usec): 00:26:14.121 | 1.00th=[ 537], 5.00th=[ 562], 10.00th=[ 578], 20.00th=[ 627], 00:26:14.121 | 30.00th=[ 693], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:26:14.121 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:26:14.121 | 99.00th=[42206], 99.50th=[42730], 99.90th=[44303], 99.95th=[44303], 00:26:14.121 | 99.99th=[44303] 
00:26:14.121 bw ( KiB/s): min= 352, max= 768, per=42.87%, avg=568.00, stdev=185.40, samples=20 00:26:14.121 iops : min= 88, max= 192, avg=142.00, stdev=46.35, samples=20 00:26:14.121 lat (usec) : 750=32.02%, 1000=1.12% 00:26:14.121 lat (msec) : 50=66.85% 00:26:14.121 cpu : usr=93.91%, sys=5.78%, ctx=15, majf=0, minf=140 00:26:14.121 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:14.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:14.121 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:14.121 issued rwts: total=1424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:14.121 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:14.121 filename1: (groupid=0, jobs=1): err= 0: pid=1240380: Fri Jul 12 17:15:13 2024 00:26:14.121 read: IOPS=189, BW=757KiB/s (775kB/s)(7600KiB/10036msec) 00:26:14.121 slat (nsec): min=7824, max=35152, avg=9748.09, stdev=2912.71 00:26:14.121 clat (usec): min=517, max=42548, avg=21096.76, stdev=20499.18 00:26:14.121 lat (usec): min=525, max=42562, avg=21106.51, stdev=20498.96 00:26:14.121 clat percentiles (usec): 00:26:14.122 | 1.00th=[ 545], 5.00th=[ 570], 10.00th=[ 578], 20.00th=[ 603], 00:26:14.122 | 30.00th=[ 644], 40.00th=[ 693], 50.00th=[ 2278], 60.00th=[41157], 00:26:14.122 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:26:14.122 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:26:14.122 | 99.99th=[42730] 00:26:14.122 bw ( KiB/s): min= 704, max= 768, per=57.21%, avg=758.40, stdev=21.02, samples=20 00:26:14.122 iops : min= 176, max= 192, avg=189.60, stdev= 5.26, samples=20 00:26:14.122 lat (usec) : 750=45.89%, 1000=3.79% 00:26:14.122 lat (msec) : 2=0.21%, 4=0.21%, 50=49.89% 00:26:14.122 cpu : usr=93.89%, sys=5.78%, ctx=24, majf=0, minf=135 00:26:14.122 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:14.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:14.122 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:14.122 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:14.122 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:14.122 00:26:14.122 Run status group 0 (all jobs): 00:26:14.122 READ: bw=1325KiB/s (1357kB/s), 568KiB/s-757KiB/s (581kB/s-775kB/s), io=13.0MiB (13.6MB), run=10034-10036msec 00:26:14.122 17:15:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:26:14.122 17:15:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:26:14.122 17:15:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:26:14.122 17:15:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:14.122 17:15:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:26:14.122 17:15:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:14.122 17:15:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.122 17:15:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:14.122 17:15:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.122 17:15:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:14.122 17:15:13 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.122 17:15:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:14.122 17:15:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.122 17:15:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:26:14.122 17:15:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:14.122 17:15:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:26:14.122 17:15:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:14.122 17:15:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.122 17:15:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:14.122 17:15:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.122 17:15:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:14.122 17:15:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.122 17:15:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:14.122 17:15:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.122 00:26:14.122 real 0m11.404s 00:26:14.122 user 0m20.100s 00:26:14.122 sys 0m1.493s 00:26:14.122 17:15:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:14.122 17:15:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:14.122 ************************************ 00:26:14.122 END TEST fio_dif_1_multi_subsystems 00:26:14.122 ************************************ 00:26:14.379 17:15:13 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:14.379 17:15:13 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:26:14.379 17:15:13 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:14.379 17:15:13 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:14.379 17:15:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:14.379 ************************************ 00:26:14.379 START TEST fio_dif_rand_params 00:26:14.380 ************************************ 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub 
in "$@" 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:14.380 bdev_null0 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:14.380 [2024-07-12 17:15:13.881474] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:14.380 { 00:26:14.380 "params": { 00:26:14.380 "name": "Nvme$subsystem", 00:26:14.380 "trtype": "$TEST_TRANSPORT", 00:26:14.380 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:14.380 "adrfam": "ipv4", 00:26:14.380 "trsvcid": "$NVMF_PORT", 00:26:14.380 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:14.380 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:14.380 "hdgst": ${hdgst:-false}, 00:26:14.380 "ddgst": ${ddgst:-false} 00:26:14.380 }, 00:26:14.380 "method": "bdev_nvme_attach_controller" 00:26:14.380 } 
00:26:14.380 EOF 00:26:14.380 )") 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
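The job file produced by gen_fio_conf travels on /dev/fd/61 and is never echoed into the log; judging from the parameters set above (bs=128k, numjobs=3, iodepth=3, runtime=5) and the fio banner that follows, it is roughly equivalent to the sketch below. The bdev name Nvme0n1 is an assumption (controller Nvme0, first namespace) and is not printed in the trace; thread/direct/time_based are likewise assumed defaults.

# Hypothetical reconstruction of the generated job file for this pass.
cat <<'FIO' > jobs.fio
[global]
thread=1
ioengine=spdk_bdev
direct=1
time_based=1
runtime=5
rw=randread
bs=128k
iodepth=3

[filename0]
filename=Nvme0n1
numjobs=3
FIO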
00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:14.380 "params": { 00:26:14.380 "name": "Nvme0", 00:26:14.380 "trtype": "tcp", 00:26:14.380 "traddr": "10.0.0.2", 00:26:14.380 "adrfam": "ipv4", 00:26:14.380 "trsvcid": "4420", 00:26:14.380 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:14.380 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:14.380 "hdgst": false, 00:26:14.380 "ddgst": false 00:26:14.380 }, 00:26:14.380 "method": "bdev_nvme_attach_controller" 00:26:14.380 }' 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:14.380 17:15:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:14.637 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:14.637 ... 
00:26:14.637 fio-3.35 00:26:14.637 Starting 3 threads 00:26:14.637 EAL: No free 2048 kB hugepages reported on node 1 00:26:21.189 00:26:21.189 filename0: (groupid=0, jobs=1): err= 0: pid=1242286: Fri Jul 12 17:15:19 2024 00:26:21.189 read: IOPS=237, BW=29.6MiB/s (31.1MB/s)(148MiB/5004msec) 00:26:21.189 slat (nsec): min=4580, max=81564, avg=13695.67, stdev=4082.72 00:26:21.189 clat (usec): min=4540, max=91180, avg=12637.60, stdev=7202.41 00:26:21.189 lat (usec): min=4552, max=91192, avg=12651.30, stdev=7202.21 00:26:21.189 clat percentiles (usec): 00:26:21.189 | 1.00th=[ 5473], 5.00th=[ 7701], 10.00th=[ 8586], 20.00th=[10028], 00:26:21.189 | 30.00th=[10814], 40.00th=[11338], 50.00th=[11731], 60.00th=[12256], 00:26:21.189 | 70.00th=[12780], 80.00th=[13435], 90.00th=[14746], 95.00th=[15926], 00:26:21.189 | 99.00th=[50594], 99.50th=[52691], 99.90th=[90702], 99.95th=[90702], 00:26:21.189 | 99.99th=[90702] 00:26:21.189 bw ( KiB/s): min=22016, max=36096, per=35.85%, avg=30310.40, stdev=3704.29, samples=10 00:26:21.189 iops : min= 172, max= 282, avg=236.80, stdev=28.94, samples=10 00:26:21.189 lat (msec) : 10=19.48%, 20=77.99%, 50=1.35%, 100=1.18% 00:26:21.189 cpu : usr=90.45%, sys=9.07%, ctx=12, majf=0, minf=130 00:26:21.189 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:21.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.189 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.189 issued rwts: total=1186,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:21.189 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:21.189 filename0: (groupid=0, jobs=1): err= 0: pid=1242287: Fri Jul 12 17:15:19 2024 00:26:21.189 read: IOPS=208, BW=26.1MiB/s (27.4MB/s)(131MiB/5005msec) 00:26:21.189 slat (nsec): min=4197, max=37276, avg=13530.98, stdev=3478.83 00:26:21.189 clat (usec): min=4647, max=93374, avg=14334.66, stdev=9534.73 00:26:21.189 lat (usec): min=4659, max=93387, avg=14348.19, stdev=9534.66 00:26:21.189 clat percentiles (usec): 00:26:21.189 | 1.00th=[ 6063], 5.00th=[ 8848], 10.00th=[ 9896], 20.00th=[11076], 00:26:21.189 | 30.00th=[11469], 40.00th=[11863], 50.00th=[12387], 60.00th=[12780], 00:26:21.189 | 70.00th=[13304], 80.00th=[14353], 90.00th=[15664], 95.00th=[45876], 00:26:21.189 | 99.00th=[52691], 99.50th=[54264], 99.90th=[92799], 99.95th=[93848], 00:26:21.189 | 99.99th=[93848] 00:26:21.189 bw ( KiB/s): min=16640, max=31232, per=31.58%, avg=26700.80, stdev=4879.77, samples=10 00:26:21.189 iops : min= 130, max= 244, avg=208.60, stdev=38.12, samples=10 00:26:21.189 lat (msec) : 10=10.61%, 20=84.32%, 50=2.20%, 100=2.87% 00:26:21.189 cpu : usr=90.63%, sys=8.91%, ctx=10, majf=0, minf=87 00:26:21.189 IO depths : 1=1.3%, 2=98.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:21.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.189 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.189 issued rwts: total=1046,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:21.189 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:21.189 filename0: (groupid=0, jobs=1): err= 0: pid=1242288: Fri Jul 12 17:15:19 2024 00:26:21.189 read: IOPS=218, BW=27.3MiB/s (28.6MB/s)(138MiB/5044msec) 00:26:21.189 slat (nsec): min=4186, max=39440, avg=13352.03, stdev=3287.50 00:26:21.189 clat (usec): min=4621, max=93993, avg=13698.14, stdev=8876.10 00:26:21.189 lat (usec): min=4632, max=94001, avg=13711.49, stdev=8876.07 00:26:21.189 clat percentiles (usec): 
00:26:21.189 | 1.00th=[ 5080], 5.00th=[ 5997], 10.00th=[ 8029], 20.00th=[ 9896], 00:26:21.190 | 30.00th=[11469], 40.00th=[12256], 50.00th=[12780], 60.00th=[13304], 00:26:21.190 | 70.00th=[14091], 80.00th=[14877], 90.00th=[16057], 95.00th=[17171], 00:26:21.190 | 99.00th=[52691], 99.50th=[86508], 99.90th=[93848], 99.95th=[93848], 00:26:21.190 | 99.99th=[93848] 00:26:21.190 bw ( KiB/s): min=20264, max=35328, per=33.25%, avg=28112.80, stdev=3600.21, samples=10 00:26:21.190 iops : min= 158, max= 276, avg=219.60, stdev=28.20, samples=10 00:26:21.190 lat (msec) : 10=20.36%, 20=76.45%, 50=1.00%, 100=2.18% 00:26:21.190 cpu : usr=90.18%, sys=9.34%, ctx=14, majf=0, minf=91 00:26:21.190 IO depths : 1=1.4%, 2=98.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:21.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.190 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.190 issued rwts: total=1100,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:21.190 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:21.190 00:26:21.190 Run status group 0 (all jobs): 00:26:21.190 READ: bw=82.6MiB/s (86.6MB/s), 26.1MiB/s-29.6MiB/s (27.4MB/s-31.1MB/s), io=417MiB (437MB), run=5004-5044msec 00:26:21.190 17:15:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:26:21.190 17:15:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:21.190 17:15:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:21.190 17:15:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:21.190 17:15:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:21.190 17:15:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:21.190 17:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.190 17:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:21.190 17:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.190 17:15:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:21.190 17:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.190 17:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:21.190 17:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.190 17:15:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:26:21.190 17:15:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:26:21.190 17:15:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:26:21.190 17:15:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:26:21.190 17:15:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:26:21.190 17:15:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:26:21.190 17:15:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:26:21.190 17:15:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:21.190 17:15:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:21.190 17:15:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:21.190 17:15:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
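The create_subsystem calls traced below reduce to four RPCs per subsystem index: create a DIF-enabled null bdev, create the NVMe-oF subsystem, attach the bdev as a namespace, and add a TCP listener. A minimal standalone sketch for this dif-type-2 pass; the NQNs, bdev geometry and the 10.0.0.2:4420 listener are taken from the trace, while calling scripts/rpc.py directly is an assumption (the test goes through its rpc_cmd wrapper instead):

rpc=./scripts/rpc.py
for sub_id in 0 1 2; do
    # 64 MiB null bdev, 512-byte blocks, 16-byte metadata, DIF type 2
    $rpc bdev_null_create "bdev_null${sub_id}" 64 512 --md-size 16 --dif-type 2
    $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode${sub_id}" \
        --serial-number "53313233-${sub_id}" --allow-any-host
    $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode${sub_id}" "bdev_null${sub_id}"
    $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode${sub_id}" \
        -t tcp -a 10.0.0.2 -s 4420
done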
00:26:21.190 17:15:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:26:21.190 17:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.190 17:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:21.190 bdev_null0 00:26:21.190 17:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.190 17:15:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:21.190 17:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.190 17:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:21.190 17:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.190 17:15:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:21.190 17:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.190 17:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:21.190 17:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.190 17:15:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:21.190 17:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.190 17:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:21.190 [2024-07-12 17:15:19.980283] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:21.190 17:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.190 17:15:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:21.190 17:15:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:26:21.190 17:15:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:26:21.190 17:15:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:26:21.190 17:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.190 17:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:21.190 bdev_null1 00:26:21.190 17:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.190 17:15:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:21.190 17:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.190 17:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:21.190 17:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.190 17:15:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:21.190 17:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.190 17:15:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:26:21.190 17:15:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.190 17:15:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:21.190 17:15:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.190 17:15:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:21.190 17:15:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.190 17:15:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:21.190 17:15:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:26:21.190 17:15:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:26:21.190 17:15:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:26:21.190 17:15:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.190 17:15:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:21.191 bdev_null2 00:26:21.191 17:15:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.191 17:15:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:26:21.191 17:15:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.191 17:15:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:21.191 17:15:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.191 17:15:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:26:21.191 17:15:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.191 17:15:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:21.191 17:15:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.191 17:15:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:21.191 17:15:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.191 17:15:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:21.191 17:15:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.191 17:15:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:26:21.191 17:15:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:26:21.191 17:15:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:26:21.191 17:15:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:21.191 17:15:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:21.191 17:15:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:21.191 17:15:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:21.191 { 00:26:21.191 "params": { 00:26:21.191 "name": "Nvme$subsystem", 00:26:21.191 "trtype": "$TEST_TRANSPORT", 00:26:21.191 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:26:21.191 "adrfam": "ipv4", 00:26:21.191 "trsvcid": "$NVMF_PORT", 00:26:21.191 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:21.191 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:21.191 "hdgst": ${hdgst:-false}, 00:26:21.191 "ddgst": ${ddgst:-false} 00:26:21.191 }, 00:26:21.191 "method": "bdev_nvme_attach_controller" 00:26:21.191 } 00:26:21.191 EOF 00:26:21.191 )") 00:26:21.191 17:15:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:21.191 17:15:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:21.191 17:15:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:21.191 17:15:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:21.191 17:15:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:21.191 17:15:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:21.191 17:15:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:21.191 17:15:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:21.191 17:15:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:21.191 17:15:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:21.191 17:15:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:21.191 17:15:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:21.191 17:15:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:21.191 17:15:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:21.191 17:15:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:21.191 17:15:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:21.191 17:15:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:21.191 17:15:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:21.191 17:15:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:21.191 17:15:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:21.191 17:15:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:21.191 { 00:26:21.191 "params": { 00:26:21.191 "name": "Nvme$subsystem", 00:26:21.191 "trtype": "$TEST_TRANSPORT", 00:26:21.191 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:21.191 "adrfam": "ipv4", 00:26:21.191 "trsvcid": "$NVMF_PORT", 00:26:21.191 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:21.191 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:21.191 "hdgst": ${hdgst:-false}, 00:26:21.191 "ddgst": ${ddgst:-false} 00:26:21.191 }, 00:26:21.191 "method": "bdev_nvme_attach_controller" 00:26:21.191 } 00:26:21.191 EOF 00:26:21.191 )") 00:26:21.191 17:15:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:21.191 17:15:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- 
# (( file++ )) 00:26:21.191 17:15:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:21.191 17:15:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:21.191 17:15:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:21.191 17:15:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:21.191 { 00:26:21.191 "params": { 00:26:21.191 "name": "Nvme$subsystem", 00:26:21.191 "trtype": "$TEST_TRANSPORT", 00:26:21.191 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:21.191 "adrfam": "ipv4", 00:26:21.191 "trsvcid": "$NVMF_PORT", 00:26:21.191 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:21.191 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:21.191 "hdgst": ${hdgst:-false}, 00:26:21.191 "ddgst": ${ddgst:-false} 00:26:21.191 }, 00:26:21.191 "method": "bdev_nvme_attach_controller" 00:26:21.191 } 00:26:21.191 EOF 00:26:21.191 )") 00:26:21.191 17:15:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:21.191 17:15:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:21.191 17:15:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:21.191 17:15:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:26:21.191 17:15:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:21.191 17:15:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:21.191 "params": { 00:26:21.191 "name": "Nvme0", 00:26:21.191 "trtype": "tcp", 00:26:21.191 "traddr": "10.0.0.2", 00:26:21.191 "adrfam": "ipv4", 00:26:21.191 "trsvcid": "4420", 00:26:21.191 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:21.191 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:21.191 "hdgst": false, 00:26:21.192 "ddgst": false 00:26:21.192 }, 00:26:21.192 "method": "bdev_nvme_attach_controller" 00:26:21.192 },{ 00:26:21.192 "params": { 00:26:21.192 "name": "Nvme1", 00:26:21.192 "trtype": "tcp", 00:26:21.192 "traddr": "10.0.0.2", 00:26:21.192 "adrfam": "ipv4", 00:26:21.192 "trsvcid": "4420", 00:26:21.192 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:21.192 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:21.192 "hdgst": false, 00:26:21.192 "ddgst": false 00:26:21.192 }, 00:26:21.192 "method": "bdev_nvme_attach_controller" 00:26:21.192 },{ 00:26:21.192 "params": { 00:26:21.192 "name": "Nvme2", 00:26:21.192 "trtype": "tcp", 00:26:21.192 "traddr": "10.0.0.2", 00:26:21.192 "adrfam": "ipv4", 00:26:21.192 "trsvcid": "4420", 00:26:21.192 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:21.192 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:21.192 "hdgst": false, 00:26:21.192 "ddgst": false 00:26:21.192 }, 00:26:21.192 "method": "bdev_nvme_attach_controller" 00:26:21.192 }' 00:26:21.192 17:15:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:21.192 17:15:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:21.192 17:15:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:21.192 17:15:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:21.192 17:15:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:21.192 17:15:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:21.192 17:15:20 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:26:21.192 17:15:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:21.192 17:15:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:21.192 17:15:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:21.192 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:21.192 ... 00:26:21.192 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:21.192 ... 00:26:21.192 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:21.192 ... 00:26:21.192 fio-3.35 00:26:21.192 Starting 24 threads 00:26:21.192 EAL: No free 2048 kB hugepages reported on node 1 00:26:33.389 00:26:33.389 filename0: (groupid=0, jobs=1): err= 0: pid=1243151: Fri Jul 12 17:15:31 2024 00:26:33.389 read: IOPS=250, BW=1002KiB/s (1026kB/s)(9.88MiB/10090msec) 00:26:33.389 slat (nsec): min=8337, max=65370, avg=21961.03, stdev=10862.02 00:26:33.389 clat (msec): min=12, max=327, avg=63.67, stdev=74.98 00:26:33.389 lat (msec): min=12, max=327, avg=63.69, stdev=74.98 00:26:33.389 clat percentiles (msec): 00:26:33.389 | 1.00th=[ 23], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:33.389 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:26:33.389 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 249], 95.00th=[ 253], 00:26:33.389 | 99.00th=[ 266], 99.50th=[ 275], 99.90th=[ 275], 99.95th=[ 326], 00:26:33.389 | 99.99th=[ 326] 00:26:33.389 bw ( KiB/s): min= 240, max= 2048, per=4.29%, avg=1004.80, stdev=850.37, samples=20 00:26:33.389 iops : min= 60, max= 512, avg=251.20, stdev=212.59, samples=20 00:26:33.389 lat (msec) : 20=0.63%, 50=84.81%, 250=5.54%, 500=9.02% 00:26:33.389 cpu : usr=97.29%, sys=1.75%, ctx=60, majf=0, minf=63 00:26:33.389 IO depths : 1=5.4%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.1%, 32=0.0%, >=64=0.0% 00:26:33.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.389 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.389 issued rwts: total=2528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.389 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:33.389 filename0: (groupid=0, jobs=1): err= 0: pid=1243152: Fri Jul 12 17:15:31 2024 00:26:33.389 read: IOPS=243, BW=973KiB/s (996kB/s)(9792KiB/10068msec) 00:26:33.389 slat (nsec): min=3860, max=65941, avg=30704.37, stdev=10408.62 00:26:33.389 clat (msec): min=26, max=360, avg=65.25, stdev=80.63 00:26:33.389 lat (msec): min=26, max=360, avg=65.28, stdev=80.62 00:26:33.389 clat percentiles (msec): 00:26:33.389 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:33.389 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:26:33.389 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 249], 95.00th=[ 262], 00:26:33.389 | 99.00th=[ 326], 99.50th=[ 359], 99.90th=[ 359], 99.95th=[ 359], 00:26:33.389 | 99.99th=[ 359] 00:26:33.389 bw ( KiB/s): min= 128, max= 1920, per=4.16%, avg=972.80, stdev=842.21, samples=20 00:26:33.389 iops : min= 32, max= 480, avg=243.20, stdev=210.55, samples=20 00:26:33.389 lat (msec) : 50=85.62%, 100=0.65%, 250=3.84%, 500=9.89% 00:26:33.389 cpu : usr=98.21%, sys=1.36%, ctx=15, majf=0, 
minf=42 00:26:33.389 IO depths : 1=5.6%, 2=11.8%, 4=25.0%, 8=50.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:26:33.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.389 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.389 issued rwts: total=2448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.389 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:33.389 filename0: (groupid=0, jobs=1): err= 0: pid=1243153: Fri Jul 12 17:15:31 2024 00:26:33.389 read: IOPS=250, BW=1003KiB/s (1027kB/s)(9.88MiB/10091msec) 00:26:33.389 slat (nsec): min=5706, max=52162, avg=21543.93, stdev=9976.40 00:26:33.389 clat (msec): min=9, max=371, avg=63.56, stdev=76.02 00:26:33.389 lat (msec): min=9, max=371, avg=63.59, stdev=76.02 00:26:33.389 clat percentiles (msec): 00:26:33.389 | 1.00th=[ 16], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:33.389 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:26:33.389 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 251], 95.00th=[ 257], 00:26:33.389 | 99.00th=[ 266], 99.50th=[ 330], 99.90th=[ 372], 99.95th=[ 372], 00:26:33.389 | 99.99th=[ 372] 00:26:33.389 bw ( KiB/s): min= 224, max= 2048, per=4.30%, avg=1005.60, stdev=850.72, samples=20 00:26:33.389 iops : min= 56, max= 512, avg=251.40, stdev=212.68, samples=20 00:26:33.389 lat (msec) : 10=0.43%, 20=0.83%, 50=84.11%, 250=3.64%, 500=10.99% 00:26:33.389 cpu : usr=98.23%, sys=1.36%, ctx=20, majf=0, minf=38 00:26:33.389 IO depths : 1=5.3%, 2=10.6%, 4=22.6%, 8=54.3%, 16=7.2%, 32=0.0%, >=64=0.0% 00:26:33.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.389 complete : 0=0.0%, 4=93.4%, 8=0.8%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.389 issued rwts: total=2530,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.389 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:33.389 filename0: (groupid=0, jobs=1): err= 0: pid=1243154: Fri Jul 12 17:15:31 2024 00:26:33.389 read: IOPS=248, BW=994KiB/s (1018kB/s)(9.80MiB/10089msec) 00:26:33.389 slat (nsec): min=4747, max=76955, avg=25600.28, stdev=13029.38 00:26:33.389 clat (msec): min=9, max=384, avg=63.96, stdev=78.54 00:26:33.389 lat (msec): min=9, max=384, avg=63.99, stdev=78.54 00:26:33.389 clat percentiles (msec): 00:26:33.389 | 1.00th=[ 25], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:33.389 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:26:33.389 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 251], 95.00th=[ 259], 00:26:33.389 | 99.00th=[ 305], 99.50th=[ 363], 99.90th=[ 384], 99.95th=[ 384], 00:26:33.389 | 99.99th=[ 384] 00:26:33.389 bw ( KiB/s): min= 128, max= 1920, per=4.26%, avg=996.80, stdev=857.36, samples=20 00:26:33.389 iops : min= 32, max= 480, avg=249.20, stdev=214.34, samples=20 00:26:33.389 lat (msec) : 10=0.08%, 20=0.56%, 50=85.49%, 250=4.03%, 500=9.85% 00:26:33.389 cpu : usr=98.12%, sys=1.42%, ctx=15, majf=0, minf=35 00:26:33.389 IO depths : 1=5.5%, 2=11.1%, 4=23.2%, 8=53.1%, 16=7.1%, 32=0.0%, >=64=0.0% 00:26:33.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.389 complete : 0=0.0%, 4=93.6%, 8=0.6%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.389 issued rwts: total=2508,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.389 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:33.389 filename0: (groupid=0, jobs=1): err= 0: pid=1243155: Fri Jul 12 17:15:31 2024 00:26:33.389 read: IOPS=242, BW=972KiB/s (995kB/s)(9792KiB/10077msec) 00:26:33.389 slat (nsec): min=8452, max=74119, avg=32008.90, 
stdev=11585.85 00:26:33.389 clat (msec): min=32, max=407, avg=65.43, stdev=84.30 00:26:33.389 lat (msec): min=32, max=407, avg=65.47, stdev=84.29 00:26:33.389 clat percentiles (msec): 00:26:33.389 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:33.389 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:26:33.389 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 251], 95.00th=[ 262], 00:26:33.389 | 99.00th=[ 363], 99.50th=[ 363], 99.90th=[ 384], 99.95th=[ 409], 00:26:33.389 | 99.99th=[ 409] 00:26:33.389 bw ( KiB/s): min= 128, max= 1920, per=4.16%, avg=972.80, stdev=854.29, samples=20 00:26:33.389 iops : min= 32, max= 480, avg=243.20, stdev=213.57, samples=20 00:26:33.389 lat (msec) : 50=86.27%, 100=0.65%, 250=2.86%, 500=10.21% 00:26:33.389 cpu : usr=98.42%, sys=1.17%, ctx=16, majf=0, minf=33 00:26:33.389 IO depths : 1=5.7%, 2=11.5%, 4=23.8%, 8=52.2%, 16=6.8%, 32=0.0%, >=64=0.0% 00:26:33.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.389 complete : 0=0.0%, 4=93.7%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.389 issued rwts: total=2448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.389 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:33.389 filename0: (groupid=0, jobs=1): err= 0: pid=1243156: Fri Jul 12 17:15:31 2024 00:26:33.389 read: IOPS=246, BW=987KiB/s (1010kB/s)(9944KiB/10077msec) 00:26:33.389 slat (usec): min=8, max=110, avg=47.40, stdev=26.52 00:26:33.389 clat (msec): min=21, max=367, avg=64.18, stdev=77.46 00:26:33.389 lat (msec): min=21, max=367, avg=64.22, stdev=77.44 00:26:33.389 clat percentiles (msec): 00:26:33.389 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:33.389 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:26:33.389 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 251], 95.00th=[ 255], 00:26:33.389 | 99.00th=[ 321], 99.50th=[ 321], 99.90th=[ 368], 99.95th=[ 368], 00:26:33.389 | 99.99th=[ 368] 00:26:33.389 bw ( KiB/s): min= 176, max= 1920, per=4.23%, avg=988.00, stdev=839.51, samples=20 00:26:33.389 iops : min= 44, max= 480, avg=247.00, stdev=209.88, samples=20 00:26:33.389 lat (msec) : 50=85.60%, 250=4.02%, 500=10.38% 00:26:33.389 cpu : usr=98.50%, sys=1.06%, ctx=14, majf=0, minf=42 00:26:33.389 IO depths : 1=5.7%, 2=11.9%, 4=24.9%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:26:33.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.389 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.389 issued rwts: total=2486,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.390 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:33.390 filename0: (groupid=0, jobs=1): err= 0: pid=1243157: Fri Jul 12 17:15:31 2024 00:26:33.390 read: IOPS=243, BW=972KiB/s (995kB/s)(9792KiB/10073msec) 00:26:33.390 slat (nsec): min=3768, max=66913, avg=31458.13, stdev=11762.58 00:26:33.390 clat (msec): min=20, max=506, avg=65.27, stdev=83.84 00:26:33.390 lat (msec): min=20, max=506, avg=65.30, stdev=83.83 00:26:33.390 clat percentiles (msec): 00:26:33.390 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:33.390 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:26:33.390 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 251], 95.00th=[ 262], 00:26:33.390 | 99.00th=[ 359], 99.50th=[ 363], 99.90th=[ 430], 99.95th=[ 506], 00:26:33.390 | 99.99th=[ 506] 00:26:33.390 bw ( KiB/s): min= 128, max= 1920, per=4.16%, avg=972.95, stdev=854.56, samples=20 00:26:33.390 iops : min= 32, max= 480, avg=243.20, 
stdev=213.60, samples=20 00:26:33.390 lat (msec) : 50=86.27%, 100=0.65%, 250=2.94%, 500=10.05%, 750=0.08% 00:26:33.390 cpu : usr=98.42%, sys=1.17%, ctx=13, majf=0, minf=35 00:26:33.390 IO depths : 1=5.5%, 2=11.7%, 4=24.9%, 8=50.9%, 16=7.0%, 32=0.0%, >=64=0.0% 00:26:33.390 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.390 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.390 issued rwts: total=2448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.390 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:33.390 filename0: (groupid=0, jobs=1): err= 0: pid=1243158: Fri Jul 12 17:15:31 2024 00:26:33.390 read: IOPS=246, BW=984KiB/s (1008kB/s)(9920KiB/10079msec) 00:26:33.390 slat (nsec): min=8273, max=65568, avg=30573.58, stdev=11541.17 00:26:33.390 clat (msec): min=26, max=343, avg=64.72, stdev=78.75 00:26:33.390 lat (msec): min=26, max=343, avg=64.75, stdev=78.74 00:26:33.390 clat percentiles (msec): 00:26:33.390 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:33.390 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:26:33.390 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 251], 95.00th=[ 257], 00:26:33.390 | 99.00th=[ 330], 99.50th=[ 334], 99.90th=[ 334], 99.95th=[ 342], 00:26:33.390 | 99.99th=[ 342] 00:26:33.390 bw ( KiB/s): min= 144, max= 2048, per=4.21%, avg=985.60, stdev=842.93, samples=20 00:26:33.390 iops : min= 36, max= 512, avg=246.40, stdev=210.73, samples=20 00:26:33.390 lat (msec) : 50=85.16%, 100=1.05%, 250=2.54%, 500=11.25% 00:26:33.390 cpu : usr=98.31%, sys=1.27%, ctx=15, majf=0, minf=49 00:26:33.390 IO depths : 1=5.4%, 2=11.2%, 4=23.7%, 8=52.7%, 16=7.1%, 32=0.0%, >=64=0.0% 00:26:33.390 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.390 complete : 0=0.0%, 4=93.7%, 8=0.5%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.390 issued rwts: total=2480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.390 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:33.390 filename1: (groupid=0, jobs=1): err= 0: pid=1243159: Fri Jul 12 17:15:31 2024 00:26:33.390 read: IOPS=248, BW=996KiB/s (1020kB/s)(9.81MiB/10089msec) 00:26:33.390 slat (nsec): min=4233, max=65103, avg=15715.07, stdev=9352.29 00:26:33.390 clat (msec): min=9, max=434, avg=63.86, stdev=77.22 00:26:33.390 lat (msec): min=9, max=434, avg=63.88, stdev=77.22 00:26:33.390 clat percentiles (msec): 00:26:33.390 | 1.00th=[ 25], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:33.390 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:26:33.390 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 249], 95.00th=[ 257], 00:26:33.390 | 99.00th=[ 266], 99.50th=[ 363], 99.90th=[ 363], 99.95th=[ 435], 00:26:33.390 | 99.99th=[ 435] 00:26:33.390 bw ( KiB/s): min= 128, max= 1920, per=4.27%, avg=998.40, stdev=856.50, samples=20 00:26:33.390 iops : min= 32, max= 480, avg=249.60, stdev=214.13, samples=20 00:26:33.390 lat (msec) : 10=0.08%, 20=0.56%, 50=85.35%, 250=5.02%, 500=9.00% 00:26:33.390 cpu : usr=98.12%, sys=1.48%, ctx=15, majf=0, minf=42 00:26:33.390 IO depths : 1=5.4%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.1%, 32=0.0%, >=64=0.0% 00:26:33.390 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.390 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.390 issued rwts: total=2512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.390 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:33.390 filename1: (groupid=0, jobs=1): err= 0: pid=1243160: Fri 
Jul 12 17:15:31 2024 00:26:33.390 read: IOPS=252, BW=1008KiB/s (1033kB/s)(9.94MiB/10091msec) 00:26:33.390 slat (nsec): min=3963, max=50814, avg=16889.75, stdev=8494.29 00:26:33.390 clat (msec): min=5, max=304, avg=63.06, stdev=74.84 00:26:33.390 lat (msec): min=5, max=304, avg=63.08, stdev=74.83 00:26:33.390 clat percentiles (msec): 00:26:33.390 | 1.00th=[ 7], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:33.390 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:26:33.390 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 249], 95.00th=[ 255], 00:26:33.390 | 99.00th=[ 266], 99.50th=[ 266], 99.90th=[ 266], 99.95th=[ 305], 00:26:33.390 | 99.99th=[ 305] 00:26:33.390 bw ( KiB/s): min= 144, max= 2176, per=4.32%, avg=1011.20, stdev=860.82, samples=20 00:26:33.390 iops : min= 36, max= 544, avg=252.80, stdev=215.21, samples=20 00:26:33.390 lat (msec) : 10=1.22%, 20=0.67%, 50=83.65%, 250=5.50%, 500=8.96% 00:26:33.390 cpu : usr=98.17%, sys=1.42%, ctx=14, majf=0, minf=43 00:26:33.390 IO depths : 1=5.3%, 2=11.6%, 4=24.8%, 8=51.1%, 16=7.2%, 32=0.0%, >=64=0.0% 00:26:33.390 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.390 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.390 issued rwts: total=2544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.390 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:33.390 filename1: (groupid=0, jobs=1): err= 0: pid=1243161: Fri Jul 12 17:15:31 2024 00:26:33.390 read: IOPS=235, BW=941KiB/s (964kB/s)(9472KiB/10064msec) 00:26:33.390 slat (nsec): min=8458, max=69623, avg=31118.30, stdev=11485.45 00:26:33.390 clat (msec): min=32, max=519, avg=67.70, stdev=100.64 00:26:33.390 lat (msec): min=32, max=519, avg=67.73, stdev=100.64 00:26:33.390 clat percentiles (msec): 00:26:33.390 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:33.390 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:26:33.390 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 251], 95.00th=[ 368], 00:26:33.390 | 99.00th=[ 409], 99.50th=[ 409], 99.90th=[ 518], 99.95th=[ 518], 00:26:33.390 | 99.99th=[ 518] 00:26:33.390 bw ( KiB/s): min= 128, max= 1920, per=4.02%, avg=940.95, stdev=872.51, samples=20 00:26:33.390 iops : min= 32, max= 480, avg=235.20, stdev=218.09, samples=20 00:26:33.390 lat (msec) : 50=88.51%, 100=0.68%, 250=0.84%, 500=9.71%, 750=0.25% 00:26:33.390 cpu : usr=97.86%, sys=1.52%, ctx=42, majf=0, minf=34 00:26:33.390 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:26:33.390 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.390 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.390 issued rwts: total=2368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.390 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:33.390 filename1: (groupid=0, jobs=1): err= 0: pid=1243162: Fri Jul 12 17:15:31 2024 00:26:33.390 read: IOPS=235, BW=941KiB/s (963kB/s)(9464KiB/10062msec) 00:26:33.390 slat (usec): min=8, max=100, avg=47.06, stdev=24.14 00:26:33.390 clat (msec): min=31, max=506, avg=67.57, stdev=100.84 00:26:33.390 lat (msec): min=31, max=506, avg=67.61, stdev=100.83 00:26:33.390 clat percentiles (msec): 00:26:33.390 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:33.390 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:26:33.390 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 262], 95.00th=[ 368], 00:26:33.390 | 99.00th=[ 397], 99.50th=[ 409], 99.90th=[ 506], 99.95th=[ 506], 
00:26:33.390 | 99.99th=[ 506] 00:26:33.390 bw ( KiB/s): min= 128, max= 1923, per=4.02%, avg=940.30, stdev=873.40, samples=20 00:26:33.390 iops : min= 32, max= 480, avg=235.00, stdev=218.26, samples=20 00:26:33.390 lat (msec) : 50=88.59%, 100=0.68%, 250=0.59%, 500=9.89%, 750=0.25% 00:26:33.390 cpu : usr=98.12%, sys=1.46%, ctx=13, majf=0, minf=49 00:26:33.390 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:26:33.390 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.390 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.390 issued rwts: total=2366,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.390 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:33.390 filename1: (groupid=0, jobs=1): err= 0: pid=1243163: Fri Jul 12 17:15:31 2024 00:26:33.390 read: IOPS=242, BW=970KiB/s (993kB/s)(9776KiB/10077msec) 00:26:33.390 slat (nsec): min=8433, max=65853, avg=32058.72, stdev=11322.02 00:26:33.390 clat (msec): min=32, max=391, avg=65.51, stdev=85.12 00:26:33.390 lat (msec): min=32, max=391, avg=65.54, stdev=85.12 00:26:33.390 clat percentiles (msec): 00:26:33.390 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:33.390 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:26:33.390 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 251], 95.00th=[ 266], 00:26:33.390 | 99.00th=[ 363], 99.50th=[ 368], 99.90th=[ 393], 99.95th=[ 393], 00:26:33.390 | 99.99th=[ 393] 00:26:33.390 bw ( KiB/s): min= 128, max= 1920, per=4.15%, avg=971.20, stdev=855.76, samples=20 00:26:33.390 iops : min= 32, max= 480, avg=242.80, stdev=213.94, samples=20 00:26:33.390 lat (msec) : 50=86.42%, 100=0.65%, 250=2.78%, 500=10.15% 00:26:33.390 cpu : usr=97.01%, sys=1.85%, ctx=112, majf=0, minf=25 00:26:33.390 IO depths : 1=5.8%, 2=11.7%, 4=23.9%, 8=51.9%, 16=6.7%, 32=0.0%, >=64=0.0% 00:26:33.390 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.390 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.390 issued rwts: total=2444,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.390 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:33.390 filename1: (groupid=0, jobs=1): err= 0: pid=1243164: Fri Jul 12 17:15:31 2024 00:26:33.390 read: IOPS=243, BW=974KiB/s (997kB/s)(9816KiB/10077msec) 00:26:33.390 slat (usec): min=8, max=123, avg=50.48, stdev=24.34 00:26:33.390 clat (msec): min=31, max=414, avg=64.94, stdev=82.40 00:26:33.390 lat (msec): min=31, max=414, avg=64.99, stdev=82.39 00:26:33.390 clat percentiles (msec): 00:26:33.390 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:33.390 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:26:33.390 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 253], 95.00th=[ 257], 00:26:33.390 | 99.00th=[ 342], 99.50th=[ 363], 99.90th=[ 363], 99.95th=[ 414], 00:26:33.390 | 99.99th=[ 414] 00:26:33.390 bw ( KiB/s): min= 128, max= 1920, per=4.19%, avg=979.20, stdev=848.21, samples=20 00:26:33.390 iops : min= 32, max= 480, avg=244.80, stdev=212.05, samples=20 00:26:33.390 lat (msec) : 50=86.06%, 100=0.65%, 250=1.87%, 500=11.41% 00:26:33.390 cpu : usr=97.70%, sys=1.65%, ctx=48, majf=0, minf=38 00:26:33.390 IO depths : 1=5.5%, 2=11.2%, 4=23.3%, 8=52.9%, 16=7.0%, 32=0.0%, >=64=0.0% 00:26:33.390 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.390 complete : 0=0.0%, 4=93.6%, 8=0.6%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.390 issued rwts: total=2454,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:26:33.390 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:33.391 filename1: (groupid=0, jobs=1): err= 0: pid=1243165: Fri Jul 12 17:15:31 2024 00:26:33.391 read: IOPS=243, BW=975KiB/s (998kB/s)(9816KiB/10068msec) 00:26:33.391 slat (nsec): min=5361, max=50313, avg=24335.00, stdev=10455.05 00:26:33.391 clat (msec): min=32, max=427, avg=65.35, stdev=82.02 00:26:33.391 lat (msec): min=32, max=427, avg=65.38, stdev=82.02 00:26:33.391 clat percentiles (msec): 00:26:33.391 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:33.391 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:26:33.391 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 251], 95.00th=[ 262], 00:26:33.391 | 99.00th=[ 359], 99.50th=[ 359], 99.90th=[ 368], 99.95th=[ 426], 00:26:33.391 | 99.99th=[ 430] 00:26:33.391 bw ( KiB/s): min= 128, max= 1920, per=4.17%, avg=975.20, stdev=838.77, samples=20 00:26:33.391 iops : min= 32, max= 480, avg=243.80, stdev=209.69, samples=20 00:26:33.391 lat (msec) : 50=85.41%, 100=0.90%, 250=3.02%, 500=10.68% 00:26:33.391 cpu : usr=97.64%, sys=1.68%, ctx=80, majf=0, minf=40 00:26:33.391 IO depths : 1=5.7%, 2=11.9%, 4=24.6%, 8=51.1%, 16=6.8%, 32=0.0%, >=64=0.0% 00:26:33.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.391 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.391 issued rwts: total=2454,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.391 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:33.391 filename1: (groupid=0, jobs=1): err= 0: pid=1243166: Fri Jul 12 17:15:31 2024 00:26:33.391 read: IOPS=236, BW=946KiB/s (968kB/s)(9472KiB/10017msec) 00:26:33.391 slat (nsec): min=5150, max=61527, avg=30871.44, stdev=10602.00 00:26:33.391 clat (msec): min=32, max=521, avg=67.40, stdev=100.14 00:26:33.391 lat (msec): min=32, max=521, avg=67.43, stdev=100.13 00:26:33.391 clat percentiles (msec): 00:26:33.391 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:33.391 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:26:33.391 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 249], 95.00th=[ 372], 00:26:33.391 | 99.00th=[ 405], 99.50th=[ 426], 99.90th=[ 514], 99.95th=[ 523], 00:26:33.391 | 99.99th=[ 523] 00:26:33.391 bw ( KiB/s): min= 112, max= 1920, per=4.02%, avg=940.80, stdev=872.39, samples=20 00:26:33.391 iops : min= 28, max= 480, avg=235.20, stdev=218.10, samples=20 00:26:33.391 lat (msec) : 50=88.51%, 100=0.68%, 250=1.52%, 500=8.87%, 750=0.42% 00:26:33.391 cpu : usr=98.49%, sys=1.10%, ctx=13, majf=0, minf=28 00:26:33.391 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:26:33.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.391 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.391 issued rwts: total=2368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.391 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:33.391 filename2: (groupid=0, jobs=1): err= 0: pid=1243167: Fri Jul 12 17:15:31 2024 00:26:33.391 read: IOPS=247, BW=991KiB/s (1015kB/s)(9984KiB/10077msec) 00:26:33.391 slat (usec): min=7, max=106, avg=31.62, stdev=18.57 00:26:33.391 clat (msec): min=26, max=275, avg=64.32, stdev=75.17 00:26:33.391 lat (msec): min=26, max=275, avg=64.35, stdev=75.16 00:26:33.391 clat percentiles (msec): 00:26:33.391 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:33.391 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 
00:26:33.391 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 249], 95.00th=[ 255], 00:26:33.391 | 99.00th=[ 266], 99.50th=[ 275], 99.90th=[ 275], 99.95th=[ 275], 00:26:33.391 | 99.99th=[ 275] 00:26:33.391 bw ( KiB/s): min= 256, max= 2048, per=4.24%, avg=992.00, stdev=836.65, samples=20 00:26:33.391 iops : min= 64, max= 512, avg=248.00, stdev=209.16, samples=20 00:26:33.391 lat (msec) : 50=84.62%, 100=0.64%, 250=5.77%, 500=8.97% 00:26:33.391 cpu : usr=97.73%, sys=1.52%, ctx=48, majf=0, minf=47 00:26:33.391 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:33.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.391 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.391 issued rwts: total=2496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.391 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:33.391 filename2: (groupid=0, jobs=1): err= 0: pid=1243168: Fri Jul 12 17:15:31 2024 00:26:33.391 read: IOPS=243, BW=973KiB/s (996kB/s)(9792KiB/10064msec) 00:26:33.391 slat (nsec): min=5270, max=64573, avg=30567.91, stdev=10876.31 00:26:33.391 clat (msec): min=26, max=483, avg=65.23, stdev=81.09 00:26:33.391 lat (msec): min=26, max=483, avg=65.26, stdev=81.09 00:26:33.391 clat percentiles (msec): 00:26:33.391 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:33.391 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:26:33.391 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 249], 95.00th=[ 262], 00:26:33.391 | 99.00th=[ 359], 99.50th=[ 359], 99.90th=[ 359], 99.95th=[ 485], 00:26:33.391 | 99.99th=[ 485] 00:26:33.391 bw ( KiB/s): min= 128, max= 1920, per=4.16%, avg=972.80, stdev=842.21, samples=20 00:26:33.391 iops : min= 32, max= 480, avg=243.20, stdev=210.55, samples=20 00:26:33.391 lat (msec) : 50=85.62%, 100=0.65%, 250=4.66%, 500=9.07% 00:26:33.391 cpu : usr=97.18%, sys=1.92%, ctx=224, majf=0, minf=42 00:26:33.391 IO depths : 1=5.6%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.9%, 32=0.0%, >=64=0.0% 00:26:33.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.391 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.391 issued rwts: total=2448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.391 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:33.391 filename2: (groupid=0, jobs=1): err= 0: pid=1243169: Fri Jul 12 17:15:31 2024 00:26:33.391 read: IOPS=234, BW=940KiB/s (963kB/s)(9472KiB/10077msec) 00:26:33.391 slat (usec): min=9, max=114, avg=39.05, stdev=18.00 00:26:33.391 clat (msec): min=20, max=508, avg=67.46, stdev=102.26 00:26:33.391 lat (msec): min=20, max=508, avg=67.50, stdev=102.27 00:26:33.391 clat percentiles (msec): 00:26:33.391 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:33.391 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:26:33.391 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 249], 95.00th=[ 372], 00:26:33.391 | 99.00th=[ 405], 99.50th=[ 405], 99.90th=[ 498], 99.95th=[ 510], 00:26:33.391 | 99.99th=[ 510] 00:26:33.391 bw ( KiB/s): min= 128, max= 1920, per=4.02%, avg=940.80, stdev=884.16, samples=20 00:26:33.391 iops : min= 32, max= 480, avg=235.20, stdev=221.04, samples=20 00:26:33.391 lat (msec) : 50=89.19%, 100=0.68%, 250=0.17%, 500=9.88%, 750=0.08% 00:26:33.391 cpu : usr=98.06%, sys=1.44%, ctx=19, majf=0, minf=49 00:26:33.391 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:26:33.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:26:33.391 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.391 issued rwts: total=2368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.391 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:33.391 filename2: (groupid=0, jobs=1): err= 0: pid=1243170: Fri Jul 12 17:15:31 2024 00:26:33.391 read: IOPS=247, BW=991KiB/s (1015kB/s)(9.77MiB/10087msec) 00:26:33.391 slat (nsec): min=6873, max=63366, avg=29627.78, stdev=11448.60 00:26:33.391 clat (msec): min=15, max=387, avg=64.00, stdev=79.83 00:26:33.391 lat (msec): min=15, max=387, avg=64.03, stdev=79.82 00:26:33.391 clat percentiles (msec): 00:26:33.391 | 1.00th=[ 25], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:33.391 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:26:33.391 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 234], 95.00th=[ 257], 00:26:33.391 | 99.00th=[ 363], 99.50th=[ 368], 99.90th=[ 388], 99.95th=[ 388], 00:26:33.391 | 99.99th=[ 388] 00:26:33.391 bw ( KiB/s): min= 128, max= 1923, per=4.25%, avg=993.75, stdev=860.59, samples=20 00:26:33.391 iops : min= 32, max= 480, avg=248.40, stdev=215.10, samples=20 00:26:33.391 lat (msec) : 20=0.64%, 50=85.76%, 250=4.08%, 500=9.52% 00:26:33.391 cpu : usr=97.42%, sys=1.75%, ctx=81, majf=0, minf=35 00:26:33.391 IO depths : 1=5.5%, 2=11.0%, 4=22.7%, 8=53.7%, 16=7.1%, 32=0.0%, >=64=0.0% 00:26:33.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.391 complete : 0=0.0%, 4=93.4%, 8=0.9%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.391 issued rwts: total=2500,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.391 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:33.391 filename2: (groupid=0, jobs=1): err= 0: pid=1243171: Fri Jul 12 17:15:31 2024 00:26:33.391 read: IOPS=242, BW=970KiB/s (993kB/s)(9760KiB/10067msec) 00:26:33.391 slat (usec): min=5, max=108, avg=25.18, stdev=14.24 00:26:33.391 clat (msec): min=32, max=394, avg=65.53, stdev=83.90 00:26:33.391 lat (msec): min=32, max=394, avg=65.55, stdev=83.90 00:26:33.391 clat percentiles (msec): 00:26:33.391 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:33.391 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:26:33.391 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 251], 95.00th=[ 264], 00:26:33.391 | 99.00th=[ 359], 99.50th=[ 384], 99.90th=[ 384], 99.95th=[ 397], 00:26:33.391 | 99.99th=[ 397] 00:26:33.391 bw ( KiB/s): min= 128, max= 1920, per=4.14%, avg=969.60, stdev=844.02, samples=20 00:26:33.391 iops : min= 32, max= 480, avg=242.40, stdev=211.00, samples=20 00:26:33.391 lat (msec) : 50=85.90%, 100=0.66%, 250=3.20%, 500=10.25% 00:26:33.391 cpu : usr=97.03%, sys=1.88%, ctx=180, majf=0, minf=40 00:26:33.391 IO depths : 1=5.8%, 2=11.8%, 4=24.1%, 8=51.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:26:33.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.391 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.391 issued rwts: total=2440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.391 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:33.391 filename2: (groupid=0, jobs=1): err= 0: pid=1243172: Fri Jul 12 17:15:31 2024 00:26:33.391 read: IOPS=235, BW=940KiB/s (963kB/s)(9464KiB/10065msec) 00:26:33.391 slat (usec): min=9, max=112, avg=39.27, stdev=18.94 00:26:33.391 clat (msec): min=31, max=503, avg=67.66, stdev=100.55 00:26:33.391 lat (msec): min=31, max=503, avg=67.70, stdev=100.56 00:26:33.391 clat percentiles (msec): 00:26:33.391 | 
1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:33.391 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:26:33.391 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 264], 95.00th=[ 368], 00:26:33.391 | 99.00th=[ 397], 99.50th=[ 409], 99.90th=[ 506], 99.95th=[ 506], 00:26:33.391 | 99.99th=[ 506] 00:26:33.391 bw ( KiB/s): min= 128, max= 1920, per=4.02%, avg=940.15, stdev=873.07, samples=20 00:26:33.391 iops : min= 32, max= 480, avg=235.00, stdev=218.23, samples=20 00:26:33.391 lat (msec) : 50=88.59%, 100=0.68%, 250=0.59%, 500=9.97%, 750=0.17% 00:26:33.391 cpu : usr=97.65%, sys=1.56%, ctx=64, majf=0, minf=37 00:26:33.391 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:26:33.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.391 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.391 issued rwts: total=2366,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.391 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:33.392 filename2: (groupid=0, jobs=1): err= 0: pid=1243173: Fri Jul 12 17:15:31 2024 00:26:33.392 read: IOPS=250, BW=1002KiB/s (1026kB/s)(9.88MiB/10090msec) 00:26:33.392 slat (nsec): min=8345, max=80090, avg=24024.15, stdev=10485.01 00:26:33.392 clat (msec): min=12, max=305, avg=63.65, stdev=75.00 00:26:33.392 lat (msec): min=12, max=305, avg=63.68, stdev=74.99 00:26:33.392 clat percentiles (msec): 00:26:33.392 | 1.00th=[ 23], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:33.392 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:26:33.392 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 249], 95.00th=[ 253], 00:26:33.392 | 99.00th=[ 266], 99.50th=[ 275], 99.90th=[ 275], 99.95th=[ 305], 00:26:33.392 | 99.99th=[ 305] 00:26:33.392 bw ( KiB/s): min= 256, max= 2048, per=4.29%, avg=1004.80, stdev=850.35, samples=20 00:26:33.392 iops : min= 64, max= 512, avg=251.20, stdev=212.59, samples=20 00:26:33.392 lat (msec) : 20=0.63%, 50=84.81%, 100=0.08%, 250=5.38%, 500=9.10% 00:26:33.392 cpu : usr=97.89%, sys=1.50%, ctx=36, majf=0, minf=42 00:26:33.392 IO depths : 1=5.5%, 2=11.7%, 4=25.0%, 8=50.8%, 16=7.0%, 32=0.0%, >=64=0.0% 00:26:33.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.392 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.392 issued rwts: total=2528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.392 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:33.392 filename2: (groupid=0, jobs=1): err= 0: pid=1243174: Fri Jul 12 17:15:31 2024 00:26:33.392 read: IOPS=243, BW=974KiB/s (998kB/s)(9816KiB/10076msec) 00:26:33.392 slat (usec): min=5, max=111, avg=47.54, stdev=23.54 00:26:33.392 clat (msec): min=31, max=427, avg=65.20, stdev=83.97 00:26:33.392 lat (msec): min=31, max=427, avg=65.25, stdev=83.96 00:26:33.392 clat percentiles (msec): 00:26:33.392 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:26:33.392 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:26:33.392 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 251], 95.00th=[ 262], 00:26:33.392 | 99.00th=[ 359], 99.50th=[ 363], 99.90th=[ 368], 99.95th=[ 430], 00:26:33.392 | 99.99th=[ 430] 00:26:33.392 bw ( KiB/s): min= 128, max= 1920, per=4.17%, avg=975.20, stdev=852.03, samples=20 00:26:33.392 iops : min= 32, max= 480, avg=243.80, stdev=213.01, samples=20 00:26:33.392 lat (msec) : 50=86.06%, 100=0.90%, 250=2.44%, 500=10.59% 00:26:33.392 cpu : usr=97.94%, sys=1.44%, ctx=60, majf=0, minf=37 00:26:33.392 IO 
depths : 1=5.6%, 2=11.6%, 4=24.1%, 8=51.8%, 16=6.9%, 32=0.0%, >=64=0.0% 00:26:33.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.392 complete : 0=0.0%, 4=93.8%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.392 issued rwts: total=2454,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.392 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:33.392 00:26:33.392 Run status group 0 (all jobs): 00:26:33.392 READ: bw=22.8MiB/s (23.9MB/s), 940KiB/s-1008KiB/s (963kB/s-1033kB/s), io=230MiB (242MB), run=10017-10091msec 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.392 17:15:31 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:33.392 bdev_null0 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:33.392 [2024-07-12 17:15:31.774923] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:33.392 bdev_null1 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 
00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:33.392 17:15:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:33.392 { 00:26:33.393 "params": { 00:26:33.393 "name": "Nvme$subsystem", 00:26:33.393 "trtype": "$TEST_TRANSPORT", 00:26:33.393 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:33.393 "adrfam": "ipv4", 00:26:33.393 "trsvcid": "$NVMF_PORT", 00:26:33.393 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:33.393 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:33.393 "hdgst": ${hdgst:-false}, 00:26:33.393 "ddgst": ${ddgst:-false} 00:26:33.393 }, 00:26:33.393 "method": "bdev_nvme_attach_controller" 00:26:33.393 } 00:26:33.393 EOF 00:26:33.393 )") 00:26:33.393 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:33.393 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:33.393 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:33.393 17:15:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:33.393 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:33.393 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:33.393 17:15:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:33.393 17:15:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:33.393 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:33.393 17:15:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:33.393 17:15:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:33.393 17:15:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:33.393 { 00:26:33.393 "params": { 00:26:33.393 "name": "Nvme$subsystem", 00:26:33.393 "trtype": "$TEST_TRANSPORT", 00:26:33.393 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:33.393 "adrfam": "ipv4", 00:26:33.393 "trsvcid": "$NVMF_PORT", 00:26:33.393 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:33.393 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:33.393 "hdgst": ${hdgst:-false}, 00:26:33.393 "ddgst": ${ddgst:-false} 00:26:33.393 }, 00:26:33.393 "method": "bdev_nvme_attach_controller" 00:26:33.393 } 00:26:33.393 EOF 00:26:33.393 )") 00:26:33.393 17:15:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:33.393 17:15:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:33.393 17:15:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:33.393 17:15:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:26:33.393 17:15:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:33.393 17:15:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:33.393 "params": { 00:26:33.393 "name": "Nvme0", 00:26:33.393 "trtype": "tcp", 00:26:33.393 "traddr": "10.0.0.2", 00:26:33.393 "adrfam": "ipv4", 00:26:33.393 "trsvcid": "4420", 00:26:33.393 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:33.393 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:33.393 "hdgst": false, 00:26:33.393 "ddgst": false 00:26:33.393 }, 00:26:33.393 "method": "bdev_nvme_attach_controller" 00:26:33.393 },{ 00:26:33.393 "params": { 00:26:33.393 "name": "Nvme1", 00:26:33.393 "trtype": "tcp", 00:26:33.393 "traddr": "10.0.0.2", 00:26:33.393 "adrfam": "ipv4", 00:26:33.393 "trsvcid": "4420", 00:26:33.393 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:33.393 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:33.393 "hdgst": false, 00:26:33.393 "ddgst": false 00:26:33.393 }, 00:26:33.393 "method": "bdev_nvme_attach_controller" 00:26:33.393 }' 00:26:33.393 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:33.393 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:33.393 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:33.393 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:33.393 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:33.393 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:33.393 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:33.393 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:33.393 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:33.393 17:15:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:33.393 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:33.393 ... 00:26:33.393 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:33.393 ... 
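The subsystem setup traced above reduces to a short RPC sequence against an already-running SPDK target. A minimal standalone sketch, assuming scripts/rpc.py is invoked from the SPDK repository root against the default RPC socket (the TCP transport itself is created earlier in the run and is not repeated here):

# Two 64 MiB null bdevs with 512-byte blocks, 16-byte metadata and DIF type 1,
# each exported through its own NVMe/TCP subsystem listening on 10.0.0.2:4420,
# mirroring the bdev_null_create / nvmf_create_subsystem / nvmf_subsystem_add_ns /
# nvmf_subsystem_add_listener calls in the trace above.
for i in 0 1; do
  ./scripts/rpc.py bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 1
  ./scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
      --serial-number "53313233-$i" --allow-any-host
  ./scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
  ./scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
      -t tcp -a 10.0.0.2 -s 4420
done

The filename0/filename1 jobs started below (numjobs=2, hence four threads) read from these two targets through the spdk_bdev fio plugin.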
00:26:33.393 fio-3.35 00:26:33.393 Starting 4 threads 00:26:33.393 EAL: No free 2048 kB hugepages reported on node 1 00:26:38.674 00:26:38.674 filename0: (groupid=0, jobs=1): err= 0: pid=1244553: Fri Jul 12 17:15:37 2024 00:26:38.674 read: IOPS=2033, BW=15.9MiB/s (16.7MB/s)(79.5MiB/5001msec) 00:26:38.674 slat (nsec): min=3600, max=65794, avg=18280.24, stdev=8684.16 00:26:38.674 clat (usec): min=759, max=7610, avg=3863.46, stdev=467.56 00:26:38.674 lat (usec): min=773, max=7625, avg=3881.74, stdev=468.49 00:26:38.674 clat percentiles (usec): 00:26:38.674 | 1.00th=[ 2147], 5.00th=[ 3326], 10.00th=[ 3556], 20.00th=[ 3654], 00:26:38.674 | 30.00th=[ 3720], 40.00th=[ 3785], 50.00th=[ 3851], 60.00th=[ 3916], 00:26:38.674 | 70.00th=[ 3982], 80.00th=[ 4080], 90.00th=[ 4228], 95.00th=[ 4359], 00:26:38.674 | 99.00th=[ 5669], 99.50th=[ 6259], 99.90th=[ 6915], 99.95th=[ 7046], 00:26:38.674 | 99.99th=[ 7570] 00:26:38.674 bw ( KiB/s): min=15584, max=17008, per=25.09%, avg=16266.67, stdev=440.15, samples=9 00:26:38.674 iops : min= 1948, max= 2126, avg=2033.33, stdev=55.02, samples=9 00:26:38.674 lat (usec) : 1000=0.09% 00:26:38.674 lat (msec) : 2=0.71%, 4=70.67%, 10=28.53% 00:26:38.674 cpu : usr=95.92%, sys=3.58%, ctx=15, majf=0, minf=9 00:26:38.674 IO depths : 1=0.5%, 2=19.4%, 4=54.2%, 8=25.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:38.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.674 complete : 0=0.0%, 4=91.1%, 8=8.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.674 issued rwts: total=10172,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:38.674 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:38.674 filename0: (groupid=0, jobs=1): err= 0: pid=1244554: Fri Jul 12 17:15:37 2024 00:26:38.674 read: IOPS=2023, BW=15.8MiB/s (16.6MB/s)(79.1MiB/5003msec) 00:26:38.674 slat (nsec): min=3750, max=65246, avg=17030.60, stdev=8192.68 00:26:38.674 clat (usec): min=918, max=8719, avg=3896.01, stdev=441.04 00:26:38.674 lat (usec): min=936, max=8732, avg=3913.04, stdev=441.63 00:26:38.674 clat percentiles (usec): 00:26:38.674 | 1.00th=[ 2606], 5.00th=[ 3425], 10.00th=[ 3556], 20.00th=[ 3654], 00:26:38.674 | 30.00th=[ 3752], 40.00th=[ 3818], 50.00th=[ 3884], 60.00th=[ 3949], 00:26:38.674 | 70.00th=[ 4015], 80.00th=[ 4113], 90.00th=[ 4228], 95.00th=[ 4359], 00:26:38.674 | 99.00th=[ 5604], 99.50th=[ 6325], 99.90th=[ 6915], 99.95th=[ 7046], 00:26:38.674 | 99.99th=[ 8717] 00:26:38.674 bw ( KiB/s): min=15616, max=16704, per=24.97%, avg=16188.90, stdev=360.28, samples=10 00:26:38.674 iops : min= 1952, max= 2088, avg=2023.60, stdev=45.02, samples=10 00:26:38.674 lat (usec) : 1000=0.02% 00:26:38.674 lat (msec) : 2=0.46%, 4=67.79%, 10=31.73% 00:26:38.674 cpu : usr=94.18%, sys=5.24%, ctx=45, majf=0, minf=9 00:26:38.674 IO depths : 1=0.4%, 2=13.0%, 4=59.6%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:38.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.674 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.674 issued rwts: total=10124,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:38.674 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:38.674 filename1: (groupid=0, jobs=1): err= 0: pid=1244555: Fri Jul 12 17:15:37 2024 00:26:38.674 read: IOPS=2019, BW=15.8MiB/s (16.5MB/s)(78.9MiB/5004msec) 00:26:38.674 slat (nsec): min=3871, max=66777, avg=16739.28, stdev=7779.11 00:26:38.674 clat (usec): min=717, max=7487, avg=3901.09, stdev=468.88 00:26:38.674 lat (usec): min=730, max=7502, avg=3917.83, stdev=469.44 00:26:38.674 
clat percentiles (usec): 00:26:38.674 | 1.00th=[ 2376], 5.00th=[ 3458], 10.00th=[ 3589], 20.00th=[ 3654], 00:26:38.674 | 30.00th=[ 3752], 40.00th=[ 3818], 50.00th=[ 3884], 60.00th=[ 3949], 00:26:38.674 | 70.00th=[ 4015], 80.00th=[ 4113], 90.00th=[ 4228], 95.00th=[ 4424], 00:26:38.674 | 99.00th=[ 5735], 99.50th=[ 6390], 99.90th=[ 7177], 99.95th=[ 7242], 00:26:38.674 | 99.99th=[ 7504] 00:26:38.674 bw ( KiB/s): min=15616, max=16816, per=24.92%, avg=16155.20, stdev=340.67, samples=10 00:26:38.674 iops : min= 1952, max= 2102, avg=2019.40, stdev=42.58, samples=10 00:26:38.674 lat (usec) : 750=0.02%, 1000=0.05% 00:26:38.674 lat (msec) : 2=0.66%, 4=68.11%, 10=31.15% 00:26:38.674 cpu : usr=94.52%, sys=4.76%, ctx=68, majf=0, minf=9 00:26:38.674 IO depths : 1=0.4%, 2=16.0%, 4=56.8%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:38.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.674 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.674 issued rwts: total=10105,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:38.674 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:38.674 filename1: (groupid=0, jobs=1): err= 0: pid=1244556: Fri Jul 12 17:15:37 2024 00:26:38.674 read: IOPS=2028, BW=15.9MiB/s (16.6MB/s)(79.3MiB/5001msec) 00:26:38.674 slat (nsec): min=3899, max=65938, avg=18055.86, stdev=8862.73 00:26:38.674 clat (usec): min=742, max=7711, avg=3872.56, stdev=507.51 00:26:38.674 lat (usec): min=756, max=7725, avg=3890.62, stdev=508.32 00:26:38.674 clat percentiles (usec): 00:26:38.674 | 1.00th=[ 2024], 5.00th=[ 3326], 10.00th=[ 3556], 20.00th=[ 3654], 00:26:38.674 | 30.00th=[ 3720], 40.00th=[ 3785], 50.00th=[ 3851], 60.00th=[ 3916], 00:26:38.674 | 70.00th=[ 3982], 80.00th=[ 4080], 90.00th=[ 4228], 95.00th=[ 4424], 00:26:38.674 | 99.00th=[ 6063], 99.50th=[ 6456], 99.90th=[ 7046], 99.95th=[ 7111], 00:26:38.674 | 99.99th=[ 7439] 00:26:38.674 bw ( KiB/s): min=15680, max=17200, per=25.08%, avg=16259.56, stdev=439.77, samples=9 00:26:38.674 iops : min= 1960, max= 2150, avg=2032.44, stdev=54.97, samples=9 00:26:38.674 lat (usec) : 750=0.01%, 1000=0.17% 00:26:38.674 lat (msec) : 2=0.79%, 4=70.58%, 10=28.45% 00:26:38.674 cpu : usr=95.32%, sys=4.18%, ctx=7, majf=0, minf=9 00:26:38.674 IO depths : 1=0.4%, 2=19.6%, 4=53.8%, 8=26.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:38.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.674 complete : 0=0.0%, 4=91.2%, 8=8.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.674 issued rwts: total=10147,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:38.674 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:38.674 00:26:38.674 Run status group 0 (all jobs): 00:26:38.674 READ: bw=63.3MiB/s (66.4MB/s), 15.8MiB/s-15.9MiB/s (16.5MB/s-16.7MB/s), io=317MiB (332MB), run=5001-5004msec 00:26:38.674 17:15:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:26:38.674 17:15:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:38.674 17:15:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:38.674 17:15:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:38.674 17:15:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:38.674 17:15:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:38.674 17:15:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.674 
17:15:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:38.674 17:15:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.674 17:15:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:38.674 17:15:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.674 17:15:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:38.674 17:15:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.674 17:15:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:38.674 17:15:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:38.674 17:15:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:26:38.674 17:15:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:38.674 17:15:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.674 17:15:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:38.674 17:15:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.674 17:15:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:38.675 17:15:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.675 17:15:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:38.675 17:15:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.675 00:26:38.675 real 0m24.372s 00:26:38.675 user 4m34.140s 00:26:38.675 sys 0m6.825s 00:26:38.675 17:15:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:38.675 17:15:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:38.675 ************************************ 00:26:38.675 END TEST fio_dif_rand_params 00:26:38.675 ************************************ 00:26:38.675 17:15:38 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:38.675 17:15:38 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:26:38.675 17:15:38 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:38.675 17:15:38 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:38.675 17:15:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:38.675 ************************************ 00:26:38.675 START TEST fio_dif_digest 00:26:38.675 ************************************ 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:26:38.675 17:15:38 
nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:38.675 bdev_null0 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:38.675 [2024-07-12 17:15:38.293760] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:38.675 { 00:26:38.675 "params": { 00:26:38.675 "name": "Nvme$subsystem", 00:26:38.675 "trtype": "$TEST_TRANSPORT", 00:26:38.675 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:38.675 "adrfam": "ipv4", 00:26:38.675 "trsvcid": "$NVMF_PORT", 00:26:38.675 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:26:38.675 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:38.675 "hdgst": ${hdgst:-false}, 00:26:38.675 "ddgst": ${ddgst:-false} 00:26:38.675 }, 00:26:38.675 "method": "bdev_nvme_attach_controller" 00:26:38.675 } 00:26:38.675 EOF 00:26:38.675 )") 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:38.675 "params": { 00:26:38.675 "name": "Nvme0", 00:26:38.675 "trtype": "tcp", 00:26:38.675 "traddr": "10.0.0.2", 00:26:38.675 "adrfam": "ipv4", 00:26:38.675 "trsvcid": "4420", 00:26:38.675 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:38.675 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:38.675 "hdgst": true, 00:26:38.675 "ddgst": true 00:26:38.675 }, 00:26:38.675 "method": "bdev_nvme_attach_controller" 00:26:38.675 }' 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:38.675 17:15:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:38.933 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:38.933 ... 
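For the digest pass the harness preloads the fio bdev plugin and hands it the JSON printed above (note hdgst and ddgst set to true on bdev_nvme_attach_controller) plus a generated job file, both over /dev/fd. A rough standalone equivalent, with placeholder paths and assuming the attached controller "Nvme0" exposes its first namespace as a bdev named Nvme0n1:

# /tmp/bdev.json is assumed to hold the bdev subsystem config shown above; the job
# parameters mirror the traced run (randread, 128k blocks, 3 jobs, iodepth 3, 10 s).
cat > /tmp/digest.fio <<'EOF'
[global]
thread=1
rw=randread
bs=128k
numjobs=3
iodepth=3
runtime=10
time_based=1

[filename0]
filename=Nvme0n1
EOF

LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json /tmp/digest.fio

The actual run below starts three threads against the single DIF-type-3 null bdev behind cnode0.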
00:26:38.933 fio-3.35 00:26:38.933 Starting 3 threads 00:26:38.933 EAL: No free 2048 kB hugepages reported on node 1 00:26:51.122 00:26:51.122 filename0: (groupid=0, jobs=1): err= 0: pid=1245421: Fri Jul 12 17:15:49 2024 00:26:51.122 read: IOPS=201, BW=25.2MiB/s (26.4MB/s)(253MiB/10045msec) 00:26:51.122 slat (nsec): min=4535, max=65744, avg=15644.65, stdev=4724.81 00:26:51.122 clat (usec): min=8706, max=55657, avg=14855.02, stdev=1563.02 00:26:51.122 lat (usec): min=8714, max=55678, avg=14870.67, stdev=1563.10 00:26:51.122 clat percentiles (usec): 00:26:51.122 | 1.00th=[12125], 5.00th=[13173], 10.00th=[13566], 20.00th=[14091], 00:26:51.122 | 30.00th=[14353], 40.00th=[14615], 50.00th=[14746], 60.00th=[15008], 00:26:51.122 | 70.00th=[15270], 80.00th=[15664], 90.00th=[16057], 95.00th=[16581], 00:26:51.122 | 99.00th=[17433], 99.50th=[17695], 99.90th=[19792], 99.95th=[47449], 00:26:51.122 | 99.99th=[55837] 00:26:51.122 bw ( KiB/s): min=24576, max=27136, per=32.70%, avg=25868.80, stdev=566.22, samples=20 00:26:51.122 iops : min= 192, max= 212, avg=202.10, stdev= 4.42, samples=20 00:26:51.122 lat (msec) : 10=0.20%, 20=99.70%, 50=0.05%, 100=0.05% 00:26:51.122 cpu : usr=91.30%, sys=8.17%, ctx=27, majf=0, minf=227 00:26:51.122 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:51.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.122 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.122 issued rwts: total=2023,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.122 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:51.122 filename0: (groupid=0, jobs=1): err= 0: pid=1245422: Fri Jul 12 17:15:49 2024 00:26:51.122 read: IOPS=211, BW=26.4MiB/s (27.7MB/s)(266MiB/10046msec) 00:26:51.122 slat (nsec): min=4322, max=52889, avg=15043.57, stdev=4644.31 00:26:51.122 clat (usec): min=10538, max=59654, avg=14150.52, stdev=2286.35 00:26:51.122 lat (usec): min=10551, max=59670, avg=14165.56, stdev=2286.36 00:26:51.122 clat percentiles (usec): 00:26:51.122 | 1.00th=[11731], 5.00th=[12387], 10.00th=[12780], 20.00th=[13173], 00:26:51.122 | 30.00th=[13566], 40.00th=[13829], 50.00th=[14091], 60.00th=[14353], 00:26:51.122 | 70.00th=[14615], 80.00th=[14877], 90.00th=[15401], 95.00th=[15795], 00:26:51.122 | 99.00th=[16581], 99.50th=[17171], 99.90th=[59507], 99.95th=[59507], 00:26:51.122 | 99.99th=[59507] 00:26:51.122 bw ( KiB/s): min=24832, max=28160, per=34.33%, avg=27151.40, stdev=774.15, samples=20 00:26:51.122 iops : min= 194, max= 220, avg=212.10, stdev= 6.07, samples=20 00:26:51.122 lat (msec) : 20=99.76%, 50=0.05%, 100=0.19% 00:26:51.122 cpu : usr=90.87%, sys=8.63%, ctx=18, majf=0, minf=151 00:26:51.122 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:51.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.122 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.122 issued rwts: total=2124,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.122 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:51.122 filename0: (groupid=0, jobs=1): err= 0: pid=1245423: Fri Jul 12 17:15:49 2024 00:26:51.122 read: IOPS=205, BW=25.6MiB/s (26.9MB/s)(258MiB/10045msec) 00:26:51.122 slat (nsec): min=4283, max=50202, avg=15128.28, stdev=4629.10 00:26:51.122 clat (usec): min=8098, max=50524, avg=14581.51, stdev=1501.86 00:26:51.122 lat (usec): min=8112, max=50544, avg=14596.64, stdev=1501.90 00:26:51.122 clat percentiles (usec): 00:26:51.122 | 
1.00th=[12256], 5.00th=[12911], 10.00th=[13304], 20.00th=[13698], 00:26:51.122 | 30.00th=[14091], 40.00th=[14353], 50.00th=[14484], 60.00th=[14746], 00:26:51.122 | 70.00th=[15008], 80.00th=[15270], 90.00th=[15795], 95.00th=[16319], 00:26:51.122 | 99.00th=[17171], 99.50th=[17433], 99.90th=[18220], 99.95th=[47973], 00:26:51.122 | 99.99th=[50594] 00:26:51.122 bw ( KiB/s): min=25600, max=27136, per=33.31%, avg=26347.65, stdev=431.44, samples=20 00:26:51.122 iops : min= 200, max= 212, avg=205.80, stdev= 3.37, samples=20 00:26:51.122 lat (msec) : 10=0.34%, 20=99.56%, 50=0.05%, 100=0.05% 00:26:51.122 cpu : usr=90.90%, sys=8.43%, ctx=43, majf=0, minf=107 00:26:51.122 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:51.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.122 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.122 issued rwts: total=2061,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.122 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:51.122 00:26:51.122 Run status group 0 (all jobs): 00:26:51.122 READ: bw=77.2MiB/s (81.0MB/s), 25.2MiB/s-26.4MiB/s (26.4MB/s-27.7MB/s), io=776MiB (814MB), run=10045-10046msec 00:26:51.122 17:15:49 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:26:51.122 17:15:49 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:26:51.122 17:15:49 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:26:51.122 17:15:49 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:51.122 17:15:49 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:26:51.122 17:15:49 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:51.122 17:15:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.122 17:15:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:51.122 17:15:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.122 17:15:49 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:51.122 17:15:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.122 17:15:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:51.122 17:15:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.122 00:26:51.122 real 0m11.074s 00:26:51.122 user 0m28.577s 00:26:51.122 sys 0m2.803s 00:26:51.122 17:15:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:51.122 17:15:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:51.122 ************************************ 00:26:51.122 END TEST fio_dif_digest 00:26:51.122 ************************************ 00:26:51.122 17:15:49 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:51.122 17:15:49 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:26:51.122 17:15:49 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:26:51.122 17:15:49 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:51.122 17:15:49 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:26:51.122 17:15:49 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:51.122 17:15:49 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:26:51.122 17:15:49 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:51.122 17:15:49 nvmf_dif -- nvmf/common.sh@122 -- # 
modprobe -v -r nvme-tcp 00:26:51.122 rmmod nvme_tcp 00:26:51.122 rmmod nvme_fabrics 00:26:51.122 rmmod nvme_keyring 00:26:51.122 17:15:49 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:51.122 17:15:49 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:26:51.122 17:15:49 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:26:51.122 17:15:49 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1238638 ']' 00:26:51.122 17:15:49 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1238638 00:26:51.122 17:15:49 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 1238638 ']' 00:26:51.122 17:15:49 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 1238638 00:26:51.122 17:15:49 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:26:51.122 17:15:49 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:51.122 17:15:49 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1238638 00:26:51.122 17:15:49 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:51.122 17:15:49 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:51.122 17:15:49 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1238638' 00:26:51.122 killing process with pid 1238638 00:26:51.122 17:15:49 nvmf_dif -- common/autotest_common.sh@967 -- # kill 1238638 00:26:51.122 17:15:49 nvmf_dif -- common/autotest_common.sh@972 -- # wait 1238638 00:26:51.122 17:15:49 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:26:51.122 17:15:49 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:51.122 Waiting for block devices as requested 00:26:51.384 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:26:51.384 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:51.696 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:51.696 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:51.696 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:51.696 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:51.961 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:51.961 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:51.961 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:51.961 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:52.220 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:52.220 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:52.220 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:52.220 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:52.480 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:52.480 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:52.480 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:52.738 17:15:52 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:52.738 17:15:52 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:52.738 17:15:52 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:52.738 17:15:52 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:52.738 17:15:52 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:52.738 17:15:52 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:52.738 17:15:52 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:54.639 17:15:54 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:54.639 00:26:54.639 real 1m7.054s 00:26:54.639 user 6m29.542s 00:26:54.639 sys 0m19.830s 00:26:54.639 17:15:54 nvmf_dif -- common/autotest_common.sh@1124 
-- # xtrace_disable 00:26:54.639 17:15:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:54.639 ************************************ 00:26:54.639 END TEST nvmf_dif 00:26:54.639 ************************************ 00:26:54.639 17:15:54 -- common/autotest_common.sh@1142 -- # return 0 00:26:54.639 17:15:54 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:54.639 17:15:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:54.639 17:15:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:54.639 17:15:54 -- common/autotest_common.sh@10 -- # set +x 00:26:54.639 ************************************ 00:26:54.639 START TEST nvmf_abort_qd_sizes 00:26:54.639 ************************************ 00:26:54.639 17:15:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:54.639 * Looking for test storage... 00:26:54.897 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:54.897 17:15:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:54.897 17:15:54 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:26:54.897 17:15:54 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:54.897 17:15:54 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:54.897 17:15:54 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:54.897 17:15:54 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:54.897 17:15:54 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:54.897 17:15:54 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:54.897 17:15:54 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:54.897 17:15:54 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:54.897 17:15:54 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:54.897 17:15:54 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:54.897 17:15:54 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:54.897 17:15:54 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:26:54.897 17:15:54 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:54.897 17:15:54 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:54.897 17:15:54 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:54.897 17:15:54 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:54.897 17:15:54 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:54.897 17:15:54 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:54.897 17:15:54 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:54.897 17:15:54 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:54.897 17:15:54 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.898 17:15:54 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.898 17:15:54 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.898 17:15:54 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:26:54.898 17:15:54 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.898 17:15:54 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:26:54.898 17:15:54 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:54.898 17:15:54 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:54.898 17:15:54 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:54.898 17:15:54 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:54.898 17:15:54 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:54.898 17:15:54 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:54.898 17:15:54 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:54.898 17:15:54 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:54.898 17:15:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:26:54.898 17:15:54 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:54.898 17:15:54 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:54.898 17:15:54 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:54.898 17:15:54 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:54.898 17:15:54 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:54.898 17:15:54 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:54.898 17:15:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:54.898 17:15:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:54.898 17:15:54 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:54.898 17:15:54 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:54.898 17:15:54 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:26:54.898 17:15:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:56.800 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:56.800 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:56.800 Found net devices under 0000:84:00.0: cvl_0_0 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:56.800 Found net devices under 0000:84:00.1: cvl_0_1 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
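[Editor's note] The records above enumerate the two e810 ports (cvl_0_0 / cvl_0_1); the records below move one of them into a private network namespace so the TCP target and the initiator can exercise real hardware on a single host. A minimal sketch of that topology, with the interface names and the 10.0.0.0/24 addresses taken from this run (they are per-testbed values, not fixed constants):
  ip netns add cvl_0_0_ns_spdk                          # target side gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator stays in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                    # initiator -> target reachability check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator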
00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:56.800 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:57.059 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:57.059 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:57.059 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:57.059 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:57.059 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:57.059 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:57.059 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:57.059 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:57.059 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:26:57.059 00:26:57.059 --- 10.0.0.2 ping statistics --- 00:26:57.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:57.059 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:26:57.059 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:57.059 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:57.059 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:26:57.059 00:26:57.059 --- 10.0.0.1 ping statistics --- 00:26:57.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:57.059 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:26:57.059 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:57.059 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:26:57.059 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:26:57.059 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:58.433 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:58.433 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:58.433 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:58.433 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:58.433 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:58.433 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:58.433 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:58.433 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:58.433 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:58.433 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:58.433 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:58.433 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:58.433 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:58.433 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:58.433 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:58.433 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:59.369 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:26:59.369 17:15:58 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:59.369 17:15:58 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:59.369 17:15:58 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:59.369 17:15:58 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:59.369 17:15:58 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:59.369 17:15:58 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:59.369 17:15:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:26:59.369 17:15:58 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:59.369 17:15:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:59.369 17:15:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:59.369 17:15:58 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1250252 00:26:59.369 17:15:58 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:26:59.369 17:15:58 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1250252 00:26:59.369 17:15:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 1250252 ']' 00:26:59.369 17:15:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:59.369 17:15:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:59.369 17:15:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:59.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:59.369 17:15:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:59.369 17:15:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:59.369 [2024-07-12 17:15:59.021427] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:26:59.369 [2024-07-12 17:15:59.021528] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:59.369 EAL: No free 2048 kB hugepages reported on node 1 00:26:59.626 [2024-07-12 17:15:59.087686] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:59.626 [2024-07-12 17:15:59.200850] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:59.626 [2024-07-12 17:15:59.200909] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:59.626 [2024-07-12 17:15:59.200937] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:59.626 [2024-07-12 17:15:59.200948] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:59.626 [2024-07-12 17:15:59.200958] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:59.626 [2024-07-12 17:15:59.201008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:59.626 [2024-07-12 17:15:59.201067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:59.626 [2024-07-12 17:15:59.201132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:59.626 [2024-07-12 17:15:59.201135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:59.883 17:15:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:59.883 17:15:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:26:59.883 17:15:59 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:59.883 17:15:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:59.883 17:15:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:59.883 17:15:59 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:59.883 17:15:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:26:59.883 17:15:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:26:59.883 17:15:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:26:59.883 17:15:59 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:26:59.883 17:15:59 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:26:59.883 17:15:59 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:82:00.0 ]] 00:26:59.883 17:15:59 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:26:59.883 17:15:59 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:26:59.883 17:15:59 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:82:00.0 ]] 00:26:59.883 17:15:59 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:26:59.883 17:15:59 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:26:59.883 17:15:59 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:26:59.883 17:15:59 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:26:59.883 17:15:59 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:82:00.0 00:26:59.883 17:15:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:26:59.883 17:15:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:82:00.0 00:26:59.883 17:15:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:26:59.883 17:15:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:59.883 17:15:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:59.883 17:15:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:59.883 ************************************ 00:26:59.883 START TEST spdk_target_abort 00:26:59.883 ************************************ 00:26:59.883 17:15:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:26:59.883 17:15:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:26:59.884 17:15:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:82:00.0 -b spdk_target 00:26:59.884 17:15:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.884 17:15:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:03.158 spdk_targetn1 00:27:03.158 17:16:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.158 17:16:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:03.158 17:16:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.158 17:16:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:03.158 [2024-07-12 17:16:02.238701] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:03.158 17:16:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.158 17:16:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:27:03.158 17:16:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.158 17:16:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:03.158 17:16:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.158 17:16:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:27:03.158 17:16:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.158 17:16:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:03.158 17:16:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.158 17:16:02 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:27:03.158 17:16:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.158 17:16:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:03.158 [2024-07-12 17:16:02.270978] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:03.158 17:16:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.158 17:16:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:27:03.158 17:16:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:03.158 17:16:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:03.158 17:16:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:27:03.158 17:16:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:03.158 17:16:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:27:03.158 17:16:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:03.158 17:16:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:03.158 17:16:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:03.158 17:16:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:03.158 17:16:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:03.158 17:16:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:03.158 17:16:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:03.158 17:16:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:03.158 17:16:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:27:03.158 17:16:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:03.158 17:16:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:03.158 17:16:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:03.158 17:16:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:03.158 17:16:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:03.158 17:16:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:03.158 EAL: No free 2048 kB hugepages 
reported on node 1 00:27:06.431 Initializing NVMe Controllers 00:27:06.431 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:06.431 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:06.431 Initialization complete. Launching workers. 00:27:06.431 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11462, failed: 0 00:27:06.431 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1324, failed to submit 10138 00:27:06.431 success 701, unsuccess 623, failed 0 00:27:06.431 17:16:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:06.431 17:16:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:06.431 EAL: No free 2048 kB hugepages reported on node 1 00:27:09.711 Initializing NVMe Controllers 00:27:09.711 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:09.711 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:09.711 Initialization complete. Launching workers. 00:27:09.711 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8767, failed: 0 00:27:09.711 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1224, failed to submit 7543 00:27:09.711 success 350, unsuccess 874, failed 0 00:27:09.711 17:16:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:09.711 17:16:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:09.711 EAL: No free 2048 kB hugepages reported on node 1 00:27:12.991 Initializing NVMe Controllers 00:27:12.991 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:12.991 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:12.991 Initialization complete. Launching workers. 
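[Editor's note] The qd=64 pass whose results follow is the last leg of the loop traced above. Pulled together, the spdk_target_abort flow is roughly the following; the PCI address, NQN and serial are the values used in this run, and rpc.py stands in for the rpc_cmd wrapper the script actually calls:
  scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:82:00.0 -b spdk_target   # exposes spdk_targetn1
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
  for qd in 4 24 64; do                                  # one abort pass per queue depth
    build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  done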
00:27:12.991 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 32019, failed: 0 00:27:12.991 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2830, failed to submit 29189 00:27:12.991 success 539, unsuccess 2291, failed 0 00:27:12.991 17:16:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:27:12.991 17:16:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.991 17:16:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:12.991 17:16:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.991 17:16:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:27:12.991 17:16:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.991 17:16:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:13.924 17:16:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.924 17:16:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1250252 00:27:13.924 17:16:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 1250252 ']' 00:27:13.924 17:16:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 1250252 00:27:13.924 17:16:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:27:13.924 17:16:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:13.924 17:16:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1250252 00:27:13.924 17:16:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:13.924 17:16:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:13.924 17:16:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1250252' 00:27:13.924 killing process with pid 1250252 00:27:13.924 17:16:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 1250252 00:27:13.924 17:16:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 1250252 00:27:13.924 00:27:13.924 real 0m14.210s 00:27:13.924 user 0m53.615s 00:27:13.924 sys 0m2.872s 00:27:13.924 17:16:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:13.924 17:16:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:13.924 ************************************ 00:27:13.924 END TEST spdk_target_abort 00:27:13.924 ************************************ 00:27:14.182 17:16:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:27:14.182 17:16:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:27:14.182 17:16:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:14.182 17:16:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:14.182 17:16:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:14.182 
************************************ 00:27:14.182 START TEST kernel_target_abort 00:27:14.182 ************************************ 00:27:14.182 17:16:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:27:14.182 17:16:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:27:14.182 17:16:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:27:14.182 17:16:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:14.182 17:16:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:14.182 17:16:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.182 17:16:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.182 17:16:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:14.182 17:16:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.182 17:16:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:14.182 17:16:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:14.182 17:16:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:14.182 17:16:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:14.182 17:16:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:14.182 17:16:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:14.182 17:16:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:14.182 17:16:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:14.182 17:16:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:14.182 17:16:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:27:14.182 17:16:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:14.182 17:16:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:14.182 17:16:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:14.182 17:16:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:15.117 Waiting for block devices as requested 00:27:15.376 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:27:15.376 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:15.376 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:15.635 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:15.635 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:15.635 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:15.894 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:15.894 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:15.894 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:15.894 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:16.153 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:16.153 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:16.153 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:16.153 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:16.410 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:16.410 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:16.410 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:16.410 17:16:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:16.410 17:16:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:16.410 17:16:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:16.411 17:16:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:16.411 17:16:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:16.411 17:16:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:16.411 17:16:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:16.411 17:16:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:16.411 17:16:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:16.669 No valid GPT data, bailing 00:27:16.669 17:16:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:16.669 17:16:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:27:16.669 17:16:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:27:16.669 17:16:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:16.669 17:16:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:16.669 17:16:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:16.669 17:16:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:16.669 17:16:16 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:16.669 17:16:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:16.669 17:16:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:27:16.669 17:16:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:16.669 17:16:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:27:16.669 17:16:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:16.669 17:16:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:27:16.669 17:16:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:27:16.669 17:16:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:27:16.669 17:16:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:16.669 17:16:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:27:16.669 00:27:16.669 Discovery Log Number of Records 2, Generation counter 2 00:27:16.669 =====Discovery Log Entry 0====== 00:27:16.669 trtype: tcp 00:27:16.669 adrfam: ipv4 00:27:16.669 subtype: current discovery subsystem 00:27:16.669 treq: not specified, sq flow control disable supported 00:27:16.669 portid: 1 00:27:16.669 trsvcid: 4420 00:27:16.669 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:16.669 traddr: 10.0.0.1 00:27:16.669 eflags: none 00:27:16.669 sectype: none 00:27:16.669 =====Discovery Log Entry 1====== 00:27:16.669 trtype: tcp 00:27:16.669 adrfam: ipv4 00:27:16.669 subtype: nvme subsystem 00:27:16.669 treq: not specified, sq flow control disable supported 00:27:16.669 portid: 1 00:27:16.669 trsvcid: 4420 00:27:16.669 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:16.669 traddr: 10.0.0.1 00:27:16.669 eflags: none 00:27:16.669 sectype: none 00:27:16.669 17:16:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:27:16.669 17:16:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:16.669 17:16:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:16.669 17:16:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:27:16.669 17:16:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:16.669 17:16:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:27:16.669 17:16:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:16.669 17:16:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:16.669 17:16:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:16.669 17:16:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:16.669 17:16:16 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:16.669 17:16:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:16.669 17:16:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:16.669 17:16:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:16.669 17:16:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:27:16.669 17:16:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:16.669 17:16:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:27:16.669 17:16:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:16.669 17:16:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:16.669 17:16:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:16.669 17:16:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:16.669 EAL: No free 2048 kB hugepages reported on node 1 00:27:19.951 Initializing NVMe Controllers 00:27:19.951 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:19.951 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:19.951 Initialization complete. Launching workers. 00:27:19.951 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 53571, failed: 0 00:27:19.951 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 53571, failed to submit 0 00:27:19.951 success 0, unsuccess 53571, failed 0 00:27:19.951 17:16:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:19.951 17:16:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:19.951 EAL: No free 2048 kB hugepages reported on node 1 00:27:23.304 Initializing NVMe Controllers 00:27:23.304 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:23.304 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:23.304 Initialization complete. Launching workers. 
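[Editor's note] The kernel_target_abort passes running here talk to the in-kernel nvmet target that was wired up through configfs a few records above. Reconstructed from this log, that setup amounts to the following; the attribute file names reflect the standard kernel nvmet configfs layout and are not shown verbatim in the trace, and /dev/nvme0n1 is simply the device this run picked:
  modprobe nvmet
  sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  mkdir "$sub" "$sub/namespaces/1" /sys/kernel/config/nvmet/ports/1
  echo 1            > "$sub/attr_allow_any_host"
  echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
  echo 1            > "$sub/namespaces/1/enable"
  echo 10.0.0.1     > /sys/kernel/config/nvmet/ports/1/addr_traddr
  echo tcp          > /sys/kernel/config/nvmet/ports/1/addr_trtype
  echo 4420         > /sys/kernel/config/nvmet/ports/1/addr_trsvcid
  echo ipv4         > /sys/kernel/config/nvmet/ports/1/addr_adrfam
  ln -s "$sub" /sys/kernel/config/nvmet/ports/1/subsystems/   # publish the subsystem on the port
  nvme discover -t tcp -a 10.0.0.1 -s 4420                    # the discovery log above confirms both entries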
00:27:23.304 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 97531, failed: 0 00:27:23.304 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24562, failed to submit 72969 00:27:23.304 success 0, unsuccess 24562, failed 0 00:27:23.304 17:16:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:23.304 17:16:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:23.304 EAL: No free 2048 kB hugepages reported on node 1 00:27:26.597 Initializing NVMe Controllers 00:27:26.597 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:26.597 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:26.597 Initialization complete. Launching workers. 00:27:26.597 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 94656, failed: 0 00:27:26.597 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 23666, failed to submit 70990 00:27:26.597 success 0, unsuccess 23666, failed 0 00:27:26.597 17:16:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:27:26.597 17:16:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:26.597 17:16:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:27:26.597 17:16:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:26.597 17:16:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:26.597 17:16:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:26.597 17:16:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:26.597 17:16:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:26.597 17:16:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:26.597 17:16:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:27.162 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:27.163 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:27.163 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:27.163 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:27.163 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:27.163 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:27.163 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:27.163 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:27.163 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:27.420 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:27.420 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:27.420 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:27.420 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:27.420 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:27:27.420 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:27.420 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:28.353 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:27:28.353 00:27:28.353 real 0m14.285s 00:27:28.353 user 0m6.429s 00:27:28.353 sys 0m3.259s 00:27:28.353 17:16:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:28.353 17:16:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:28.353 ************************************ 00:27:28.353 END TEST kernel_target_abort 00:27:28.353 ************************************ 00:27:28.353 17:16:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:27:28.353 17:16:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:28.353 17:16:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:27:28.353 17:16:27 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:28.353 17:16:27 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:27:28.353 17:16:27 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:28.353 17:16:27 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:27:28.353 17:16:27 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:28.353 17:16:27 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:28.353 rmmod nvme_tcp 00:27:28.353 rmmod nvme_fabrics 00:27:28.353 rmmod nvme_keyring 00:27:28.353 17:16:28 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:28.353 17:16:28 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:27:28.353 17:16:28 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:27:28.353 17:16:28 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1250252 ']' 00:27:28.353 17:16:28 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1250252 00:27:28.353 17:16:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 1250252 ']' 00:27:28.353 17:16:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 1250252 00:27:28.353 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1250252) - No such process 00:27:28.353 17:16:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 1250252 is not found' 00:27:28.353 Process with pid 1250252 is not found 00:27:28.353 17:16:28 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:27:28.353 17:16:28 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:29.726 Waiting for block devices as requested 00:27:29.726 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:27:29.726 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:29.985 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:29.985 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:29.985 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:29.985 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:30.243 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:30.243 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:30.243 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:30.243 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:30.500 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:30.500 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:30.500 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:30.500 0000:80:04.3 (8086 0e23): vfio-pci -> 
ioatdma 00:27:30.759 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:30.759 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:30.759 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:31.018 17:16:30 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:31.018 17:16:30 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:31.018 17:16:30 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:31.018 17:16:30 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:31.018 17:16:30 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:31.018 17:16:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:31.018 17:16:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:32.918 17:16:32 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:32.918 00:27:32.918 real 0m38.293s 00:27:32.918 user 1m2.297s 00:27:32.918 sys 0m9.652s 00:27:32.918 17:16:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:32.918 17:16:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:32.918 ************************************ 00:27:32.918 END TEST nvmf_abort_qd_sizes 00:27:32.918 ************************************ 00:27:32.918 17:16:32 -- common/autotest_common.sh@1142 -- # return 0 00:27:32.918 17:16:32 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:27:32.918 17:16:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:32.918 17:16:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:32.918 17:16:32 -- common/autotest_common.sh@10 -- # set +x 00:27:33.176 ************************************ 00:27:33.176 START TEST keyring_file 00:27:33.176 ************************************ 00:27:33.176 17:16:32 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:27:33.176 * Looking for test storage... 
00:27:33.176 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:27:33.176 17:16:32 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:27:33.176 17:16:32 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:33.176 17:16:32 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:27:33.176 17:16:32 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:33.176 17:16:32 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:33.176 17:16:32 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:33.176 17:16:32 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:33.176 17:16:32 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:33.176 17:16:32 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:33.176 17:16:32 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:33.176 17:16:32 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:33.176 17:16:32 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:33.176 17:16:32 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:33.176 17:16:32 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:33.176 17:16:32 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:27:33.176 17:16:32 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:33.176 17:16:32 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:33.176 17:16:32 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:33.176 17:16:32 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:33.176 17:16:32 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:33.176 17:16:32 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:33.176 17:16:32 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:33.176 17:16:32 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:33.176 17:16:32 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.176 17:16:32 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.176 17:16:32 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.176 17:16:32 keyring_file -- paths/export.sh@5 -- # export PATH 00:27:33.176 17:16:32 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.176 17:16:32 keyring_file -- nvmf/common.sh@47 -- # : 0 00:27:33.176 17:16:32 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:33.176 17:16:32 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:33.176 17:16:32 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:33.176 17:16:32 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:33.176 17:16:32 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:33.176 17:16:32 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:33.176 17:16:32 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:33.176 17:16:32 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:33.176 17:16:32 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:27:33.176 17:16:32 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:27:33.176 17:16:32 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:27:33.176 17:16:32 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:27:33.176 17:16:32 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:27:33.176 17:16:32 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:27:33.176 17:16:32 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:33.176 17:16:32 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:33.176 17:16:32 keyring_file -- keyring/common.sh@17 -- # name=key0 00:27:33.176 17:16:32 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:33.176 17:16:32 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:33.176 17:16:32 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:33.176 17:16:32 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.iakE7WfcMI 00:27:33.176 17:16:32 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:33.176 17:16:32 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:33.176 17:16:32 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:33.176 17:16:32 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:33.176 17:16:32 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:33.176 17:16:32 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:33.176 17:16:32 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:33.176 17:16:32 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.iakE7WfcMI 00:27:33.176 17:16:32 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.iakE7WfcMI 00:27:33.176 17:16:32 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.iakE7WfcMI 00:27:33.176 17:16:32 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:27:33.176 17:16:32 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:33.176 17:16:32 keyring_file -- keyring/common.sh@17 -- # name=key1 00:27:33.176 17:16:32 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:27:33.176 17:16:32 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:33.176 17:16:32 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:33.176 17:16:32 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.y8ZO252Pyz 00:27:33.176 17:16:32 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:27:33.176 17:16:32 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:27:33.176 17:16:32 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:33.176 17:16:32 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:33.176 17:16:32 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:27:33.176 17:16:32 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:33.176 17:16:32 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:33.176 17:16:32 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.y8ZO252Pyz 00:27:33.177 17:16:32 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.y8ZO252Pyz 00:27:33.177 17:16:32 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.y8ZO252Pyz 00:27:33.177 17:16:32 keyring_file -- keyring/file.sh@30 -- # tgtpid=1256032 00:27:33.177 17:16:32 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:27:33.177 17:16:32 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1256032 00:27:33.177 17:16:32 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1256032 ']' 00:27:33.177 17:16:32 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:33.177 17:16:32 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:33.177 17:16:32 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:33.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:33.177 17:16:32 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:33.177 17:16:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:33.177 [2024-07-12 17:16:32.850007] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
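As an aside to the capture: the prep_key calls traced above come down to roughly the helper below (a sketch inferred from the keyring/common.sh xtrace; the python one-liner that turns the hex key into the NVMeTLSkey-1 interchange string, and the exact redirection into the temp file, are not visible here and are assumptions):

    # Sketch inferred from the xtrace; not a verbatim copy of keyring/common.sh.
    prep_key() {
        local name=$1 key=$2 digest=$3 path
        path=$(mktemp)                                      # e.g. /tmp/tmp.iakE7WfcMI above
        format_interchange_psk "$key" "$digest" > "$path"   # assumed redirection; wraps the hex key as an NVMeTLSkey-1 string
        chmod 0600 "$path"                                  # file.sh later flips this to 0660 to check that an over-permissive key file is rejected
        echo "$path"
    }

    key0path=$(prep_key key0 00112233445566778899aabbccddeeff 0)   # -> /tmp/tmp.iakE7WfcMI
    key1path=$(prep_key key1 112233445566778899aabbccddeeff00 0)   # -> /tmp/tmp.y8ZO252Pyz

With both key files in place, file.sh starts spdk_tgt and waits on /var/tmp/spdk.sock before issuing any keyring RPCs.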
00:27:33.177 [2024-07-12 17:16:32.850119] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1256032 ] 00:27:33.433 EAL: No free 2048 kB hugepages reported on node 1 00:27:33.433 [2024-07-12 17:16:32.909170] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:33.433 [2024-07-12 17:16:33.023480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:33.690 17:16:33 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:33.690 17:16:33 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:27:33.690 17:16:33 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:27:33.690 17:16:33 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.690 17:16:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:33.690 [2024-07-12 17:16:33.291761] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:33.690 null0 00:27:33.690 [2024-07-12 17:16:33.323837] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:33.690 [2024-07-12 17:16:33.324182] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:33.690 [2024-07-12 17:16:33.331843] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:27:33.690 17:16:33 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.690 17:16:33 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:33.690 17:16:33 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:33.690 17:16:33 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:33.690 17:16:33 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:33.690 17:16:33 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:33.690 17:16:33 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:33.690 17:16:33 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:33.690 17:16:33 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:33.690 17:16:33 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.690 17:16:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:33.690 [2024-07-12 17:16:33.343879] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:27:33.690 request: 00:27:33.690 { 00:27:33.690 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:27:33.690 "secure_channel": false, 00:27:33.690 "listen_address": { 00:27:33.690 "trtype": "tcp", 00:27:33.690 "traddr": "127.0.0.1", 00:27:33.690 "trsvcid": "4420" 00:27:33.690 }, 00:27:33.690 "method": "nvmf_subsystem_add_listener", 00:27:33.690 "req_id": 1 00:27:33.690 } 00:27:33.690 Got JSON-RPC error response 00:27:33.690 response: 00:27:33.690 { 00:27:33.690 "code": -32602, 00:27:33.690 "message": "Invalid parameters" 00:27:33.690 } 00:27:33.690 17:16:33 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:33.690 17:16:33 keyring_file -- common/autotest_common.sh@651 -- # es=1 
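Not part of the capture: the negative check at file.sh@43 corresponds to the direct RPC below, with arguments copied from the request JSON in the trace (rpc_cmd talks to the target's default /var/tmp/spdk.sock socket). The listener on 127.0.0.1:4420 was already registered a few lines earlier, so the call is expected to fail, and the NOT wrapper turns that expected failure into a pass:

    # Direct form of the call file.sh@43 expects to be rejected.
    scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 \
        nqn.2016-06.io.spdk:cnode0
    # -> JSON-RPC error -32602 "Invalid parameters" ("Listener already exists")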
00:27:33.690 17:16:33 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:33.690 17:16:33 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:33.690 17:16:33 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:33.690 17:16:33 keyring_file -- keyring/file.sh@46 -- # bperfpid=1256044 00:27:33.690 17:16:33 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:27:33.690 17:16:33 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1256044 /var/tmp/bperf.sock 00:27:33.690 17:16:33 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1256044 ']' 00:27:33.690 17:16:33 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:33.690 17:16:33 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:33.690 17:16:33 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:33.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:33.690 17:16:33 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:33.690 17:16:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:33.948 [2024-07-12 17:16:33.391373] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 00:27:33.948 [2024-07-12 17:16:33.391447] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1256044 ] 00:27:33.948 EAL: No free 2048 kB hugepages reported on node 1 00:27:33.948 [2024-07-12 17:16:33.449706] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:33.948 [2024-07-12 17:16:33.562075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:34.205 17:16:33 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:34.205 17:16:33 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:27:34.205 17:16:33 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.iakE7WfcMI 00:27:34.205 17:16:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.iakE7WfcMI 00:27:34.462 17:16:33 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.y8ZO252Pyz 00:27:34.462 17:16:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.y8ZO252Pyz 00:27:34.721 17:16:34 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:27:34.721 17:16:34 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:27:34.721 17:16:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:34.721 17:16:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:34.721 17:16:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:34.721 17:16:34 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.iakE7WfcMI == \/\t\m\p\/\t\m\p\.\i\a\k\E\7\W\f\c\M\I ]] 00:27:34.721 17:16:34 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:27:34.721 17:16:34 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:27:34.721 17:16:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:34.721 17:16:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:34.721 17:16:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:34.978 17:16:34 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.y8ZO252Pyz == \/\t\m\p\/\t\m\p\.\y\8\Z\O\2\5\2\P\y\z ]] 00:27:34.978 17:16:34 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:27:34.978 17:16:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:34.978 17:16:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:34.978 17:16:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:34.978 17:16:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:34.978 17:16:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:35.236 17:16:34 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:27:35.236 17:16:34 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:27:35.236 17:16:34 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:35.236 17:16:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:35.236 17:16:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:35.236 17:16:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:35.236 17:16:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:35.493 17:16:35 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:27:35.493 17:16:35 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:35.493 17:16:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:35.751 [2024-07-12 17:16:35.383323] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:36.008 nvme0n1 00:27:36.008 17:16:35 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:27:36.008 17:16:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:36.008 17:16:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:36.008 17:16:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:36.008 17:16:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:36.008 17:16:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:36.266 17:16:35 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:27:36.266 17:16:35 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:27:36.266 17:16:35 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:36.266 17:16:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:36.266 17:16:35 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:36.266 17:16:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:36.266 17:16:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:36.525 17:16:35 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:27:36.525 17:16:35 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:36.525 Running I/O for 1 seconds... 00:27:37.459 00:27:37.459 Latency(us) 00:27:37.459 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:37.459 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:27:37.459 nvme0n1 : 1.01 9736.99 38.04 0.00 0.00 13097.56 6893.42 23592.96 00:27:37.459 =================================================================================================================== 00:27:37.459 Total : 9736.99 38.04 0.00 0.00 13097.56 6893.42 23592.96 00:27:37.459 0 00:27:37.459 17:16:37 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:37.459 17:16:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:37.716 17:16:37 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:27:37.716 17:16:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:37.716 17:16:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:37.716 17:16:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:37.716 17:16:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:37.716 17:16:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:37.974 17:16:37 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:27:37.974 17:16:37 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:27:37.974 17:16:37 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:37.974 17:16:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:37.974 17:16:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:37.974 17:16:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:37.974 17:16:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:38.232 17:16:37 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:27:38.232 17:16:37 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:38.232 17:16:37 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:38.232 17:16:37 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:38.232 17:16:37 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:38.232 17:16:37 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:38.232 17:16:37 keyring_file -- 
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:38.232 17:16:37 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:38.232 17:16:37 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:38.232 17:16:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:38.490 [2024-07-12 17:16:38.071403] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:38.490 [2024-07-12 17:16:38.071773] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x154ebd0 (107): Transport endpoint is not connected 00:27:38.490 [2024-07-12 17:16:38.072761] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x154ebd0 (9): Bad file descriptor 00:27:38.490 [2024-07-12 17:16:38.073760] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:38.490 [2024-07-12 17:16:38.073778] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:27:38.490 [2024-07-12 17:16:38.073792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:38.490 request: 00:27:38.490 { 00:27:38.490 "name": "nvme0", 00:27:38.490 "trtype": "tcp", 00:27:38.490 "traddr": "127.0.0.1", 00:27:38.490 "adrfam": "ipv4", 00:27:38.490 "trsvcid": "4420", 00:27:38.490 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:38.490 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:38.490 "prchk_reftag": false, 00:27:38.490 "prchk_guard": false, 00:27:38.490 "hdgst": false, 00:27:38.490 "ddgst": false, 00:27:38.490 "psk": "key1", 00:27:38.490 "method": "bdev_nvme_attach_controller", 00:27:38.490 "req_id": 1 00:27:38.490 } 00:27:38.490 Got JSON-RPC error response 00:27:38.490 response: 00:27:38.490 { 00:27:38.490 "code": -5, 00:27:38.490 "message": "Input/output error" 00:27:38.490 } 00:27:38.490 17:16:38 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:38.490 17:16:38 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:38.490 17:16:38 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:38.490 17:16:38 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:38.490 17:16:38 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:27:38.490 17:16:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:38.490 17:16:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:38.490 17:16:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:38.490 17:16:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:38.490 17:16:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:38.748 17:16:38 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:27:38.748 17:16:38 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:27:38.748 17:16:38 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:38.748 17:16:38 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:38.748 17:16:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:38.748 17:16:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:38.748 17:16:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:39.005 17:16:38 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:27:39.005 17:16:38 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:27:39.005 17:16:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:39.263 17:16:38 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:27:39.263 17:16:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:27:39.522 17:16:39 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:27:39.522 17:16:39 keyring_file -- keyring/file.sh@77 -- # jq length 00:27:39.522 17:16:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:39.780 17:16:39 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:27:39.780 17:16:39 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.iakE7WfcMI 00:27:39.780 17:16:39 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.iakE7WfcMI 00:27:39.780 17:16:39 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:39.780 17:16:39 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.iakE7WfcMI 00:27:39.780 17:16:39 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:39.780 17:16:39 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:39.780 17:16:39 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:39.780 17:16:39 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:39.780 17:16:39 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.iakE7WfcMI 00:27:39.780 17:16:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.iakE7WfcMI 00:27:40.038 [2024-07-12 17:16:39.571694] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.iakE7WfcMI': 0100660 00:27:40.038 [2024-07-12 17:16:39.571759] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:27:40.038 request: 00:27:40.038 { 00:27:40.038 "name": "key0", 00:27:40.038 "path": "/tmp/tmp.iakE7WfcMI", 00:27:40.038 "method": "keyring_file_add_key", 00:27:40.038 "req_id": 1 00:27:40.038 } 00:27:40.038 Got JSON-RPC error response 00:27:40.038 response: 00:27:40.038 { 00:27:40.038 "code": -1, 00:27:40.038 "message": "Operation not permitted" 00:27:40.038 } 00:27:40.038 17:16:39 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:40.038 17:16:39 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:40.038 17:16:39 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:40.038 17:16:39 keyring_file -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:40.038 17:16:39 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.iakE7WfcMI 00:27:40.038 17:16:39 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.iakE7WfcMI 00:27:40.038 17:16:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.iakE7WfcMI 00:27:40.296 17:16:39 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.iakE7WfcMI 00:27:40.296 17:16:39 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:27:40.296 17:16:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:40.296 17:16:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:40.296 17:16:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:40.296 17:16:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:40.296 17:16:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:40.555 17:16:40 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:27:40.555 17:16:40 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:40.555 17:16:40 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:40.555 17:16:40 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:40.555 17:16:40 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:40.555 17:16:40 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:40.555 17:16:40 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:40.555 17:16:40 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:40.555 17:16:40 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:40.555 17:16:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:40.813 [2024-07-12 17:16:40.309681] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.iakE7WfcMI': No such file or directory 00:27:40.813 [2024-07-12 17:16:40.309732] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:27:40.813 [2024-07-12 17:16:40.309767] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:27:40.813 [2024-07-12 17:16:40.309779] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:40.813 [2024-07-12 17:16:40.309806] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:27:40.813 request: 00:27:40.813 { 00:27:40.813 "name": "nvme0", 00:27:40.813 "trtype": "tcp", 00:27:40.813 "traddr": "127.0.0.1", 00:27:40.813 "adrfam": "ipv4", 00:27:40.813 
"trsvcid": "4420", 00:27:40.813 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:40.813 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:40.813 "prchk_reftag": false, 00:27:40.813 "prchk_guard": false, 00:27:40.813 "hdgst": false, 00:27:40.813 "ddgst": false, 00:27:40.813 "psk": "key0", 00:27:40.813 "method": "bdev_nvme_attach_controller", 00:27:40.813 "req_id": 1 00:27:40.813 } 00:27:40.813 Got JSON-RPC error response 00:27:40.813 response: 00:27:40.813 { 00:27:40.813 "code": -19, 00:27:40.813 "message": "No such device" 00:27:40.813 } 00:27:40.813 17:16:40 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:40.813 17:16:40 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:40.813 17:16:40 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:40.813 17:16:40 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:40.813 17:16:40 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:27:40.813 17:16:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:41.072 17:16:40 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:41.072 17:16:40 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:41.072 17:16:40 keyring_file -- keyring/common.sh@17 -- # name=key0 00:27:41.072 17:16:40 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:41.072 17:16:40 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:41.072 17:16:40 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:41.072 17:16:40 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ZnP7dh94p7 00:27:41.072 17:16:40 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:41.072 17:16:40 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:41.072 17:16:40 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:41.072 17:16:40 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:41.072 17:16:40 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:41.072 17:16:40 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:41.072 17:16:40 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:41.072 17:16:40 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ZnP7dh94p7 00:27:41.072 17:16:40 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ZnP7dh94p7 00:27:41.072 17:16:40 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.ZnP7dh94p7 00:27:41.072 17:16:40 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ZnP7dh94p7 00:27:41.072 17:16:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ZnP7dh94p7 00:27:41.331 17:16:40 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:41.331 17:16:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:41.589 nvme0n1 00:27:41.589 
17:16:41 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:27:41.589 17:16:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:41.589 17:16:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:41.589 17:16:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:41.589 17:16:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:41.589 17:16:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:41.847 17:16:41 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:27:41.847 17:16:41 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:27:41.847 17:16:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:42.104 17:16:41 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:27:42.104 17:16:41 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:27:42.104 17:16:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:42.104 17:16:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:42.104 17:16:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:42.362 17:16:41 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:27:42.362 17:16:41 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:27:42.362 17:16:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:42.362 17:16:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:42.362 17:16:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:42.362 17:16:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:42.362 17:16:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:42.619 17:16:42 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:27:42.619 17:16:42 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:42.619 17:16:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:42.876 17:16:42 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:27:42.876 17:16:42 keyring_file -- keyring/file.sh@104 -- # jq length 00:27:42.876 17:16:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:43.134 17:16:42 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:27:43.134 17:16:42 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ZnP7dh94p7 00:27:43.134 17:16:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ZnP7dh94p7 00:27:43.391 17:16:42 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.y8ZO252Pyz 00:27:43.391 17:16:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.y8ZO252Pyz 00:27:43.648 17:16:43 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:43.648 17:16:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:43.906 nvme0n1 00:27:43.906 17:16:43 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:27:43.906 17:16:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:27:44.164 17:16:43 keyring_file -- keyring/file.sh@112 -- # config='{ 00:27:44.164 "subsystems": [ 00:27:44.164 { 00:27:44.164 "subsystem": "keyring", 00:27:44.164 "config": [ 00:27:44.164 { 00:27:44.164 "method": "keyring_file_add_key", 00:27:44.164 "params": { 00:27:44.164 "name": "key0", 00:27:44.164 "path": "/tmp/tmp.ZnP7dh94p7" 00:27:44.164 } 00:27:44.164 }, 00:27:44.164 { 00:27:44.164 "method": "keyring_file_add_key", 00:27:44.164 "params": { 00:27:44.164 "name": "key1", 00:27:44.164 "path": "/tmp/tmp.y8ZO252Pyz" 00:27:44.164 } 00:27:44.164 } 00:27:44.164 ] 00:27:44.164 }, 00:27:44.164 { 00:27:44.164 "subsystem": "iobuf", 00:27:44.164 "config": [ 00:27:44.164 { 00:27:44.164 "method": "iobuf_set_options", 00:27:44.164 "params": { 00:27:44.164 "small_pool_count": 8192, 00:27:44.164 "large_pool_count": 1024, 00:27:44.164 "small_bufsize": 8192, 00:27:44.164 "large_bufsize": 135168 00:27:44.164 } 00:27:44.164 } 00:27:44.164 ] 00:27:44.164 }, 00:27:44.164 { 00:27:44.164 "subsystem": "sock", 00:27:44.164 "config": [ 00:27:44.164 { 00:27:44.164 "method": "sock_set_default_impl", 00:27:44.164 "params": { 00:27:44.164 "impl_name": "posix" 00:27:44.164 } 00:27:44.164 }, 00:27:44.164 { 00:27:44.164 "method": "sock_impl_set_options", 00:27:44.164 "params": { 00:27:44.164 "impl_name": "ssl", 00:27:44.164 "recv_buf_size": 4096, 00:27:44.164 "send_buf_size": 4096, 00:27:44.164 "enable_recv_pipe": true, 00:27:44.164 "enable_quickack": false, 00:27:44.164 "enable_placement_id": 0, 00:27:44.164 "enable_zerocopy_send_server": true, 00:27:44.164 "enable_zerocopy_send_client": false, 00:27:44.164 "zerocopy_threshold": 0, 00:27:44.164 "tls_version": 0, 00:27:44.164 "enable_ktls": false 00:27:44.164 } 00:27:44.164 }, 00:27:44.164 { 00:27:44.164 "method": "sock_impl_set_options", 00:27:44.164 "params": { 00:27:44.164 "impl_name": "posix", 00:27:44.164 "recv_buf_size": 2097152, 00:27:44.164 "send_buf_size": 2097152, 00:27:44.164 "enable_recv_pipe": true, 00:27:44.164 "enable_quickack": false, 00:27:44.164 "enable_placement_id": 0, 00:27:44.164 "enable_zerocopy_send_server": true, 00:27:44.164 "enable_zerocopy_send_client": false, 00:27:44.164 "zerocopy_threshold": 0, 00:27:44.164 "tls_version": 0, 00:27:44.164 "enable_ktls": false 00:27:44.164 } 00:27:44.164 } 00:27:44.164 ] 00:27:44.164 }, 00:27:44.164 { 00:27:44.164 "subsystem": "vmd", 00:27:44.164 "config": [] 00:27:44.164 }, 00:27:44.164 { 00:27:44.164 "subsystem": "accel", 00:27:44.164 "config": [ 00:27:44.164 { 00:27:44.164 "method": "accel_set_options", 00:27:44.164 "params": { 00:27:44.164 "small_cache_size": 128, 00:27:44.164 "large_cache_size": 16, 00:27:44.164 "task_count": 2048, 00:27:44.164 "sequence_count": 2048, 00:27:44.164 "buf_count": 2048 00:27:44.164 } 00:27:44.164 } 00:27:44.164 ] 00:27:44.164 
}, 00:27:44.164 { 00:27:44.164 "subsystem": "bdev", 00:27:44.164 "config": [ 00:27:44.164 { 00:27:44.164 "method": "bdev_set_options", 00:27:44.164 "params": { 00:27:44.164 "bdev_io_pool_size": 65535, 00:27:44.164 "bdev_io_cache_size": 256, 00:27:44.164 "bdev_auto_examine": true, 00:27:44.164 "iobuf_small_cache_size": 128, 00:27:44.164 "iobuf_large_cache_size": 16 00:27:44.164 } 00:27:44.164 }, 00:27:44.164 { 00:27:44.164 "method": "bdev_raid_set_options", 00:27:44.164 "params": { 00:27:44.164 "process_window_size_kb": 1024 00:27:44.164 } 00:27:44.164 }, 00:27:44.164 { 00:27:44.164 "method": "bdev_iscsi_set_options", 00:27:44.164 "params": { 00:27:44.164 "timeout_sec": 30 00:27:44.164 } 00:27:44.164 }, 00:27:44.164 { 00:27:44.164 "method": "bdev_nvme_set_options", 00:27:44.164 "params": { 00:27:44.164 "action_on_timeout": "none", 00:27:44.164 "timeout_us": 0, 00:27:44.164 "timeout_admin_us": 0, 00:27:44.165 "keep_alive_timeout_ms": 10000, 00:27:44.165 "arbitration_burst": 0, 00:27:44.165 "low_priority_weight": 0, 00:27:44.165 "medium_priority_weight": 0, 00:27:44.165 "high_priority_weight": 0, 00:27:44.165 "nvme_adminq_poll_period_us": 10000, 00:27:44.165 "nvme_ioq_poll_period_us": 0, 00:27:44.165 "io_queue_requests": 512, 00:27:44.165 "delay_cmd_submit": true, 00:27:44.165 "transport_retry_count": 4, 00:27:44.165 "bdev_retry_count": 3, 00:27:44.165 "transport_ack_timeout": 0, 00:27:44.165 "ctrlr_loss_timeout_sec": 0, 00:27:44.165 "reconnect_delay_sec": 0, 00:27:44.165 "fast_io_fail_timeout_sec": 0, 00:27:44.165 "disable_auto_failback": false, 00:27:44.165 "generate_uuids": false, 00:27:44.165 "transport_tos": 0, 00:27:44.165 "nvme_error_stat": false, 00:27:44.165 "rdma_srq_size": 0, 00:27:44.165 "io_path_stat": false, 00:27:44.165 "allow_accel_sequence": false, 00:27:44.165 "rdma_max_cq_size": 0, 00:27:44.165 "rdma_cm_event_timeout_ms": 0, 00:27:44.165 "dhchap_digests": [ 00:27:44.165 "sha256", 00:27:44.165 "sha384", 00:27:44.165 "sha512" 00:27:44.165 ], 00:27:44.165 "dhchap_dhgroups": [ 00:27:44.165 "null", 00:27:44.165 "ffdhe2048", 00:27:44.165 "ffdhe3072", 00:27:44.165 "ffdhe4096", 00:27:44.165 "ffdhe6144", 00:27:44.165 "ffdhe8192" 00:27:44.165 ] 00:27:44.165 } 00:27:44.165 }, 00:27:44.165 { 00:27:44.165 "method": "bdev_nvme_attach_controller", 00:27:44.165 "params": { 00:27:44.165 "name": "nvme0", 00:27:44.165 "trtype": "TCP", 00:27:44.165 "adrfam": "IPv4", 00:27:44.165 "traddr": "127.0.0.1", 00:27:44.165 "trsvcid": "4420", 00:27:44.165 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:44.165 "prchk_reftag": false, 00:27:44.165 "prchk_guard": false, 00:27:44.165 "ctrlr_loss_timeout_sec": 0, 00:27:44.165 "reconnect_delay_sec": 0, 00:27:44.165 "fast_io_fail_timeout_sec": 0, 00:27:44.165 "psk": "key0", 00:27:44.165 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:44.165 "hdgst": false, 00:27:44.165 "ddgst": false 00:27:44.165 } 00:27:44.165 }, 00:27:44.165 { 00:27:44.165 "method": "bdev_nvme_set_hotplug", 00:27:44.165 "params": { 00:27:44.165 "period_us": 100000, 00:27:44.165 "enable": false 00:27:44.165 } 00:27:44.165 }, 00:27:44.165 { 00:27:44.165 "method": "bdev_wait_for_examine" 00:27:44.165 } 00:27:44.165 ] 00:27:44.165 }, 00:27:44.165 { 00:27:44.165 "subsystem": "nbd", 00:27:44.165 "config": [] 00:27:44.165 } 00:27:44.165 ] 00:27:44.165 }' 00:27:44.165 17:16:43 keyring_file -- keyring/file.sh@114 -- # killprocess 1256044 00:27:44.165 17:16:43 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1256044 ']' 00:27:44.165 17:16:43 keyring_file -- common/autotest_common.sh@952 -- # kill 
-0 1256044 00:27:44.165 17:16:43 keyring_file -- common/autotest_common.sh@953 -- # uname 00:27:44.165 17:16:43 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:44.165 17:16:43 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1256044 00:27:44.165 17:16:43 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:44.165 17:16:43 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:44.165 17:16:43 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1256044' 00:27:44.165 killing process with pid 1256044 00:27:44.165 17:16:43 keyring_file -- common/autotest_common.sh@967 -- # kill 1256044 00:27:44.165 Received shutdown signal, test time was about 1.000000 seconds 00:27:44.165 00:27:44.165 Latency(us) 00:27:44.165 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:44.165 =================================================================================================================== 00:27:44.165 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:44.165 17:16:43 keyring_file -- common/autotest_common.sh@972 -- # wait 1256044 00:27:44.423 17:16:44 keyring_file -- keyring/file.sh@117 -- # bperfpid=1257501 00:27:44.423 17:16:44 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1257501 /var/tmp/bperf.sock 00:27:44.423 17:16:44 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1257501 ']' 00:27:44.423 17:16:44 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:44.423 17:16:44 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:27:44.423 17:16:44 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:44.423 17:16:44 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:44.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
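For reference (not part of the captured output): the save_config/killprocess pair above is how file.sh carries its keyring state across a bdevperf restart. The full subsystem configuration, including both keyring_file_add_key entries, is captured over the bperf RPC socket before the first instance is taken down. A sketch of the equivalent direct commands; bperf_cmd is rpc.py pointed at /var/tmp/bperf.sock:

    # Capture step at keyring/file.sh@112, then the kill/wait seen in the killprocess trace.
    config=$(scripts/rpc.py -s /var/tmp/bperf.sock save_config)   # the JSON dumped above
    kill "$bperfpid" && wait "$bperfpid"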
00:27:44.423 17:16:44 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:27:44.423 "subsystems": [ 00:27:44.423 { 00:27:44.423 "subsystem": "keyring", 00:27:44.423 "config": [ 00:27:44.423 { 00:27:44.423 "method": "keyring_file_add_key", 00:27:44.423 "params": { 00:27:44.423 "name": "key0", 00:27:44.423 "path": "/tmp/tmp.ZnP7dh94p7" 00:27:44.423 } 00:27:44.423 }, 00:27:44.423 { 00:27:44.423 "method": "keyring_file_add_key", 00:27:44.423 "params": { 00:27:44.423 "name": "key1", 00:27:44.423 "path": "/tmp/tmp.y8ZO252Pyz" 00:27:44.423 } 00:27:44.423 } 00:27:44.423 ] 00:27:44.423 }, 00:27:44.423 { 00:27:44.423 "subsystem": "iobuf", 00:27:44.423 "config": [ 00:27:44.423 { 00:27:44.423 "method": "iobuf_set_options", 00:27:44.423 "params": { 00:27:44.423 "small_pool_count": 8192, 00:27:44.423 "large_pool_count": 1024, 00:27:44.423 "small_bufsize": 8192, 00:27:44.423 "large_bufsize": 135168 00:27:44.423 } 00:27:44.423 } 00:27:44.424 ] 00:27:44.424 }, 00:27:44.424 { 00:27:44.424 "subsystem": "sock", 00:27:44.424 "config": [ 00:27:44.424 { 00:27:44.424 "method": "sock_set_default_impl", 00:27:44.424 "params": { 00:27:44.424 "impl_name": "posix" 00:27:44.424 } 00:27:44.424 }, 00:27:44.424 { 00:27:44.424 "method": "sock_impl_set_options", 00:27:44.424 "params": { 00:27:44.424 "impl_name": "ssl", 00:27:44.424 "recv_buf_size": 4096, 00:27:44.424 "send_buf_size": 4096, 00:27:44.424 "enable_recv_pipe": true, 00:27:44.424 "enable_quickack": false, 00:27:44.424 "enable_placement_id": 0, 00:27:44.424 "enable_zerocopy_send_server": true, 00:27:44.424 "enable_zerocopy_send_client": false, 00:27:44.424 "zerocopy_threshold": 0, 00:27:44.424 "tls_version": 0, 00:27:44.424 "enable_ktls": false 00:27:44.424 } 00:27:44.424 }, 00:27:44.424 { 00:27:44.424 "method": "sock_impl_set_options", 00:27:44.424 "params": { 00:27:44.424 "impl_name": "posix", 00:27:44.424 "recv_buf_size": 2097152, 00:27:44.424 "send_buf_size": 2097152, 00:27:44.424 "enable_recv_pipe": true, 00:27:44.424 "enable_quickack": false, 00:27:44.424 "enable_placement_id": 0, 00:27:44.424 "enable_zerocopy_send_server": true, 00:27:44.424 "enable_zerocopy_send_client": false, 00:27:44.424 "zerocopy_threshold": 0, 00:27:44.424 "tls_version": 0, 00:27:44.424 "enable_ktls": false 00:27:44.424 } 00:27:44.424 } 00:27:44.424 ] 00:27:44.424 }, 00:27:44.424 { 00:27:44.424 "subsystem": "vmd", 00:27:44.424 "config": [] 00:27:44.424 }, 00:27:44.424 { 00:27:44.424 "subsystem": "accel", 00:27:44.424 "config": [ 00:27:44.424 { 00:27:44.424 "method": "accel_set_options", 00:27:44.424 "params": { 00:27:44.424 "small_cache_size": 128, 00:27:44.424 "large_cache_size": 16, 00:27:44.424 "task_count": 2048, 00:27:44.424 "sequence_count": 2048, 00:27:44.424 "buf_count": 2048 00:27:44.424 } 00:27:44.424 } 00:27:44.424 ] 00:27:44.424 }, 00:27:44.424 { 00:27:44.424 "subsystem": "bdev", 00:27:44.424 "config": [ 00:27:44.424 { 00:27:44.424 "method": "bdev_set_options", 00:27:44.424 "params": { 00:27:44.424 "bdev_io_pool_size": 65535, 00:27:44.424 "bdev_io_cache_size": 256, 00:27:44.424 "bdev_auto_examine": true, 00:27:44.424 "iobuf_small_cache_size": 128, 00:27:44.424 "iobuf_large_cache_size": 16 00:27:44.424 } 00:27:44.424 }, 00:27:44.424 { 00:27:44.424 "method": "bdev_raid_set_options", 00:27:44.424 "params": { 00:27:44.424 "process_window_size_kb": 1024 00:27:44.424 } 00:27:44.424 }, 00:27:44.424 { 00:27:44.424 "method": "bdev_iscsi_set_options", 00:27:44.424 "params": { 00:27:44.424 "timeout_sec": 30 00:27:44.424 } 00:27:44.424 }, 00:27:44.424 { 00:27:44.424 "method": 
"bdev_nvme_set_options", 00:27:44.424 "params": { 00:27:44.424 "action_on_timeout": "none", 00:27:44.424 "timeout_us": 0, 00:27:44.424 "timeout_admin_us": 0, 00:27:44.424 "keep_alive_timeout_ms": 10000, 00:27:44.424 "arbitration_burst": 0, 00:27:44.424 "low_priority_weight": 0, 00:27:44.424 "medium_priority_weight": 0, 00:27:44.424 "high_priority_weight": 0, 00:27:44.424 "nvme_adminq_poll_period_us": 10000, 00:27:44.424 "nvme_ioq_poll_period_us": 0, 00:27:44.424 "io_queue_requests": 512, 00:27:44.424 "delay_cmd_submit": true, 00:27:44.424 "transport_retry_count": 4, 00:27:44.424 "bdev_retry_count": 3, 00:27:44.424 "transport_ack_timeout": 0, 00:27:44.424 "ctrlr_loss_timeout_sec": 0, 00:27:44.424 "reconnect_delay_sec": 0, 00:27:44.424 "fast_io_fail_timeout_sec": 0, 00:27:44.424 "disable_auto_failback": false, 00:27:44.424 "generate_uuids": false, 00:27:44.424 "transport_tos": 0, 00:27:44.424 "nvme_error_stat": false, 00:27:44.424 "rdma_srq_size": 0, 00:27:44.424 "io_path_stat": false, 00:27:44.424 "allow_accel_sequence": false, 00:27:44.424 "rdma_max_cq_size": 0, 00:27:44.424 "rdma_cm_event_timeout_ms": 0, 00:27:44.424 "dhchap_digests": [ 00:27:44.424 "sha256", 00:27:44.424 "sha384", 00:27:44.424 "sha512" 00:27:44.424 ], 00:27:44.424 "dhchap_dhgroups": [ 00:27:44.424 "null", 00:27:44.424 "ffdhe2048", 00:27:44.424 "ffdhe3072", 00:27:44.424 "ffdhe4096", 00:27:44.424 "ffdhe6144", 00:27:44.424 "ffdhe8192" 00:27:44.424 ] 00:27:44.424 } 00:27:44.424 }, 00:27:44.424 { 00:27:44.424 "method": "bdev_nvme_attach_controller", 00:27:44.424 "params": { 00:27:44.424 "name": "nvme0", 00:27:44.424 "trtype": "TCP", 00:27:44.424 "adrfam": "IPv4", 00:27:44.425 "traddr": "127.0.0.1", 00:27:44.425 "trsvcid": "4420", 00:27:44.425 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:44.425 "prchk_reftag": false, 00:27:44.425 "prchk_guard": false, 00:27:44.425 "ctrlr_loss_timeout_sec": 0, 00:27:44.425 "reconnect_delay_sec": 0, 00:27:44.425 "fast_io_fail_timeout_sec": 0, 00:27:44.425 "psk": "key0", 00:27:44.425 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:44.425 "hdgst": false, 00:27:44.425 "ddgst": false 00:27:44.425 } 00:27:44.425 }, 00:27:44.425 { 00:27:44.425 "method": "bdev_nvme_set_hotplug", 00:27:44.425 "params": { 00:27:44.425 "period_us": 100000, 00:27:44.425 "enable": false 00:27:44.425 } 00:27:44.425 }, 00:27:44.425 { 00:27:44.425 "method": "bdev_wait_for_examine" 00:27:44.425 } 00:27:44.425 ] 00:27:44.425 }, 00:27:44.425 { 00:27:44.425 "subsystem": "nbd", 00:27:44.425 "config": [] 00:27:44.425 } 00:27:44.425 ] 00:27:44.425 }' 00:27:44.425 17:16:44 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:44.425 17:16:44 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:44.683 [2024-07-12 17:16:44.122299] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
00:27:44.683 [2024-07-12 17:16:44.122389] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1257501 ] 00:27:44.683 EAL: No free 2048 kB hugepages reported on node 1 00:27:44.683 [2024-07-12 17:16:44.178000] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:44.683 [2024-07-12 17:16:44.283568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:44.959 [2024-07-12 17:16:44.471458] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:45.598 17:16:45 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:45.598 17:16:45 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:27:45.598 17:16:45 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:27:45.598 17:16:45 keyring_file -- keyring/file.sh@120 -- # jq length 00:27:45.598 17:16:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:45.856 17:16:45 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:27:45.856 17:16:45 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:27:45.856 17:16:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:45.856 17:16:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:45.856 17:16:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:45.856 17:16:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:45.856 17:16:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:46.114 17:16:45 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:27:46.114 17:16:45 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:27:46.114 17:16:45 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:46.114 17:16:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:46.114 17:16:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:46.114 17:16:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:46.114 17:16:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:46.373 17:16:45 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:27:46.373 17:16:45 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:27:46.373 17:16:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:27:46.373 17:16:45 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:27:46.373 17:16:46 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:27:46.373 17:16:46 keyring_file -- keyring/file.sh@1 -- # cleanup 00:27:46.373 17:16:46 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.ZnP7dh94p7 /tmp/tmp.y8ZO252Pyz 00:27:46.373 17:16:46 keyring_file -- keyring/file.sh@20 -- # killprocess 1257501 00:27:46.373 17:16:46 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1257501 ']' 00:27:46.373 17:16:46 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1257501 00:27:46.373 17:16:46 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:27:46.631 17:16:46 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:46.631 17:16:46 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1257501 00:27:46.631 17:16:46 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:46.631 17:16:46 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:46.631 17:16:46 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1257501' 00:27:46.631 killing process with pid 1257501 00:27:46.631 17:16:46 keyring_file -- common/autotest_common.sh@967 -- # kill 1257501 00:27:46.631 Received shutdown signal, test time was about 1.000000 seconds 00:27:46.631 00:27:46.631 Latency(us) 00:27:46.632 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:46.632 =================================================================================================================== 00:27:46.632 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:46.632 17:16:46 keyring_file -- common/autotest_common.sh@972 -- # wait 1257501 00:27:46.889 17:16:46 keyring_file -- keyring/file.sh@21 -- # killprocess 1256032 00:27:46.889 17:16:46 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1256032 ']' 00:27:46.889 17:16:46 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1256032 00:27:46.889 17:16:46 keyring_file -- common/autotest_common.sh@953 -- # uname 00:27:46.889 17:16:46 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:46.889 17:16:46 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1256032 00:27:46.889 17:16:46 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:46.889 17:16:46 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:46.889 17:16:46 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1256032' 00:27:46.889 killing process with pid 1256032 00:27:46.889 17:16:46 keyring_file -- common/autotest_common.sh@967 -- # kill 1256032 00:27:46.889 [2024-07-12 17:16:46.387939] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:46.889 17:16:46 keyring_file -- common/autotest_common.sh@972 -- # wait 1256032 00:27:47.147 00:27:47.147 real 0m14.206s 00:27:47.147 user 0m35.354s 00:27:47.147 sys 0m3.324s 00:27:47.147 17:16:46 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:47.147 17:16:46 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:47.147 ************************************ 00:27:47.147 END TEST keyring_file 00:27:47.147 ************************************ 00:27:47.405 17:16:46 -- common/autotest_common.sh@1142 -- # return 0 00:27:47.405 17:16:46 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:27:47.405 17:16:46 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:27:47.405 17:16:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:47.405 17:16:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:47.405 17:16:46 -- common/autotest_common.sh@10 -- # set +x 00:27:47.405 ************************************ 00:27:47.405 START TEST keyring_linux 00:27:47.405 ************************************ 00:27:47.405 17:16:46 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:27:47.405 * Looking for test storage... 00:27:47.405 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:27:47.405 17:16:46 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:27:47.405 17:16:46 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:47.405 17:16:46 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:27:47.405 17:16:46 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:47.405 17:16:46 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:47.405 17:16:46 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:47.405 17:16:46 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:47.405 17:16:46 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:47.405 17:16:46 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:47.405 17:16:46 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:47.405 17:16:46 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:47.405 17:16:46 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:47.405 17:16:46 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:47.405 17:16:46 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:47.405 17:16:46 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:27:47.405 17:16:46 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:47.405 17:16:46 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:47.405 17:16:46 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:47.405 17:16:46 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:47.405 17:16:46 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:47.405 17:16:46 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:47.405 17:16:46 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:47.405 17:16:46 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:47.405 17:16:46 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.405 17:16:46 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.405 17:16:46 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.405 17:16:46 keyring_linux -- paths/export.sh@5 -- # export PATH 00:27:47.405 17:16:46 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.405 17:16:46 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:27:47.405 17:16:46 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:47.405 17:16:46 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:47.405 17:16:46 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:47.405 17:16:46 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:47.405 17:16:46 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:47.405 17:16:46 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:47.405 17:16:46 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:47.405 17:16:46 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:47.405 17:16:46 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:27:47.405 17:16:46 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:27:47.405 17:16:46 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:27:47.405 17:16:46 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:27:47.405 17:16:46 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:27:47.405 17:16:46 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:27:47.405 17:16:46 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:27:47.405 17:16:46 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:27:47.405 17:16:46 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:27:47.405 17:16:46 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:47.405 17:16:46 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:27:47.405 17:16:46 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:27:47.405 17:16:46 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:47.405 17:16:46 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:47.405 17:16:46 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:27:47.405 17:16:46 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:47.405 17:16:46 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:47.405 17:16:46 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:27:47.405 17:16:46 keyring_linux -- nvmf/common.sh@705 -- # python - 00:27:47.405 17:16:46 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:27:47.405 17:16:46 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:27:47.405 /tmp/:spdk-test:key0 00:27:47.405 17:16:46 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:27:47.405 17:16:46 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:27:47.406 17:16:46 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:27:47.406 17:16:46 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:27:47.406 17:16:46 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:27:47.406 17:16:46 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:27:47.406 17:16:46 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:27:47.406 17:16:46 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:27:47.406 17:16:46 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:27:47.406 17:16:46 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:47.406 17:16:46 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:27:47.406 17:16:46 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:27:47.406 17:16:46 keyring_linux -- nvmf/common.sh@705 -- # python - 00:27:47.406 17:16:47 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:27:47.406 17:16:47 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:27:47.406 /tmp/:spdk-test:key1 00:27:47.406 17:16:47 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1257870 00:27:47.406 17:16:47 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:27:47.406 17:16:47 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1257870 00:27:47.406 17:16:47 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1257870 ']' 00:27:47.406 17:16:47 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:47.406 17:16:47 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:47.406 17:16:47 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:47.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:47.406 17:16:47 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:47.406 17:16:47 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:47.406 [2024-07-12 17:16:47.088820] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
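For the keyring_linux variant, prep_key first turns the raw hex keys (00112233445566778899aabbccddeeff and 112233445566778899aabbccddeeff00) into the NVMe TLS PSK interchange form NVMeTLSkey-1:00:<base64 key+CRC>: via the inline python helper in nvmf/common.sh, and the result ends up in a mode-0600 file under /tmp. A rough shell-level sketch of what the trace above amounts to (the interchange string is copied verbatim from the trace; the write into the file is not visible in the xtrace output and is assumed here):

  key_path=/tmp/:spdk-test:key0
  echo 'NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"   # assumed write
  chmod 0600 "$key_path"
  echo "$key_path"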
00:27:47.406 [2024-07-12 17:16:47.088904] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1257870 ] 00:27:47.663 EAL: No free 2048 kB hugepages reported on node 1 00:27:47.663 [2024-07-12 17:16:47.149354] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.663 [2024-07-12 17:16:47.268336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:47.921 17:16:47 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:47.921 17:16:47 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:27:47.921 17:16:47 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:27:47.921 17:16:47 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.921 17:16:47 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:47.921 [2024-07-12 17:16:47.536193] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:47.921 null0 00:27:47.921 [2024-07-12 17:16:47.568259] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:47.921 [2024-07-12 17:16:47.568768] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:47.921 17:16:47 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.921 17:16:47 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:27:47.921 336586485 00:27:47.921 17:16:47 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:27:47.921 743120757 00:27:47.921 17:16:47 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1257994 00:27:47.921 17:16:47 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1257994 /var/tmp/bperf.sock 00:27:47.921 17:16:47 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:27:47.921 17:16:47 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1257994 ']' 00:27:47.921 17:16:47 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:47.921 17:16:47 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:47.921 17:16:47 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:47.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:47.921 17:16:47 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:47.921 17:16:47 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:48.179 [2024-07-12 17:16:47.634938] Starting SPDK v24.09-pre git sha1 d4b4edb37 / DPDK 24.03.0 initialization... 
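Unlike keyring_file, the keys here never reach bdevperf as file paths: keyctl stores the interchange-format PSKs as user-type keys named :spdk-test:key0 and :spdk-test:key1 in the session keyring (serials 336586485 and 743120757 above), bdevperf is started with --wait-for-rpc, and the controller is attached by key name only after the Linux keyring module is enabled (traced below). A sketch of that sequence using the same commands as the trace:

  keyctl add user :spdk-test:key0 'NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' @s
  rpcpy=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpcpy -s /var/tmp/bperf.sock keyring_linux_set_options --enable
  $rpcpy -s /var/tmp/bperf.sock framework_start_init
  $rpcpy -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
  sn=$(keyctl search @s user :spdk-test:key0)   # resolve the kernel key serial for teardown
  keyctl unlink "$sn"                           # run during cleanup

The NOT wrapper later repeats the attach with --psk :spdk-test:key1 and treats the resulting Input/output error as the expected outcome of offering the wrong PSK.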
00:27:48.179 [2024-07-12 17:16:47.635008] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1257994 ] 00:27:48.179 EAL: No free 2048 kB hugepages reported on node 1 00:27:48.179 [2024-07-12 17:16:47.692202] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.179 [2024-07-12 17:16:47.803192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:48.179 17:16:47 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:48.179 17:16:47 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:27:48.179 17:16:47 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:27:48.179 17:16:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:27:48.436 17:16:48 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:27:48.437 17:16:48 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:49.001 17:16:48 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:27:49.001 17:16:48 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:27:49.001 [2024-07-12 17:16:48.637948] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:49.259 nvme0n1 00:27:49.259 17:16:48 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:27:49.259 17:16:48 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:27:49.259 17:16:48 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:27:49.259 17:16:48 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:27:49.259 17:16:48 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:27:49.259 17:16:48 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:49.516 17:16:48 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:27:49.516 17:16:48 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:27:49.516 17:16:48 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:27:49.516 17:16:48 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:27:49.516 17:16:48 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:49.516 17:16:48 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:49.516 17:16:48 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:27:49.773 17:16:49 keyring_linux -- keyring/linux.sh@25 -- # sn=336586485 00:27:49.773 17:16:49 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:27:49.773 17:16:49 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:27:49.773 17:16:49 keyring_linux -- keyring/linux.sh@26 -- # [[ 336586485 == \3\3\6\5\8\6\4\8\5 ]] 00:27:49.773 17:16:49 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 336586485 00:27:49.774 17:16:49 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:27:49.774 17:16:49 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:49.774 Running I/O for 1 seconds... 00:27:50.706 00:27:50.706 Latency(us) 00:27:50.706 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:50.706 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:50.706 nvme0n1 : 1.01 10252.49 40.05 0.00 0.00 12400.53 5873.97 19029.71 00:27:50.706 =================================================================================================================== 00:27:50.706 Total : 10252.49 40.05 0.00 0.00 12400.53 5873.97 19029.71 00:27:50.706 0 00:27:50.706 17:16:50 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:50.706 17:16:50 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:50.963 17:16:50 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:27:50.963 17:16:50 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:27:50.963 17:16:50 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:27:50.963 17:16:50 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:27:50.963 17:16:50 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:27:50.963 17:16:50 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:51.221 17:16:50 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:27:51.221 17:16:50 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:27:51.221 17:16:50 keyring_linux -- keyring/linux.sh@23 -- # return 00:27:51.221 17:16:50 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:51.221 17:16:50 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:27:51.221 17:16:50 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:51.221 17:16:50 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:51.221 17:16:50 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:51.221 17:16:50 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:51.221 17:16:50 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:51.221 17:16:50 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:51.221 17:16:50 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:51.478 [2024-07-12 17:16:51.102555] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:51.478 [2024-07-12 17:16:51.103200] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2108780 (107): Transport endpoint is not connected 00:27:51.478 [2024-07-12 17:16:51.104188] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2108780 (9): Bad file descriptor 00:27:51.478 [2024-07-12 17:16:51.105187] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:51.478 [2024-07-12 17:16:51.105205] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:27:51.478 [2024-07-12 17:16:51.105234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:51.478 request: 00:27:51.478 { 00:27:51.478 "name": "nvme0", 00:27:51.478 "trtype": "tcp", 00:27:51.478 "traddr": "127.0.0.1", 00:27:51.478 "adrfam": "ipv4", 00:27:51.478 "trsvcid": "4420", 00:27:51.478 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:51.478 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:51.478 "prchk_reftag": false, 00:27:51.478 "prchk_guard": false, 00:27:51.478 "hdgst": false, 00:27:51.478 "ddgst": false, 00:27:51.478 "psk": ":spdk-test:key1", 00:27:51.478 "method": "bdev_nvme_attach_controller", 00:27:51.478 "req_id": 1 00:27:51.478 } 00:27:51.478 Got JSON-RPC error response 00:27:51.478 response: 00:27:51.478 { 00:27:51.478 "code": -5, 00:27:51.478 "message": "Input/output error" 00:27:51.478 } 00:27:51.478 17:16:51 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:27:51.478 17:16:51 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:51.478 17:16:51 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:51.478 17:16:51 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:51.478 17:16:51 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:27:51.478 17:16:51 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:27:51.478 17:16:51 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:27:51.478 17:16:51 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:27:51.478 17:16:51 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:27:51.478 17:16:51 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:27:51.478 17:16:51 keyring_linux -- keyring/linux.sh@33 -- # sn=336586485 00:27:51.478 17:16:51 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 336586485 00:27:51.478 1 links removed 00:27:51.478 17:16:51 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:27:51.478 17:16:51 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:27:51.478 17:16:51 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:27:51.478 17:16:51 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:27:51.478 17:16:51 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:27:51.478 17:16:51 keyring_linux -- keyring/linux.sh@33 -- # sn=743120757 00:27:51.478 
17:16:51 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 743120757 00:27:51.478 1 links removed 00:27:51.478 17:16:51 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1257994 00:27:51.478 17:16:51 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1257994 ']' 00:27:51.478 17:16:51 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1257994 00:27:51.478 17:16:51 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:27:51.478 17:16:51 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:51.478 17:16:51 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1257994 00:27:51.478 17:16:51 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:51.478 17:16:51 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:51.478 17:16:51 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1257994' 00:27:51.478 killing process with pid 1257994 00:27:51.478 17:16:51 keyring_linux -- common/autotest_common.sh@967 -- # kill 1257994 00:27:51.478 Received shutdown signal, test time was about 1.000000 seconds 00:27:51.478 00:27:51.478 Latency(us) 00:27:51.478 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:51.478 =================================================================================================================== 00:27:51.478 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:51.478 17:16:51 keyring_linux -- common/autotest_common.sh@972 -- # wait 1257994 00:27:51.735 17:16:51 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1257870 00:27:51.735 17:16:51 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1257870 ']' 00:27:51.735 17:16:51 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1257870 00:27:51.992 17:16:51 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:27:51.992 17:16:51 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:51.992 17:16:51 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1257870 00:27:51.992 17:16:51 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:51.992 17:16:51 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:51.992 17:16:51 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1257870' 00:27:51.992 killing process with pid 1257870 00:27:51.992 17:16:51 keyring_linux -- common/autotest_common.sh@967 -- # kill 1257870 00:27:51.992 17:16:51 keyring_linux -- common/autotest_common.sh@972 -- # wait 1257870 00:27:52.249 00:27:52.249 real 0m5.003s 00:27:52.249 user 0m9.589s 00:27:52.249 sys 0m1.657s 00:27:52.249 17:16:51 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:52.249 17:16:51 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:52.249 ************************************ 00:27:52.249 END TEST keyring_linux 00:27:52.249 ************************************ 00:27:52.249 17:16:51 -- common/autotest_common.sh@1142 -- # return 0 00:27:52.249 17:16:51 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:27:52.249 17:16:51 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:27:52.249 17:16:51 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:27:52.249 17:16:51 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:27:52.249 17:16:51 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:27:52.249 17:16:51 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:27:52.249 17:16:51 -- 
spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:27:52.249 17:16:51 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:27:52.249 17:16:51 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:27:52.249 17:16:51 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:27:52.249 17:16:51 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:27:52.249 17:16:51 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:27:52.249 17:16:51 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:27:52.249 17:16:51 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:27:52.249 17:16:51 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:27:52.249 17:16:51 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:27:52.249 17:16:51 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:27:52.249 17:16:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:52.249 17:16:51 -- common/autotest_common.sh@10 -- # set +x 00:27:52.249 17:16:51 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:27:52.249 17:16:51 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:27:52.249 17:16:51 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:27:52.250 17:16:51 -- common/autotest_common.sh@10 -- # set +x 00:27:54.150 INFO: APP EXITING 00:27:54.150 INFO: killing all VMs 00:27:54.150 INFO: killing vhost app 00:27:54.150 INFO: EXIT DONE 00:27:55.523 0000:82:00.0 (8086 0a54): Already using the nvme driver 00:27:55.523 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:27:55.523 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:27:55.523 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:27:55.523 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:27:55.523 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:27:55.523 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:27:55.523 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:27:55.523 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:27:55.523 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:27:55.523 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:27:55.523 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:27:55.523 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:27:55.523 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:27:55.523 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:27:55.523 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:27:55.523 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:27:56.897 Cleaning 00:27:56.897 Removing: /var/run/dpdk/spdk0/config 00:27:56.897 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:56.897 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:56.897 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:56.897 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:56.897 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:27:56.897 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:27:56.897 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:27:56.897 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:27:56.897 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:56.897 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:56.897 Removing: /var/run/dpdk/spdk1/config 00:27:56.897 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:27:56.897 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:27:56.897 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:27:56.897 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:27:56.897 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:27:56.897 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:27:56.897 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:27:56.897 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:27:56.897 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:27:56.897 Removing: /var/run/dpdk/spdk1/hugepage_info 00:27:56.897 Removing: /var/run/dpdk/spdk1/mp_socket 00:27:56.897 Removing: /var/run/dpdk/spdk2/config 00:27:56.897 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:27:56.897 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:27:56.897 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:27:56.897 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:27:56.897 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:27:56.897 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:27:56.897 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:27:56.897 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:27:56.897 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:27:56.897 Removing: /var/run/dpdk/spdk2/hugepage_info 00:27:56.897 Removing: /var/run/dpdk/spdk3/config 00:27:56.897 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:27:56.897 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:27:56.897 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:27:56.897 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:27:56.897 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:27:56.897 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:27:56.897 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:27:56.897 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:27:56.897 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:27:56.897 Removing: /var/run/dpdk/spdk3/hugepage_info 00:27:56.897 Removing: /var/run/dpdk/spdk4/config 00:27:56.897 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:27:56.897 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:27:56.897 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:27:56.897 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:27:56.897 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:27:56.897 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:27:56.897 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:27:56.897 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:27:56.897 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:27:56.897 Removing: /var/run/dpdk/spdk4/hugepage_info 00:27:56.897 Removing: /dev/shm/bdev_svc_trace.1 00:27:56.897 Removing: /dev/shm/nvmf_trace.0 00:27:56.897 Removing: /dev/shm/spdk_tgt_trace.pid998351 00:27:56.897 Removing: /var/run/dpdk/spdk0 00:27:56.897 Removing: /var/run/dpdk/spdk1 00:27:56.897 Removing: /var/run/dpdk/spdk2 00:27:56.897 Removing: /var/run/dpdk/spdk3 00:27:56.897 Removing: /var/run/dpdk/spdk4 00:27:56.897 Removing: /var/run/dpdk/spdk_pid1000347 00:27:56.897 Removing: /var/run/dpdk/spdk_pid1000363 00:27:56.897 Removing: /var/run/dpdk/spdk_pid1000610 00:27:56.897 Removing: /var/run/dpdk/spdk_pid1001918 00:27:56.897 Removing: /var/run/dpdk/spdk_pid1002838 00:27:56.897 Removing: /var/run/dpdk/spdk_pid1003148 00:27:56.897 Removing: /var/run/dpdk/spdk_pid1003334 00:27:56.897 Removing: /var/run/dpdk/spdk_pid1003540 00:27:56.897 Removing: /var/run/dpdk/spdk_pid1003728 00:27:56.897 Removing: /var/run/dpdk/spdk_pid1003890 00:27:56.897 Removing: 
/var/run/dpdk/spdk_pid1004046 00:27:56.897 Removing: /var/run/dpdk/spdk_pid1004275 00:27:56.897 Removing: /var/run/dpdk/spdk_pid1004542 00:27:56.897 Removing: /var/run/dpdk/spdk_pid1006955 00:27:56.897 Removing: /var/run/dpdk/spdk_pid1007169 00:27:56.897 Removing: /var/run/dpdk/spdk_pid1007354 00:27:56.897 Removing: /var/run/dpdk/spdk_pid1007464 00:27:56.897 Removing: /var/run/dpdk/spdk_pid1007823 00:27:56.897 Removing: /var/run/dpdk/spdk_pid1007994 00:27:56.897 Removing: /var/run/dpdk/spdk_pid1008714 00:27:56.897 Removing: /var/run/dpdk/spdk_pid1008838 00:27:56.897 Removing: /var/run/dpdk/spdk_pid1009012 00:27:56.897 Removing: /var/run/dpdk/spdk_pid1009138 00:27:56.897 Removing: /var/run/dpdk/spdk_pid1009300 00:27:56.897 Removing: /var/run/dpdk/spdk_pid1009314 00:27:56.897 Removing: /var/run/dpdk/spdk_pid1009715 00:27:56.897 Removing: /var/run/dpdk/spdk_pid1009953 00:27:56.897 Removing: /var/run/dpdk/spdk_pid1010154 00:27:56.897 Removing: /var/run/dpdk/spdk_pid1010324 00:27:56.897 Removing: /var/run/dpdk/spdk_pid1010351 00:27:56.897 Removing: /var/run/dpdk/spdk_pid1010531 00:27:56.897 Removing: /var/run/dpdk/spdk_pid1010689 00:27:56.897 Removing: /var/run/dpdk/spdk_pid1010851 00:27:56.897 Removing: /var/run/dpdk/spdk_pid1011117 00:27:56.897 Removing: /var/run/dpdk/spdk_pid1011281 00:27:56.897 Removing: /var/run/dpdk/spdk_pid1011440 00:27:56.897 Removing: /var/run/dpdk/spdk_pid1011714 00:27:56.897 Removing: /var/run/dpdk/spdk_pid1011869 00:27:56.897 Removing: /var/run/dpdk/spdk_pid1012031 00:27:56.897 Removing: /var/run/dpdk/spdk_pid1012291 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1012460 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1012616 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1012781 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1013046 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1013203 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1013366 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1013634 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1013794 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1013957 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1014232 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1014386 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1014515 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1014733 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1016733 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1042466 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1045207 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1052708 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1056012 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1058288 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1058794 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1062665 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1066517 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1066520 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1067176 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1067732 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1068381 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1068777 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1068906 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1069047 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1069174 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1069184 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1069839 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1070382 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1071036 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1071446 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1071449 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1071709 00:27:57.154 Removing: 
/var/run/dpdk/spdk_pid1072589 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1073314 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1079308 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1079586 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1082231 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1085953 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1088004 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1094418 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1099656 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1100910 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1101612 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1112008 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1114745 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1139422 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1142220 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1143351 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1144597 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1144739 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1144874 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1145014 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1145448 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1146647 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1147369 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1147795 00:27:57.154 Removing: /var/run/dpdk/spdk_pid1149413 00:27:57.155 Removing: /var/run/dpdk/spdk_pid1149839 00:27:57.155 Removing: /var/run/dpdk/spdk_pid1150279 00:27:57.155 Removing: /var/run/dpdk/spdk_pid1152817 00:27:57.155 Removing: /var/run/dpdk/spdk_pid1158906 00:27:57.155 Removing: /var/run/dpdk/spdk_pid1161684 00:27:57.155 Removing: /var/run/dpdk/spdk_pid1165471 00:27:57.155 Removing: /var/run/dpdk/spdk_pid1166420 00:27:57.155 Removing: /var/run/dpdk/spdk_pid1167620 00:27:57.155 Removing: /var/run/dpdk/spdk_pid1170678 00:27:57.155 Removing: /var/run/dpdk/spdk_pid1173060 00:27:57.155 Removing: /var/run/dpdk/spdk_pid1177300 00:27:57.155 Removing: /var/run/dpdk/spdk_pid1177417 00:27:57.155 Removing: /var/run/dpdk/spdk_pid1180208 00:27:57.155 Removing: /var/run/dpdk/spdk_pid1180344 00:27:57.155 Removing: /var/run/dpdk/spdk_pid1180486 00:27:57.155 Removing: /var/run/dpdk/spdk_pid1180747 00:27:57.155 Removing: /var/run/dpdk/spdk_pid1180869 00:27:57.155 Removing: /var/run/dpdk/spdk_pid1183523 00:27:57.155 Removing: /var/run/dpdk/spdk_pid1183864 00:27:57.155 Removing: /var/run/dpdk/spdk_pid1186528 00:27:57.155 Removing: /var/run/dpdk/spdk_pid1188505 00:27:57.155 Removing: /var/run/dpdk/spdk_pid1191935 00:27:57.155 Removing: /var/run/dpdk/spdk_pid1195419 00:27:57.155 Removing: /var/run/dpdk/spdk_pid1201914 00:27:57.155 Removing: /var/run/dpdk/spdk_pid1206510 00:27:57.155 Removing: /var/run/dpdk/spdk_pid1206515 00:27:57.155 Removing: /var/run/dpdk/spdk_pid1219284 00:27:57.155 Removing: /var/run/dpdk/spdk_pid1219734 00:27:57.155 Removing: /var/run/dpdk/spdk_pid1220239 00:27:57.155 Removing: /var/run/dpdk/spdk_pid1220644 00:27:57.155 Removing: /var/run/dpdk/spdk_pid1221228 00:27:57.155 Removing: /var/run/dpdk/spdk_pid1221634 00:27:57.155 Removing: /var/run/dpdk/spdk_pid1222044 00:27:57.155 Removing: /var/run/dpdk/spdk_pid1222538 00:27:57.155 Removing: /var/run/dpdk/spdk_pid1224958 00:27:57.155 Removing: /var/run/dpdk/spdk_pid1225222 00:27:57.155 Removing: /var/run/dpdk/spdk_pid1229027 00:27:57.412 Removing: /var/run/dpdk/spdk_pid1229214 00:27:57.412 Removing: /var/run/dpdk/spdk_pid1230820 00:27:57.412 Removing: /var/run/dpdk/spdk_pid1235875 00:27:57.412 Removing: /var/run/dpdk/spdk_pid1235880 00:27:57.412 Removing: /var/run/dpdk/spdk_pid1238800 00:27:57.412 Removing: 
/var/run/dpdk/spdk_pid1240320 00:27:57.412 Removing: /var/run/dpdk/spdk_pid1242229 00:27:57.412 Removing: /var/run/dpdk/spdk_pid1242969 00:27:57.412 Removing: /var/run/dpdk/spdk_pid1244374 00:27:57.412 Removing: /var/run/dpdk/spdk_pid1245253 00:27:57.412 Removing: /var/run/dpdk/spdk_pid1250651 00:27:57.412 Removing: /var/run/dpdk/spdk_pid1250939 00:27:57.412 Removing: /var/run/dpdk/spdk_pid1251337 00:27:57.412 Removing: /var/run/dpdk/spdk_pid1252897 00:27:57.412 Removing: /var/run/dpdk/spdk_pid1253290 00:27:57.412 Removing: /var/run/dpdk/spdk_pid1253573 00:27:57.412 Removing: /var/run/dpdk/spdk_pid1256032 00:27:57.412 Removing: /var/run/dpdk/spdk_pid1256044 00:27:57.412 Removing: /var/run/dpdk/spdk_pid1257501 00:27:57.412 Removing: /var/run/dpdk/spdk_pid1257870 00:27:57.412 Removing: /var/run/dpdk/spdk_pid1257994 00:27:57.412 Removing: /var/run/dpdk/spdk_pid996801 00:27:57.412 Removing: /var/run/dpdk/spdk_pid997541 00:27:57.412 Removing: /var/run/dpdk/spdk_pid998351 00:27:57.412 Removing: /var/run/dpdk/spdk_pid998790 00:27:57.412 Removing: /var/run/dpdk/spdk_pid999493 00:27:57.412 Removing: /var/run/dpdk/spdk_pid999631 00:27:57.412 Clean 00:27:57.412 17:16:56 -- common/autotest_common.sh@1451 -- # return 0 00:27:57.412 17:16:56 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:27:57.412 17:16:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:57.412 17:16:56 -- common/autotest_common.sh@10 -- # set +x 00:27:57.412 17:16:57 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:27:57.412 17:16:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:57.412 17:16:57 -- common/autotest_common.sh@10 -- # set +x 00:27:57.412 17:16:57 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:27:57.412 17:16:57 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:27:57.412 17:16:57 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:27:57.412 17:16:57 -- spdk/autotest.sh@391 -- # hash lcov 00:27:57.412 17:16:57 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:27:57.412 17:16:57 -- spdk/autotest.sh@393 -- # hostname 00:27:57.412 17:16:57 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:27:57.668 geninfo: WARNING: invalid characters removed from testname! 
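After cleanup, autotest.sh post-processes code coverage: it captures the post-test counters, merges them with the pre-test baseline, and strips out paths that are not SPDK's own code. The lcov invocations traced around this point (capture above, merge and path filters just below) condense to roughly the following sketch, with the rc option list shortened and $spdk_repo standing for the checked-out /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk tree:

  rc='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  lcov $rc --no-external -q -c -d "$spdk_repo" -t "$(hostname)" -o cov_test.info   # capture after the run
  lcov $rc -q -a cov_base.info -a cov_test.info -o cov_total.info                  # merge with baseline
  for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov $rc -q -r cov_total.info "$pat" -o cov_total.info                       # drop non-SPDK paths
  done
  rm -f cov_base.info cov_test.info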
00:28:29.715 17:17:25 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:29.715 17:17:29 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:32.990 17:17:32 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:35.519 17:17:35 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:38.853 17:17:38 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:41.378 17:17:40 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:44.657 17:17:43 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:28:44.657 17:17:43 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:44.657 17:17:43 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:28:44.657 17:17:43 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:44.658 17:17:43 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:44.658 17:17:43 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.658 17:17:43 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.658 17:17:43 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.658 17:17:43 -- paths/export.sh@5 -- $ export PATH 00:28:44.658 17:17:43 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.658 17:17:43 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:28:44.658 17:17:43 -- common/autobuild_common.sh@444 -- $ date +%s 00:28:44.658 17:17:43 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720797463.XXXXXX 00:28:44.658 17:17:43 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720797463.Q5F6r0 00:28:44.658 17:17:43 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:28:44.658 17:17:43 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:28:44.658 17:17:43 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:28:44.658 17:17:43 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:28:44.658 17:17:43 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:28:44.658 17:17:43 -- common/autobuild_common.sh@460 -- $ get_config_params 00:28:44.658 17:17:43 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:28:44.658 17:17:43 -- common/autotest_common.sh@10 -- $ set +x 00:28:44.658 17:17:44 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:28:44.658 17:17:44 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:28:44.658 17:17:44 -- pm/common@17 -- $ local monitor 00:28:44.658 17:17:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:44.658 17:17:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:44.658 17:17:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:44.658 17:17:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:44.658 17:17:44 -- pm/common@21 -- $ date +%s 00:28:44.658 17:17:44 -- pm/common@21 -- $ date +%s 00:28:44.658 
17:17:44 -- pm/common@25 -- $ sleep 1 00:28:44.658 17:17:44 -- pm/common@21 -- $ date +%s 00:28:44.658 17:17:44 -- pm/common@21 -- $ date +%s 00:28:44.658 17:17:44 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720797464 00:28:44.658 17:17:44 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720797464 00:28:44.658 17:17:44 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720797464 00:28:44.658 17:17:44 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720797464 00:28:44.658 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720797464_collect-vmstat.pm.log 00:28:44.658 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720797464_collect-cpu-load.pm.log 00:28:44.658 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720797464_collect-cpu-temp.pm.log 00:28:44.658 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720797464_collect-bmc-pm.bmc.pm.log 00:28:45.598 17:17:45 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:28:45.598 17:17:45 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:28:45.598 17:17:45 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:45.598 17:17:45 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:28:45.598 17:17:45 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:28:45.598 17:17:45 -- spdk/autopackage.sh@19 -- $ timing_finish 00:28:45.598 17:17:45 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:28:45.598 17:17:45 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:28:45.598 17:17:45 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:28:45.598 17:17:45 -- spdk/autopackage.sh@20 -- $ exit 0 00:28:45.598 17:17:45 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:28:45.598 17:17:45 -- pm/common@29 -- $ signal_monitor_resources TERM 00:28:45.598 17:17:45 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:28:45.598 17:17:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:45.598 17:17:45 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:28:45.598 17:17:45 -- pm/common@44 -- $ pid=1267609 00:28:45.598 17:17:45 -- pm/common@50 -- $ kill -TERM 1267609 00:28:45.598 17:17:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:45.598 17:17:45 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:28:45.598 17:17:45 -- pm/common@44 -- $ pid=1267611 00:28:45.598 17:17:45 -- pm/common@50 -- $ kill 
-TERM 1267611 00:28:45.598 17:17:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:45.598 17:17:45 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:28:45.598 17:17:45 -- pm/common@44 -- $ pid=1267613 00:28:45.598 17:17:45 -- pm/common@50 -- $ kill -TERM 1267613 00:28:45.598 17:17:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:45.598 17:17:45 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:28:45.598 17:17:45 -- pm/common@44 -- $ pid=1267638 00:28:45.598 17:17:45 -- pm/common@50 -- $ sudo -E kill -TERM 1267638 00:28:45.598 + [[ -n 912893 ]] 00:28:45.598 + sudo kill 912893 00:28:45.609 [Pipeline] } 00:28:45.627 [Pipeline] // stage 00:28:45.633 [Pipeline] } 00:28:45.650 [Pipeline] // timeout 00:28:45.656 [Pipeline] } 00:28:45.673 [Pipeline] // catchError 00:28:45.678 [Pipeline] } 00:28:45.697 [Pipeline] // wrap 00:28:45.701 [Pipeline] } 00:28:45.712 [Pipeline] // catchError 00:28:45.719 [Pipeline] stage 00:28:45.721 [Pipeline] { (Epilogue) 00:28:45.732 [Pipeline] catchError 00:28:45.734 [Pipeline] { 00:28:45.745 [Pipeline] echo 00:28:45.747 Cleanup processes 00:28:45.753 [Pipeline] sh 00:28:46.040 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:46.040 1267741 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:28:46.040 1267874 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:46.056 [Pipeline] sh 00:28:46.345 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:46.345 ++ grep -v 'sudo pgrep' 00:28:46.345 ++ awk '{print $1}' 00:28:46.345 + sudo kill -9 1267741 00:28:46.358 [Pipeline] sh 00:28:46.645 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:28:54.806 [Pipeline] sh 00:28:55.093 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:28:55.093 Artifacts sizes are good 00:28:55.107 [Pipeline] archiveArtifacts 00:28:55.113 Archiving artifacts 00:28:55.349 [Pipeline] sh 00:28:55.658 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:28:55.671 [Pipeline] cleanWs 00:28:55.680 [WS-CLEANUP] Deleting project workspace... 00:28:55.680 [WS-CLEANUP] Deferred wipeout is used... 00:28:55.687 [WS-CLEANUP] done 00:28:55.688 [Pipeline] } 00:28:55.706 [Pipeline] // catchError 00:28:55.719 [Pipeline] sh 00:28:56.001 + logger -p user.info -t JENKINS-CI 00:28:56.008 [Pipeline] } 00:28:56.024 [Pipeline] // stage 00:28:56.029 [Pipeline] } 00:28:56.046 [Pipeline] // node 00:28:56.053 [Pipeline] End of Pipeline 00:28:56.194 Finished: SUCCESS